00:00:00.000 Started by upstream project "autotest-per-patch" build number 126244
00:00:00.000 originally caused by:
00:00:00.000 Started by user sys_sgci
00:00:00.016 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy
00:00:00.017 The recommended git tool is: git
00:00:00.017 using credential 00000000-0000-0000-0000-000000000002
00:00:00.018 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.030 Fetching changes from the remote Git repository
00:00:00.034 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.051 Using shallow fetch with depth 1
00:00:00.051 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.051 > git --version # timeout=10
00:00:00.071 > git --version # 'git version 2.39.2'
00:00:00.071 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.089 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.089 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:02.554 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:02.565 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:02.576 Checking out Revision 7caca6989ac753a10259529aadac5754060382af (FETCH_HEAD)
00:00:02.576 > git config core.sparsecheckout # timeout=10
00:00:02.586 > git read-tree -mu HEAD # timeout=10
00:00:02.601 > git checkout -f 7caca6989ac753a10259529aadac5754060382af # timeout=5
00:00:02.619 Commit message: "jenkins/jjb-config: Purge centos leftovers"
00:00:02.619 > git rev-list --no-walk 7caca6989ac753a10259529aadac5754060382af # timeout=10
00:00:02.719 [Pipeline] Start of Pipeline
00:00:02.734 [Pipeline] library
00:00:02.736 Loading library shm_lib@master
00:00:02.736 Library shm_lib@master is cached. Copying from home.
00:00:02.755 [Pipeline] node
00:00:02.762 Running on WFP8 in /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:02.764 [Pipeline] {
00:00:02.775 [Pipeline] catchError
00:00:02.776 [Pipeline] {
00:00:02.788 [Pipeline] wrap
00:00:02.796 [Pipeline] {
00:00:02.802 [Pipeline] stage
00:00:02.803 [Pipeline] { (Prologue)
00:00:03.001 [Pipeline] sh
00:00:03.278 + logger -p user.info -t JENKINS-CI
00:00:03.295 [Pipeline] echo
00:00:03.297 Node: WFP8
00:00:03.303 [Pipeline] sh
00:00:03.597 [Pipeline] setCustomBuildProperty
00:00:03.612 [Pipeline] echo
00:00:03.614 Cleanup processes
00:00:03.620 [Pipeline] sh
00:00:03.901 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:03.901 3906559 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:03.916 [Pipeline] sh
00:00:04.196 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:04.196 ++ grep -v 'sudo pgrep'
00:00:04.196 ++ awk '{print $1}'
00:00:04.196 + sudo kill -9
00:00:04.196 + true
00:00:04.210 [Pipeline] cleanWs
00:00:04.220 [WS-CLEANUP] Deleting project workspace...
00:00:04.220 [WS-CLEANUP] Deferred wipeout is used...
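Note: the "Cleanup processes" step above is a stock shell idiom: list any SPDK processes left over from a previous run with pgrep -af (which matches against the full command line), drop the pgrep invocation itself from the listing, and force-kill whatever remains. A minimal sketch of the pattern; the ws variable is illustrative, not part of the job script:

    # Kill leftover processes whose command line mentions the SPDK checkout.
    ws=/var/jenkins/workspace/nvmf-tcp-phy-autotest
    sudo kill -9 $(sudo pgrep -af "$ws/spdk" | grep -v 'sudo pgrep' | awk '{print $1}') || true

In the trace above the filtered list came back empty (pgrep matched only itself), so kill -9 ran with no PIDs and exited non-zero; the trailing "+ true" is that failure being swallowed so the stage stays green.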
00:00:04.226 [WS-CLEANUP] done
00:00:04.230 [Pipeline] setCustomBuildProperty
00:00:04.245 [Pipeline] sh
00:00:04.526 + sudo git config --global --replace-all safe.directory '*'
00:00:04.612 [Pipeline] httpRequest
00:00:04.640 [Pipeline] echo
00:00:04.641 Sorcerer 10.211.164.101 is alive
00:00:04.647 [Pipeline] httpRequest
00:00:04.650 HttpMethod: GET
00:00:04.651 URL: http://10.211.164.101/packages/jbp_7caca6989ac753a10259529aadac5754060382af.tar.gz
00:00:04.651 Sending request to url: http://10.211.164.101/packages/jbp_7caca6989ac753a10259529aadac5754060382af.tar.gz
00:00:04.672 Response Code: HTTP/1.1 200 OK
00:00:04.673 Success: Status code 200 is in the accepted range: 200,404
00:00:04.674 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_7caca6989ac753a10259529aadac5754060382af.tar.gz
00:00:09.007 [Pipeline] sh
00:00:09.288 + tar --no-same-owner -xf jbp_7caca6989ac753a10259529aadac5754060382af.tar.gz
00:00:09.305 [Pipeline] httpRequest
00:00:09.329 [Pipeline] echo
00:00:09.331 Sorcerer 10.211.164.101 is alive
00:00:09.337 [Pipeline] httpRequest
00:00:09.341 HttpMethod: GET
00:00:09.342 URL: http://10.211.164.101/packages/spdk_f8598a71feda976fd71f88dd27285aed90c31ff9.tar.gz
00:00:09.342 Sending request to url: http://10.211.164.101/packages/spdk_f8598a71feda976fd71f88dd27285aed90c31ff9.tar.gz
00:00:09.358 Response Code: HTTP/1.1 200 OK
00:00:09.359 Success: Status code 200 is in the accepted range: 200,404
00:00:09.359 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_f8598a71feda976fd71f88dd27285aed90c31ff9.tar.gz
00:00:51.151 [Pipeline] sh
00:00:51.435 + tar --no-same-owner -xf spdk_f8598a71feda976fd71f88dd27285aed90c31ff9.tar.gz
00:00:53.982 [Pipeline] sh
00:00:54.265 + git -C spdk log --oneline -n5
00:00:54.265 f8598a71f bdev/uring: use util functions in bdev_uring_check_zoned_support
00:00:54.265 4903ec649 ublk: use spdk_read_sysfs_attribute_uint32 to get max ublks
00:00:54.265 94c9ab717 util: add spdk_read_sysfs_attribute_uint32
00:00:54.265 a940d3681 util: add spdk_read_sysfs_attribute
00:00:54.265 f604975ba doc: fix deprecation.md typo
00:00:54.277 [Pipeline] }
00:00:54.294 [Pipeline] // stage
00:00:54.303 [Pipeline] stage
00:00:54.305 [Pipeline] { (Prepare)
00:00:54.324 [Pipeline] writeFile
00:00:54.340 [Pipeline] sh
00:00:54.622 + logger -p user.info -t JENKINS-CI
00:00:54.635 [Pipeline] sh
00:00:54.918 + logger -p user.info -t JENKINS-CI
00:00:54.929 [Pipeline] sh
00:00:55.211 + cat autorun-spdk.conf
00:00:55.211 SPDK_RUN_FUNCTIONAL_TEST=1
00:00:55.211 SPDK_TEST_NVMF=1
00:00:55.211 SPDK_TEST_NVME_CLI=1
00:00:55.211 SPDK_TEST_NVMF_TRANSPORT=tcp
00:00:55.211 SPDK_TEST_NVMF_NICS=e810
00:00:55.211 SPDK_TEST_VFIOUSER=1
00:00:55.211 SPDK_RUN_UBSAN=1
00:00:55.211 NET_TYPE=phy
00:00:55.218 RUN_NIGHTLY=0
00:00:55.223 [Pipeline] readFile
00:00:55.251 [Pipeline] withEnv
00:00:55.253 [Pipeline] {
00:00:55.267 [Pipeline] sh
00:00:55.551 + set -ex
00:00:55.551 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]]
00:00:55.551 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:00:55.551 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:00:55.551 ++ SPDK_TEST_NVMF=1
00:00:55.551 ++ SPDK_TEST_NVME_CLI=1
00:00:55.551 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:00:55.551 ++ SPDK_TEST_NVMF_NICS=e810
00:00:55.551 ++ SPDK_TEST_VFIOUSER=1
00:00:55.551 ++ SPDK_RUN_UBSAN=1
00:00:55.551 ++ NET_TYPE=phy
00:00:55.551 ++ RUN_NIGHTLY=0
00:00:55.551 + case $SPDK_TEST_NVMF_NICS in
00:00:55.551 + DRIVERS=ice
00:00:55.551 + [[ tcp == \r\d\m\a ]]
00:00:55.551 + [[ -n ice ]]
00:00:55.551 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4
00:00:55.551 rmmod: ERROR: Module mlx4_ib is not currently loaded
00:00:55.551 rmmod: ERROR: Module mlx5_ib is not currently loaded
00:00:55.551 rmmod: ERROR: Module irdma is not currently loaded
00:00:55.551 rmmod: ERROR: Module i40iw is not currently loaded
00:00:55.551 rmmod: ERROR: Module iw_cxgb4 is not currently loaded
00:00:55.551 + true
00:00:55.551 + for D in $DRIVERS
00:00:55.551 + sudo modprobe ice
00:00:55.551 + exit 0
00:00:55.561 [Pipeline] }
00:00:55.580 [Pipeline] // withEnv
00:00:55.586 [Pipeline] }
00:00:55.605 [Pipeline] // stage
00:00:55.615 [Pipeline] catchError
00:00:55.617 [Pipeline] {
00:00:55.630 [Pipeline] timeout
00:00:55.631 Timeout set to expire in 50 min
00:00:55.632 [Pipeline] {
00:00:55.648 [Pipeline] stage
00:00:55.650 [Pipeline] { (Tests)
00:00:55.696 [Pipeline] sh
00:00:55.980 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:55.980 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:55.980 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:55.980 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]]
00:00:55.980 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:55.980 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:00:55.980 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]]
00:00:55.980 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:00:55.980 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:00:55.980 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:00:55.980 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]]
00:00:55.980 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:55.980 + source /etc/os-release
00:00:55.980 ++ NAME='Fedora Linux'
00:00:55.980 ++ VERSION='38 (Cloud Edition)'
00:00:55.980 ++ ID=fedora
00:00:55.980 ++ VERSION_ID=38
00:00:55.980 ++ VERSION_CODENAME=
00:00:55.980 ++ PLATFORM_ID=platform:f38
00:00:55.980 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)'
00:00:55.980 ++ ANSI_COLOR='0;38;2;60;110;180'
00:00:55.980 ++ LOGO=fedora-logo-icon
00:00:55.980 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38
00:00:55.980 ++ HOME_URL=https://fedoraproject.org/
00:00:55.980 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/
00:00:55.980 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:00:55.980 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:00:55.980 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:00:55.980 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38
00:00:55.980 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:00:55.980 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38
00:00:55.980 ++ SUPPORT_END=2024-05-14
00:00:55.980 ++ VARIANT='Cloud Edition'
00:00:55.980 ++ VARIANT_ID=cloud
00:00:55.980 + uname -a
00:00:55.980 Linux spdk-wfp-08 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux
00:00:55.980 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status
00:00:58.515 Hugepages
00:00:58.515 node hugesize free / total
00:00:58.515 node0 1048576kB 0 / 0
00:00:58.515 node0 2048kB 0 / 0
00:00:58.515 node1 1048576kB 0 / 0
00:00:58.515 node1 2048kB 0 / 0
00:00:58.515
00:00:58.515 Type BDF Vendor Device NUMA Driver Device Block devices
00:00:58.515 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - -
00:00:58.515 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - -
00:00:58.515 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - -
00:00:58.515 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - -
00:00:58.515 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - -
00:00:58.515 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - -
00:00:58.515 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - -
00:00:58.515 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - -
00:00:58.515 NVMe 0000:5e:00.0 8086 0a54 0 nvme nvme0 nvme0n1
00:00:58.515 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - -
00:00:58.515 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - -
00:00:58.515 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - -
00:00:58.515 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - -
00:00:58.515 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - -
00:00:58.515 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - -
00:00:58.515 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - -
00:00:58.515 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - -
00:00:58.515 + rm -f /tmp/spdk-ld-path
00:00:58.515 + source autorun-spdk.conf
00:00:58.515 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:00:58.515 ++ SPDK_TEST_NVMF=1
00:00:58.515 ++ SPDK_TEST_NVME_CLI=1
00:00:58.515 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:00:58.515 ++ SPDK_TEST_NVMF_NICS=e810
00:00:58.515 ++ SPDK_TEST_VFIOUSER=1
00:00:58.515 ++ SPDK_RUN_UBSAN=1
00:00:58.515 ++ NET_TYPE=phy
00:00:58.515 ++ RUN_NIGHTLY=0
00:00:58.515 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:00:58.515 + [[ -n '' ]]
00:00:58.515 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:58.515 + for M in /var/spdk/build-*-manifest.txt
00:00:58.515 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:00:58.515 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:00:58.515 + for M in /var/spdk/build-*-manifest.txt
00:00:58.515 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:00:58.515 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:00:58.515 ++ uname
00:00:58.515 + [[ Linux == \L\i\n\u\x ]]
00:00:58.515 + sudo dmesg -T
00:00:58.515 + sudo dmesg --clear
00:00:58.515 + dmesg_pid=3907481
00:00:58.515 + [[ Fedora Linux == FreeBSD ]]
00:00:58.515 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:00:58.515 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:00:58.515 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:00:58.515 + [[ -x /usr/src/fio-static/fio ]]
00:00:58.515 + export FIO_BIN=/usr/src/fio-static/fio
00:00:58.515 + FIO_BIN=/usr/src/fio-static/fio
00:00:58.515 + sudo dmesg -Tw
00:00:58.515 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]]
00:00:58.515 + [[ ! -v VFIO_QEMU_BIN ]]
00:00:58.515 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:00:58.515 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:00:58.515 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:00:58.515 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:00:58.515 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:00:58.515 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:00:58.515 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:00:58.515 Test configuration:
00:00:58.515 SPDK_RUN_FUNCTIONAL_TEST=1
00:00:58.515 SPDK_TEST_NVMF=1
00:00:58.515 SPDK_TEST_NVME_CLI=1
00:00:58.515 SPDK_TEST_NVMF_TRANSPORT=tcp
00:00:58.515 SPDK_TEST_NVMF_NICS=e810
00:00:58.515 SPDK_TEST_VFIOUSER=1
00:00:58.515 SPDK_RUN_UBSAN=1
00:00:58.515 NET_TYPE=phy
00:00:58.515 RUN_NIGHTLY=0
22:17:22 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
22:17:22 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]]
22:17:22 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
22:17:22 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
22:17:22 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
22:17:22 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
22:17:22 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
22:17:22 -- paths/export.sh@5 -- $ export PATH
22:17:22 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
22:17:22 -- common/autobuild_common.sh@443 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
22:17:22 -- common/autobuild_common.sh@444 -- $ date +%s
22:17:22 -- common/autobuild_common.sh@444 -- $ mktemp -dt spdk_1721074642.XXXXXX
22:17:22 -- common/autobuild_common.sh@444 -- $ SPDK_WORKSPACE=/tmp/spdk_1721074642.YL97ir
22:17:22 -- common/autobuild_common.sh@446 -- $ [[ -n '' ]]
22:17:22 -- common/autobuild_common.sh@450 -- $ '[' -n '' ']'
22:17:22 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/'
22:17:22 -- common/autobuild_common.sh@457 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp'
22:17:22 -- common/autobuild_common.sh@459 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
22:17:22 -- common/autobuild_common.sh@460 -- $ get_config_params
22:17:22 -- common/autotest_common.sh@396 -- $ xtrace_disable
22:17:22 -- common/autotest_common.sh@10 -- $ set +x
22:17:22 -- common/autobuild_common.sh@460 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user'
22:17:22 -- common/autobuild_common.sh@462 -- $ start_monitor_resources
22:17:22 -- pm/common@17 -- $ local monitor
22:17:22 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
22:17:22 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
22:17:22 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
22:17:22 -- pm/common@21 -- $ date +%s
22:17:22 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
22:17:22 -- pm/common@21 -- $ date +%s
22:17:22 -- pm/common@25 -- $ sleep 1
22:17:22 -- pm/common@21 -- $ date +%s
22:17:22 -- pm/common@21 -- $ date +%s
22:17:22 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721074642
22:17:22 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721074642
22:17:22 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721074642
22:17:22 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721074642
Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721074642_collect-vmstat.pm.log
Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721074642_collect-cpu-load.pm.log
Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721074642_collect-cpu-temp.pm.log
Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721074642_collect-bmc-pm.bmc.pm.log
00:00:59.451 22:17:23 -- common/autobuild_common.sh@463 -- $ trap stop_monitor_resources EXIT
00:00:59.451 22:17:23 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
22:17:23 -- spdk/autobuild.sh@12 -- $ umask 022
22:17:23 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
22:17:23 -- spdk/autobuild.sh@16 -- $ date -u
00:00:59.451 Mon Jul 15 08:17:23 PM UTC 2024
22:17:23 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:00:59.451 v24.09-pre-214-gf8598a71f
22:17:23 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']'
22:17:23 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
22:17:23 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
22:17:23 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']'
22:17:23 -- common/autotest_common.sh@1105 -- $ xtrace_disable
22:17:23 -- common/autotest_common.sh@10 -- $ set +x
00:00:59.451 ************************************
00:00:59.451 START TEST ubsan
00:00:59.451 ************************************
22:17:23 ubsan -- common/autotest_common.sh@1123 -- $ echo 'using ubsan'
00:00:59.451 using ubsan
00:00:59.451
00:00:59.451 real 0m0.000s
00:00:59.451 user 0m0.000s
00:00:59.451 sys 0m0.000s
22:17:23 ubsan -- common/autotest_common.sh@1124 -- $ xtrace_disable
22:17:23 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:00:59.451 ************************************
00:00:59.451 END TEST ubsan
00:00:59.451 ************************************
22:17:23 -- common/autotest_common.sh@1142 -- $ return 0
22:17:23 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
22:17:23 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
22:17:23 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
22:17:23 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
22:17:23 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
22:17:23 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
22:17:23 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
22:17:23 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
22:17:23 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared
00:00:59.710 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk
00:00:59.710 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build
00:00:59.968 Using 'verbs' RDMA provider
00:01:12.740 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done.
00:01:22.722 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done.
00:01:23.290 Creating mk/config.mk...done.
00:01:23.290 Creating mk/cc.flags.mk...done.
00:01:23.290 Type 'make' to build.
00:01:23.290 22:17:46 -- spdk/autobuild.sh@69 -- $ run_test make make -j96
00:01:23.290 22:17:46 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']'
22:17:46 -- common/autotest_common.sh@1105 -- $ xtrace_disable
22:17:46 -- common/autotest_common.sh@10 -- $ set +x
00:01:23.290 ************************************
00:01:23.290 START TEST make
00:01:23.290 ************************************
22:17:46 make -- common/autotest_common.sh@1123 -- $ make -j96
00:01:23.547 make[1]: Nothing to be done for 'all'.
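Note: the configure invocation at spdk/autobuild.sh@67 above is the whole recipe for reproducing this build outside Jenkins; a sketch using the flag set recorded in the trace (paths such as --with-fio=/usr/src/fio are specific to this CI node, and the -j96 in the log presumably matches its hardware thread count, so scale -j to your own machine):

    cd spdk
    ./configure --enable-debug --enable-werror --with-rdma --with-idxd \
        --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests \
        --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared
    make -j"$(nproc)"

--enable-ubsan is SPDK_RUN_UBSAN=1 from autorun-spdk.conf taking effect, and --with-vfio-user is what pulls the libvfio-user Meson build below into the run.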
00:01:24.931 The Meson build system
00:01:24.931 Version: 1.3.1
00:01:24.931 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user
00:01:24.931 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:01:24.931 Build type: native build
00:01:24.931 Project name: libvfio-user
00:01:24.931 Project version: 0.0.1
00:01:24.931 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)")
00:01:24.931 C linker for the host machine: cc ld.bfd 2.39-16
00:01:24.931 Host machine cpu family: x86_64
00:01:24.931 Host machine cpu: x86_64
00:01:24.931 Run-time dependency threads found: YES
00:01:24.931 Library dl found: YES
00:01:24.931 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0
00:01:24.931 Run-time dependency json-c found: YES 0.17
00:01:24.931 Run-time dependency cmocka found: YES 1.1.7
00:01:24.931 Program pytest-3 found: NO
00:01:24.931 Program flake8 found: NO
00:01:24.931 Program misspell-fixer found: NO
00:01:24.931 Program restructuredtext-lint found: NO
00:01:24.931 Program valgrind found: YES (/usr/bin/valgrind)
00:01:24.931 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:01:24.931 Compiler for C supports arguments -Wmissing-declarations: YES
00:01:24.931 Compiler for C supports arguments -Wwrite-strings: YES
00:01:24.931 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:01:24.931 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh)
00:01:24.931 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh)
00:01:24.931 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
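Note: Meson has just finished probing the toolchain and dependencies for the bundled libvfio-user. Judging from the source and build directories above and the user-defined options echoed in the summary that follows, the equivalent standalone invocation would be roughly (a sketch, not lifted from the job scripts):

    meson setup build/libvfio-user/build-debug libvfio-user \
        --buildtype debug --default-library shared --libdir /usr/local/lib

json-c and cmocka were found, so configuration can proceed; pytest-3, flake8 and the other NO entries only disable optional test tooling, which is also what the two exclude_suites warnings refer to.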
00:01:24.931 Build targets in project: 8
00:01:24.931 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions:
00:01:24.931 * 0.57.0: {'exclude_suites arg in add_test_setup'}
00:01:24.931
00:01:24.931 libvfio-user 0.0.1
00:01:24.931
00:01:24.931 User defined options
00:01:24.931 buildtype : debug
00:01:24.931 default_library: shared
00:01:24.931 libdir : /usr/local/lib
00:01:24.931
00:01:24.931 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:01:25.188 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug'
00:01:25.188 [1/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o
[2/37] Compiling C object samples/lspci.p/lspci.c.o
[3/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o
[4/37] Compiling C object samples/null.p/null.c.o
[5/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o
[6/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o
[7/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o
[8/37] Compiling C object samples/client.p/.._lib_tran.c.o
[9/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o
[10/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o
[11/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o
[12/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o
[13/37] Compiling C object samples/client.p/.._lib_migration.c.o
[14/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o
[15/37] Compiling C object test/unit_tests.p/mocks.c.o
[16/37] Compiling C object samples/server.p/server.c.o
[17/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o
[18/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o
00:01:25.445 [19/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o
[20/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o
[21/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o
[22/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o
[23/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o
[24/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o
[25/37] Compiling C object test/unit_tests.p/unit-tests.c.o
[26/37] Compiling C object samples/client.p/client.c.o
[27/37] Linking target samples/client
[28/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o
[29/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o
[30/37] Linking target lib/libvfio-user.so.0.0.1
[31/37] Linking target test/unit_tests
00:01:25.702 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols
[33/37] Linking target samples/gpio-pci-idio-16
[34/37] Linking target samples/null
[35/37] Linking target samples/lspci
[36/37] Linking target samples/shadow_ioeventfd_server
[37/37] Linking target samples/server
00:01:25.702 INFO: autodetecting backend as ninja
00:01:25.702 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
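Note: with all 37 targets linked, the next step stages the result into the SPDK tree instead of the live root filesystem. DESTDIR re-roots every install path, which combined with libdir=/usr/local/lib drops the shared object under spdk/build/libvfio-user/usr/local/lib. The pattern, as a sketch of the command that follows:

    DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user \
        meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug

The immediate "ninja: no work to do." confirms the preceding build already produced everything the install step needed.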
00:01:25.702 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:01:25.960 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug'
00:01:25.960 ninja: no work to do.
00:01:31.306 The Meson build system
00:01:31.306 Version: 1.3.1
00:01:31.306 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk
00:01:31.306 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp
00:01:31.306 Build type: native build
00:01:31.306 Program cat found: YES (/usr/bin/cat)
00:01:31.306 Project name: DPDK
00:01:31.306 Project version: 24.03.0
00:01:31.306 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)")
00:01:31.306 C linker for the host machine: cc ld.bfd 2.39-16
00:01:31.306 Host machine cpu family: x86_64
00:01:31.306 Host machine cpu: x86_64
00:01:31.306 Message: ## Building in Developer Mode ##
00:01:31.306 Program pkg-config found: YES (/usr/bin/pkg-config)
00:01:31.306 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh)
00:01:31.306 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:01:31.306 Program python3 found: YES (/usr/bin/python3)
00:01:31.306 Program cat found: YES (/usr/bin/cat)
00:01:31.306 Compiler for C supports arguments -march=native: YES
00:01:31.306 Checking for size of "void *" : 8
00:01:31.306 Checking for size of "void *" : 8 (cached)
00:01:31.306 Compiler for C supports link arguments -Wl,--undefined-version: NO
00:01:31.306 Library m found: YES
00:01:31.306 Library numa found: YES
00:01:31.306 Has header "numaif.h" : YES
00:01:31.306 Library fdt found: NO
00:01:31.306 Library execinfo found: NO
00:01:31.306 Has header "execinfo.h" : YES
00:01:31.306 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0
00:01:31.306 Run-time dependency libarchive found: NO (tried pkgconfig)
00:01:31.306 Run-time dependency libbsd found: NO (tried pkgconfig)
00:01:31.306 Run-time dependency jansson found: NO (tried pkgconfig)
00:01:31.306 Run-time dependency openssl found: YES 3.0.9
00:01:31.306 Run-time dependency libpcap found: YES 1.10.4
00:01:31.306 Has header "pcap.h" with dependency libpcap: YES
00:01:31.306 Compiler for C supports arguments -Wcast-qual: YES
00:01:31.306 Compiler for C supports arguments -Wdeprecated: YES
00:01:31.306 Compiler for C supports arguments -Wformat: YES
00:01:31.306 Compiler for C supports arguments -Wformat-nonliteral: NO
00:01:31.306 Compiler for C supports arguments -Wformat-security: NO
00:01:31.306 Compiler for C supports arguments -Wmissing-declarations: YES
00:01:31.306 Compiler for C supports arguments -Wmissing-prototypes: YES
00:01:31.306 Compiler for C supports arguments -Wnested-externs: YES
00:01:31.306 Compiler for C supports arguments -Wold-style-definition: YES
00:01:31.306 Compiler for C supports arguments -Wpointer-arith: YES
00:01:31.306 Compiler for C supports arguments -Wsign-compare: YES
00:01:31.306 Compiler for C supports arguments -Wstrict-prototypes: YES
00:01:31.306 Compiler for C supports arguments -Wundef: YES
00:01:31.306 Compiler for C supports arguments -Wwrite-strings: YES
00:01:31.306 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:01:31.306 Compiler for C supports arguments -Wno-packed-not-aligned: YES
00:01:31.306 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:01:31.306 Compiler for C supports arguments -Wno-zero-length-bounds: YES
00:01:31.306 Program objdump found: YES (/usr/bin/objdump)
00:01:31.306 Compiler for C supports arguments -mavx512f: YES
00:01:31.306 Checking if "AVX512 checking" compiles: YES
00:01:31.306 Fetching value of define "__SSE4_2__" : 1
00:01:31.306 Fetching value of define "__AES__" : 1
00:01:31.306 Fetching value of define "__AVX__" : 1
00:01:31.306 Fetching value of define "__AVX2__" : 1
00:01:31.306 Fetching value of define "__AVX512BW__" : 1
00:01:31.306 Fetching value of define "__AVX512CD__" : 1
00:01:31.306 Fetching value of define "__AVX512DQ__" : 1
00:01:31.306 Fetching value of define "__AVX512F__" : 1
00:01:31.306 Fetching value of define "__AVX512VL__" : 1
00:01:31.306 Fetching value of define "__PCLMUL__" : 1
00:01:31.306 Fetching value of define "__RDRND__" : 1
00:01:31.306 Fetching value of define "__RDSEED__" : 1
00:01:31.306 Fetching value of define "__VPCLMULQDQ__" : (undefined)
00:01:31.306 Fetching value of define "__znver1__" : (undefined)
00:01:31.306 Fetching value of define "__znver2__" : (undefined)
00:01:31.306 Fetching value of define "__znver3__" : (undefined)
00:01:31.306 Fetching value of define "__znver4__" : (undefined)
00:01:31.306 Compiler for C supports arguments -Wno-format-truncation: YES
00:01:31.306 Message: lib/log: Defining dependency "log"
00:01:31.306 Message: lib/kvargs: Defining dependency "kvargs"
00:01:31.306 Message: lib/telemetry: Defining dependency "telemetry"
00:01:31.306 Checking for function "getentropy" : NO
00:01:31.306 Message: lib/eal: Defining dependency "eal"
00:01:31.306 Message: lib/ring: Defining dependency "ring"
00:01:31.306 Message: lib/rcu: Defining dependency "rcu"
00:01:31.306 Message: lib/mempool: Defining dependency "mempool"
00:01:31.306 Message: lib/mbuf: Defining dependency "mbuf"
00:01:31.306 Fetching value of define "__PCLMUL__" : 1 (cached)
00:01:31.306 Fetching value of define "__AVX512F__" : 1 (cached)
00:01:31.306 Fetching value of define "__AVX512BW__" : 1 (cached)
00:01:31.306 Fetching value of define "__AVX512DQ__" : 1 (cached)
00:01:31.306 Fetching value of define "__AVX512VL__" : 1 (cached)
00:01:31.306 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached)
00:01:31.306 Compiler for C supports arguments -mpclmul: YES
00:01:31.306 Compiler for C supports arguments -maes: YES
00:01:31.306 Compiler for C supports arguments -mavx512f: YES (cached)
00:01:31.306 Compiler for C supports arguments -mavx512bw: YES
00:01:31.306 Compiler for C supports arguments -mavx512dq: YES
00:01:31.306 Compiler for C supports arguments -mavx512vl: YES
00:01:31.306 Compiler for C supports arguments -mvpclmulqdq: YES
00:01:31.306 Compiler for C supports arguments -mavx2: YES
00:01:31.306 Compiler for C supports arguments -mavx: YES
00:01:31.306 Message: lib/net: Defining dependency "net"
00:01:31.306 Message: lib/meter: Defining dependency "meter"
00:01:31.306 Message: lib/ethdev: Defining dependency "ethdev"
00:01:31.306 Message: lib/pci: Defining dependency "pci"
00:01:31.306 Message: lib/cmdline: Defining dependency "cmdline"
00:01:31.306 Message: lib/hash: Defining dependency "hash"
00:01:31.306 Message: lib/timer: Defining dependency "timer"
00:01:31.306 Message: lib/compressdev: Defining dependency "compressdev"
00:01:31.306 Message: lib/cryptodev: Defining dependency "cryptodev"
00:01:31.306 Message: lib/dmadev: Defining dependency "dmadev"
00:01:31.306 Compiler for C supports arguments -Wno-cast-qual: YES
00:01:31.306 Message: lib/power: Defining dependency "power"
00:01:31.306 Message: lib/reorder: Defining dependency "reorder"
00:01:31.306 Message: lib/security: Defining dependency "security"
00:01:31.306 Has header "linux/userfaultfd.h" : YES
00:01:31.306 Has header "linux/vduse.h" : YES
00:01:31.306 Message: lib/vhost: Defining dependency "vhost"
00:01:31.306 Compiler for C supports arguments -Wno-format-truncation: YES (cached)
00:01:31.306 Message: drivers/bus/pci: Defining dependency "bus_pci"
00:01:31.306 Message: drivers/bus/vdev: Defining dependency "bus_vdev"
00:01:31.306 Message: drivers/mempool/ring: Defining dependency "mempool_ring"
00:01:31.306 Message: Disabling raw/* drivers: missing internal dependency "rawdev"
00:01:31.306 Message: Disabling regex/* drivers: missing internal dependency "regexdev"
00:01:31.306 Message: Disabling ml/* drivers: missing internal dependency "mldev"
00:01:31.306 Message: Disabling event/* drivers: missing internal dependency "eventdev"
00:01:31.306 Message: Disabling baseband/* drivers: missing internal dependency "bbdev"
00:01:31.306 Message: Disabling gpu/* drivers: missing internal dependency "gpudev"
00:01:31.306 Program doxygen found: YES (/usr/bin/doxygen)
00:01:31.306 Configuring doxy-api-html.conf using configuration
00:01:31.306 Configuring doxy-api-man.conf using configuration
00:01:31.306 Program mandb found: YES (/usr/bin/mandb)
00:01:31.306 Program sphinx-build found: NO
00:01:31.306 Configuring rte_build_config.h using configuration
00:01:31.306 Message:
00:01:31.306 =================
00:01:31.306 Applications Enabled
00:01:31.306 =================
00:01:31.306
00:01:31.306 apps:
00:01:31.306
00:01:31.306
00:01:31.306 Message:
00:01:31.306 =================
00:01:31.306 Libraries Enabled
00:01:31.306 =================
00:01:31.306
00:01:31.306 libs:
00:01:31.306 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf,
00:01:31.306 net, meter, ethdev, pci, cmdline, hash, timer, compressdev,
00:01:31.306 cryptodev, dmadev, power, reorder, security, vhost,
00:01:31.306
00:01:31.306 Message:
00:01:31.306 ===============
00:01:31.306 Drivers Enabled
00:01:31.306 ===============
00:01:31.306
00:01:31.306 common:
00:01:31.306
00:01:31.306 bus:
00:01:31.306 pci, vdev,
00:01:31.306 mempool:
00:01:31.306 ring,
00:01:31.306 dma:
00:01:31.307
00:01:31.307 net:
00:01:31.307
00:01:31.307 crypto:
00:01:31.307
00:01:31.307 compress:
00:01:31.307
00:01:31.307 vdpa:
00:01:31.307
00:01:31.307
00:01:31.307 Message:
00:01:31.307 =================
00:01:31.307 Content Skipped
00:01:31.307 =================
00:01:31.307
00:01:31.307 apps:
00:01:31.307 dumpcap: explicitly disabled via build config
00:01:31.307 graph: explicitly disabled via build config
00:01:31.307 pdump: explicitly disabled via build config
00:01:31.307 proc-info: explicitly disabled via build config
00:01:31.307 test-acl: explicitly disabled via build config
00:01:31.307 test-bbdev: explicitly disabled via build config
00:01:31.307 test-cmdline: explicitly disabled via build config
00:01:31.307 test-compress-perf: explicitly disabled via build config
00:01:31.307 test-crypto-perf: explicitly disabled via build config
00:01:31.307 test-dma-perf: explicitly disabled via build config
00:01:31.307 test-eventdev: explicitly disabled via build config
00:01:31.307 test-fib: explicitly disabled via build config
00:01:31.307 test-flow-perf: explicitly disabled via build config
00:01:31.307 test-gpudev: explicitly disabled via build config
00:01:31.307 test-mldev: explicitly disabled via build config
00:01:31.307 test-pipeline: explicitly disabled via build config
00:01:31.307 test-pmd: explicitly disabled via build config
00:01:31.307 test-regex: explicitly disabled via build config
00:01:31.307 test-sad: explicitly disabled via build config
00:01:31.307 test-security-perf: explicitly disabled via build config
00:01:31.307
00:01:31.307 libs:
00:01:31.307 argparse: explicitly disabled via build config
00:01:31.307 metrics: explicitly disabled via build config
00:01:31.307 acl: explicitly disabled via build config
00:01:31.307 bbdev: explicitly disabled via build config
00:01:31.307 bitratestats: explicitly disabled via build config
00:01:31.307 bpf: explicitly disabled via build config
00:01:31.307 cfgfile: explicitly disabled via build config
00:01:31.307 distributor: explicitly disabled via build config
00:01:31.307 efd: explicitly disabled via build config
00:01:31.307 eventdev: explicitly disabled via build config
00:01:31.307 dispatcher: explicitly disabled via build config
00:01:31.307 gpudev: explicitly disabled via build config
00:01:31.307 gro: explicitly disabled via build config
00:01:31.307 gso: explicitly disabled via build config
00:01:31.307 ip_frag: explicitly disabled via build config
00:01:31.307 jobstats: explicitly disabled via build config
00:01:31.307 latencystats: explicitly disabled via build config
00:01:31.307 lpm: explicitly disabled via build config
00:01:31.307 member: explicitly disabled via build config
00:01:31.307 pcapng: explicitly disabled via build config
00:01:31.307 rawdev: explicitly disabled via build config
00:01:31.307 regexdev: explicitly disabled via build config
00:01:31.307 mldev: explicitly disabled via build config
00:01:31.307 rib: explicitly disabled via build config
00:01:31.307 sched: explicitly disabled via build config
00:01:31.307 stack: explicitly disabled via build config
00:01:31.307 ipsec: explicitly disabled via build config
00:01:31.307 pdcp: explicitly disabled via build config
00:01:31.307 fib: explicitly disabled via build config
00:01:31.307 port: explicitly disabled via build config
00:01:31.307 pdump: explicitly disabled via build config
00:01:31.307 table: explicitly disabled via build config
00:01:31.307 pipeline: explicitly disabled via build config
00:01:31.307 graph: explicitly disabled via build config
00:01:31.307 node: explicitly disabled via build config
00:01:31.307
00:01:31.307 drivers:
00:01:31.307 common/cpt: not in enabled drivers build config
00:01:31.307 common/dpaax: not in enabled drivers build config
00:01:31.307 common/iavf: not in enabled drivers build config
00:01:31.307 common/idpf: not in enabled drivers build config
00:01:31.307 common/ionic: not in enabled drivers build config
00:01:31.307 common/mvep: not in enabled drivers build config
00:01:31.307 common/octeontx: not in enabled drivers build config
00:01:31.307 bus/auxiliary: not in enabled drivers build config
00:01:31.307 bus/cdx: not in enabled drivers build config
00:01:31.307 bus/dpaa: not in enabled drivers build config
00:01:31.307 bus/fslmc: not in enabled drivers build config
00:01:31.307 bus/ifpga: not in enabled drivers build config
00:01:31.307 bus/platform: not in enabled drivers build config
00:01:31.307 bus/uacce: not in enabled drivers build config
00:01:31.307 bus/vmbus: not in enabled drivers build config
00:01:31.307 common/cnxk: not in enabled drivers build config
00:01:31.307 common/mlx5: not in enabled drivers build config
00:01:31.307 common/nfp: not in enabled drivers build config
00:01:31.307 common/nitrox: not in enabled drivers build config
00:01:31.307 common/qat: not in enabled drivers build config
00:01:31.307 common/sfc_efx: not in enabled drivers build config
00:01:31.307 mempool/bucket: not in enabled drivers build config
00:01:31.307 mempool/cnxk: not in enabled drivers build config
00:01:31.307 mempool/dpaa: not in enabled drivers build config
00:01:31.307 mempool/dpaa2: not in enabled drivers build config
00:01:31.307 mempool/octeontx: not in enabled drivers build config
00:01:31.307 mempool/stack: not in enabled drivers build config
00:01:31.307 dma/cnxk: not in enabled drivers build config
00:01:31.307 dma/dpaa: not in enabled drivers build config
00:01:31.307 dma/dpaa2: not in enabled drivers build config
00:01:31.307 dma/hisilicon: not in enabled drivers build config
00:01:31.307 dma/idxd: not in enabled drivers build config
00:01:31.307 dma/ioat: not in enabled drivers build config
00:01:31.307 dma/skeleton: not in enabled drivers build config
00:01:31.307 net/af_packet: not in enabled drivers build config
00:01:31.307 net/af_xdp: not in enabled drivers build config
00:01:31.307 net/ark: not in enabled drivers build config
00:01:31.307 net/atlantic: not in enabled drivers build config
00:01:31.307 net/avp: not in enabled drivers build config
00:01:31.307 net/axgbe: not in enabled drivers build config
00:01:31.307 net/bnx2x: not in enabled drivers build config
00:01:31.307 net/bnxt: not in enabled drivers build config
00:01:31.307 net/bonding: not in enabled drivers build config
00:01:31.307 net/cnxk: not in enabled drivers build config
00:01:31.307 net/cpfl: not in enabled drivers build config
00:01:31.307 net/cxgbe: not in enabled drivers build config
00:01:31.307 net/dpaa: not in enabled drivers build config
00:01:31.307 net/dpaa2: not in enabled drivers build config
00:01:31.307 net/e1000: not in enabled drivers build config
00:01:31.307 net/ena: not in enabled drivers build config
00:01:31.307 net/enetc: not in enabled drivers build config
00:01:31.307 net/enetfec: not in enabled drivers build config
00:01:31.307 net/enic: not in enabled drivers build config
00:01:31.307 net/failsafe: not in enabled drivers build config
00:01:31.307 net/fm10k: not in enabled drivers build config
00:01:31.307 net/gve: not in enabled drivers build config
00:01:31.307 net/hinic: not in enabled drivers build config
00:01:31.307 net/hns3: not in enabled drivers build config
00:01:31.307 net/i40e: not in enabled drivers build config
00:01:31.307 net/iavf: not in enabled drivers build config
00:01:31.307 net/ice: not in enabled drivers build config
00:01:31.307 net/idpf: not in enabled drivers build config
00:01:31.307 net/igc: not in enabled drivers build config
00:01:31.307 net/ionic: not in enabled drivers build config
00:01:31.307 net/ipn3ke: not in enabled drivers build config
00:01:31.307 net/ixgbe: not in enabled drivers build config
00:01:31.307 net/mana: not in enabled drivers build config
00:01:31.307 net/memif: not in enabled drivers build config
00:01:31.307 net/mlx4: not in enabled drivers build config
00:01:31.307 net/mlx5: not in enabled drivers build config
00:01:31.307 net/mvneta: not in enabled drivers build config
00:01:31.307 net/mvpp2: not in enabled drivers build config
00:01:31.307 net/netvsc: not in enabled drivers build config
00:01:31.307 net/nfb: not in enabled drivers build config
00:01:31.307 net/nfp: not in enabled drivers build config
00:01:31.307 net/ngbe: not in enabled drivers build config
00:01:31.307 net/null: not in enabled drivers build config
00:01:31.307 net/octeontx: not in enabled drivers build config
00:01:31.307 net/octeon_ep: not in enabled drivers build config
00:01:31.307 net/pcap: not in enabled drivers build config
00:01:31.307 net/pfe: not in enabled drivers build config
00:01:31.307 net/qede: not in enabled drivers build config
00:01:31.307 net/ring: not in enabled drivers build config
00:01:31.307 net/sfc: not in enabled drivers build config
00:01:31.307 net/softnic: not in enabled drivers build config
00:01:31.307 net/tap: not in enabled drivers build config
00:01:31.307 net/thunderx: not in enabled drivers build config
00:01:31.307 net/txgbe: not in enabled drivers build config
00:01:31.307 net/vdev_netvsc: not in enabled drivers build config
00:01:31.307 net/vhost: not in enabled drivers build config
00:01:31.307 net/virtio: not in enabled drivers build config
00:01:31.307 net/vmxnet3: not in enabled drivers build config
00:01:31.307 raw/*: missing internal dependency, "rawdev"
00:01:31.307 crypto/armv8: not in enabled drivers build config
00:01:31.307 crypto/bcmfs: not in enabled drivers build config
00:01:31.307 crypto/caam_jr: not in enabled drivers build config
00:01:31.307 crypto/ccp: not in enabled drivers build config
00:01:31.307 crypto/cnxk: not in enabled drivers build config
00:01:31.307 crypto/dpaa_sec: not in enabled drivers build config
00:01:31.307 crypto/dpaa2_sec: not in enabled drivers build config
00:01:31.307 crypto/ipsec_mb: not in enabled drivers build config
00:01:31.307 crypto/mlx5: not in enabled drivers build config
00:01:31.307 crypto/mvsam: not in enabled drivers build config
00:01:31.307 crypto/nitrox: not in enabled drivers build config
00:01:31.307 crypto/null: not in enabled drivers build config
00:01:31.307 crypto/octeontx: not in enabled drivers build config
00:01:31.307 crypto/openssl: not in enabled drivers build config
00:01:31.307 crypto/scheduler: not in enabled drivers build config
00:01:31.307 crypto/uadk: not in enabled drivers build config
00:01:31.307 crypto/virtio: not in enabled drivers build config
00:01:31.307 compress/isal: not in enabled drivers build config
00:01:31.307 compress/mlx5: not in enabled drivers build config
00:01:31.307 compress/nitrox: not in enabled drivers build config
00:01:31.307 compress/octeontx: not in enabled drivers build config
00:01:31.307 compress/zlib: not in enabled drivers build config
00:01:31.307 regex/*: missing internal dependency, "regexdev"
00:01:31.307 ml/*: missing internal dependency, "mldev"
00:01:31.307 vdpa/ifc: not in enabled drivers build config
00:01:31.307 vdpa/mlx5: not in enabled drivers build config
00:01:31.307 vdpa/nfp: not in enabled drivers build config
00:01:31.307 vdpa/sfc: not in enabled drivers build config
00:01:31.307 event/*: missing internal dependency, "eventdev"
00:01:31.307 baseband/*: missing internal dependency, "bbdev"
00:01:31.307 gpu/*: missing internal dependency, "gpudev"
00:01:31.307
00:01:31.307
00:01:31.307 Build targets in project: 85
00:01:31.307
00:01:31.307 DPDK 24.03.0
00:01:31.307
00:01:31.307 User defined options
00:01:31.307 buildtype : debug
00:01:31.308 default_library : shared
00:01:31.308 libdir : lib
00:01:31.308 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build
00:01:31.308 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror
00:01:31.308 c_link_args :
00:01:31.308 cpu_instruction_set: native
00:01:31.308 disable_apps : test-dma-perf,test,test-sad,test-acl,test-pmd,test-mldev,test-compress-perf,test-cmdline,test-regex,test-fib,graph,test-bbdev,dumpcap,test-gpudev,proc-info,test-pipeline,test-flow-perf,test-crypto-perf,pdump,test-eventdev,test-security-perf
00:01:31.308 disable_libs : port,lpm,ipsec,regexdev,dispatcher,argparse,bitratestats,rawdev,stack,graph,acl,bbdev,pipeline,member,sched,pcapng,mldev,eventdev,efd,metrics,latencystats,cfgfile,ip_frag,jobstats,pdump,pdcp,rib,node,fib,distributor,gso,table,bpf,gpudev,gro
00:01:31.308 enable_docs : false
00:01:31.308 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring
00:01:31.308 enable_kmods : false
00:01:31.308 max_lcores : 128
00:01:31.308 tests : false
00:01:31.308
00:01:31.308 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:01:31.580 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp'
00:01:31.580 [1/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o
[2/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o
[3/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o
[4/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o
[5/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o
[6/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o
[7/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o
00:01:31.839 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o
[9/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o
[10/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o
[11/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o
[12/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o
[13/268] Linking static target lib/librte_kvargs.a
[14/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o
[15/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o
[16/268] Compiling C object lib/librte_log.a.p/log_log.c.o
[17/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o
[18/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o
[19/268] Linking static target lib/librte_log.a
[20/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o
[21/268] Linking static target lib/librte_pci.a
[22/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o
[23/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o
[24/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o
00:01:32.103 [25/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o
[26/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o
[27/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o
[28/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o
[29/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o
[30/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o
[31/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o
00:01:32.103 [32/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o
[33/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o
[34/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o
[35/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o
[36/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o
[37/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o
[38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o
[39/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o
[40/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o
[41/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o
[42/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o
[43/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o
[44/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o
[45/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o
[46/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o
[47/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o
[48/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o
[49/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o
[50/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o
[51/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o
[52/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o
[53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o
[54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o
[55/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o
[56/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o
[57/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o
[58/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output)
[59/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o
[60/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o
[61/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o
[62/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o
[63/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o
[64/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o
[65/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o
[66/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o
[67/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o
[68/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o
[69/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o
[70/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o
[71/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output)
[72/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o
[73/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o
[74/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o
[75/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o
[76/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o
[77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o
[78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o
[79/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o
[80/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o
[81/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o
[82/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o
[83/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o
[84/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o
[85/268] Linking static target lib/librte_meter.a
[86/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o
[87/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o
[88/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o
[89/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o
[90/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o
[91/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o
[92/268] Linking static target lib/net/libnet_crc_avx512_lib.a
[93/268] Linking static target lib/librte_telemetry.a
[94/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o
[95/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o
[96/268] Linking static target lib/librte_ring.a
[97/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o
[98/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o
[99/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o
[100/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o
[101/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o
[102/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o
[103/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o
[104/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o
[105/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o
[106/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o
[107/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o
[108/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o
[109/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o
[110/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o
[111/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o
00:01:32.362 [112/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o
[113/268] Linking static target lib/librte_net.a
[114/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o
[115/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o
[116/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o
[117/268] Linking static target lib/librte_rcu.a
[118/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o
[119/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o
[120/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o
[121/268] Linking static target lib/librte_mempool.a
[122/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o
[123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o
[124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o
[125/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o
[126/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o
[127/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o
[128/268] Linking static target lib/librte_eal.a
[129/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output)
[130/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o
[131/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o
[132/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o
[133/268] Linking static target lib/librte_cmdline.a
[134/268] Linking target lib/librte_log.so.24.1
[135/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o
[136/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o
[137/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output)
[138/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output)
[139/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o
[140/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o
[141/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o
[142/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o
[143/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols
[144/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output)
[145/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output)
[146/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o
[147/268] Linking target lib/librte_kvargs.so.24.1
[148/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o
[149/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o
[150/268] Linking static target lib/librte_mbuf.a
[151/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o
00:01:32.621 [152/268] Compiling C object
lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:01:32.621 [153/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:01:32.621 [154/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:01:32.621 [155/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:01:32.621 [156/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:01:32.621 [157/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:01:32.621 [158/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:01:32.621 [159/268] Linking static target lib/librte_timer.a 00:01:32.621 [160/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:01:32.621 [161/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:01:32.621 [162/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:01:32.621 [163/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:01:32.621 [164/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:01:32.621 [165/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:01:32.621 [166/268] Linking target lib/librte_telemetry.so.24.1 00:01:32.621 [167/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:01:32.621 [168/268] Linking static target lib/librte_reorder.a 00:01:32.621 [169/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:01:32.621 [170/268] Linking static target lib/librte_dmadev.a 00:01:32.621 [171/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:01:32.621 [172/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:01:32.621 [173/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:01:32.621 [174/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:01:32.621 [175/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:01:32.880 [176/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:01:32.880 [177/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:01:32.880 [178/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:01:32.880 [179/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:01:32.880 [180/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:01:32.880 [181/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:01:32.880 [182/268] Linking static target lib/librte_compressdev.a 00:01:32.880 [183/268] Linking static target lib/librte_power.a 00:01:32.880 [184/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:01:32.880 [185/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:01:32.880 [186/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:01:32.880 [187/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:01:32.880 [188/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:01:32.880 [189/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:01:32.880 [190/268] Linking static target lib/librte_security.a 00:01:32.880 [191/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:01:32.880 [192/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 
00:01:32.880 [193/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:01:32.880 [194/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:01:32.880 [195/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:01:32.880 [196/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:32.880 [197/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:32.880 [198/268] Linking static target drivers/librte_bus_vdev.a 00:01:32.880 [199/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:01:32.880 [200/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:01:33.139 [201/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:01:33.139 [202/268] Linking static target lib/librte_hash.a 00:01:33.140 [203/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:01:33.140 [204/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:01:33.140 [205/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:33.140 [206/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:33.140 [207/268] Linking static target drivers/librte_bus_pci.a 00:01:33.140 [208/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:01:33.140 [209/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:01:33.140 [210/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:01:33.140 [211/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:33.140 [212/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:33.140 [213/268] Linking static target drivers/librte_mempool_ring.a 00:01:33.140 [214/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:01:33.140 [215/268] Linking static target lib/librte_cryptodev.a 00:01:33.398 [216/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:01:33.398 [217/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:33.398 [218/268] Linking static target lib/librte_ethdev.a 00:01:33.398 [219/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:33.398 [220/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:33.398 [221/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:01:33.398 [222/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:33.398 [223/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:33.657 [224/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:01:33.657 [225/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:01:33.916 [226/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:33.916 [227/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:01:34.851 [228/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:01:34.851 [229/268] Linking static 
target lib/librte_vhost.a 00:01:35.110 [230/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:36.486 [231/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:01:41.756 [232/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:42.015 [233/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:01:42.015 [234/268] Linking target lib/librte_eal.so.24.1 00:01:42.274 [235/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:01:42.274 [236/268] Linking target lib/librte_timer.so.24.1 00:01:42.274 [237/268] Linking target lib/librte_ring.so.24.1 00:01:42.274 [238/268] Linking target drivers/librte_bus_vdev.so.24.1 00:01:42.274 [239/268] Linking target lib/librte_meter.so.24.1 00:01:42.274 [240/268] Linking target lib/librte_pci.so.24.1 00:01:42.274 [241/268] Linking target lib/librte_dmadev.so.24.1 00:01:42.532 [242/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:01:42.532 [243/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:01:42.532 [244/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:01:42.532 [245/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:01:42.532 [246/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:01:42.532 [247/268] Linking target drivers/librte_bus_pci.so.24.1 00:01:42.532 [248/268] Linking target lib/librte_rcu.so.24.1 00:01:42.532 [249/268] Linking target lib/librte_mempool.so.24.1 00:01:42.532 [250/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:01:42.532 [251/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:01:42.532 [252/268] Linking target drivers/librte_mempool_ring.so.24.1 00:01:42.532 [253/268] Linking target lib/librte_mbuf.so.24.1 00:01:42.789 [254/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:01:42.789 [255/268] Linking target lib/librte_reorder.so.24.1 00:01:42.789 [256/268] Linking target lib/librte_compressdev.so.24.1 00:01:42.789 [257/268] Linking target lib/librte_cryptodev.so.24.1 00:01:42.789 [258/268] Linking target lib/librte_net.so.24.1 00:01:43.048 [259/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:01:43.048 [260/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:01:43.048 [261/268] Linking target lib/librte_security.so.24.1 00:01:43.048 [262/268] Linking target lib/librte_cmdline.so.24.1 00:01:43.048 [263/268] Linking target lib/librte_hash.so.24.1 00:01:43.048 [264/268] Linking target lib/librte_ethdev.so.24.1 00:01:43.048 [265/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:01:43.048 [266/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:01:43.306 [267/268] Linking target lib/librte_power.so.24.1 00:01:43.306 [268/268] Linking target lib/librte_vhost.so.24.1 00:01:43.306 INFO: autodetecting backend as ninja 00:01:43.306 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 96 00:01:44.240 CC lib/ut/ut.o 00:01:44.240 CC lib/log/log.o 00:01:44.240 CC lib/log/log_flags.o 00:01:44.240 CC lib/log/log_deprecated.o 
00:01:44.240 CC lib/ut_mock/mock.o 00:01:44.240 LIB libspdk_ut.a 00:01:44.240 LIB libspdk_ut_mock.a 00:01:44.240 LIB libspdk_log.a 00:01:44.240 SO libspdk_ut.so.2.0 00:01:44.240 SO libspdk_log.so.7.0 00:01:44.498 SO libspdk_ut_mock.so.6.0 00:01:44.498 SYMLINK libspdk_ut.so 00:01:44.498 SYMLINK libspdk_log.so 00:01:44.498 SYMLINK libspdk_ut_mock.so 00:01:44.755 CC lib/dma/dma.o 00:01:44.755 CC lib/ioat/ioat.o 00:01:44.755 CXX lib/trace_parser/trace.o 00:01:44.755 CC lib/util/base64.o 00:01:44.755 CC lib/util/bit_array.o 00:01:44.755 CC lib/util/crc16.o 00:01:44.755 CC lib/util/crc32.o 00:01:44.755 CC lib/util/cpuset.o 00:01:44.755 CC lib/util/crc32_ieee.o 00:01:44.755 CC lib/util/crc64.o 00:01:44.755 CC lib/util/crc32c.o 00:01:44.755 CC lib/util/dif.o 00:01:44.755 CC lib/util/fd.o 00:01:44.755 CC lib/util/file.o 00:01:44.755 CC lib/util/hexlify.o 00:01:44.755 CC lib/util/iov.o 00:01:44.755 CC lib/util/math.o 00:01:44.755 CC lib/util/pipe.o 00:01:44.755 CC lib/util/strerror_tls.o 00:01:44.755 CC lib/util/string.o 00:01:44.755 CC lib/util/uuid.o 00:01:44.755 CC lib/util/fd_group.o 00:01:44.755 CC lib/util/xor.o 00:01:44.755 CC lib/util/zipf.o 00:01:44.755 CC lib/vfio_user/host/vfio_user_pci.o 00:01:44.755 CC lib/vfio_user/host/vfio_user.o 00:01:45.013 LIB libspdk_dma.a 00:01:45.013 SO libspdk_dma.so.4.0 00:01:45.013 LIB libspdk_ioat.a 00:01:45.013 SO libspdk_ioat.so.7.0 00:01:45.013 SYMLINK libspdk_dma.so 00:01:45.013 SYMLINK libspdk_ioat.so 00:01:45.013 LIB libspdk_vfio_user.a 00:01:45.013 SO libspdk_vfio_user.so.5.0 00:01:45.269 LIB libspdk_util.a 00:01:45.269 SYMLINK libspdk_vfio_user.so 00:01:45.269 SO libspdk_util.so.9.1 00:01:45.269 SYMLINK libspdk_util.so 00:01:45.527 LIB libspdk_trace_parser.a 00:01:45.527 SO libspdk_trace_parser.so.5.0 00:01:45.527 SYMLINK libspdk_trace_parser.so 00:01:45.527 CC lib/rdma_provider/common.o 00:01:45.527 CC lib/rdma_provider/rdma_provider_verbs.o 00:01:45.527 CC lib/env_dpdk/env.o 00:01:45.527 CC lib/conf/conf.o 00:01:45.527 CC lib/env_dpdk/memory.o 00:01:45.527 CC lib/idxd/idxd.o 00:01:45.527 CC lib/env_dpdk/pci.o 00:01:45.527 CC lib/idxd/idxd_user.o 00:01:45.527 CC lib/idxd/idxd_kernel.o 00:01:45.527 CC lib/env_dpdk/init.o 00:01:45.527 CC lib/rdma_utils/rdma_utils.o 00:01:45.527 CC lib/env_dpdk/threads.o 00:01:45.527 CC lib/env_dpdk/pci_ioat.o 00:01:45.527 CC lib/env_dpdk/pci_virtio.o 00:01:45.527 CC lib/env_dpdk/pci_vmd.o 00:01:45.527 CC lib/env_dpdk/pci_idxd.o 00:01:45.527 CC lib/env_dpdk/pci_event.o 00:01:45.527 CC lib/env_dpdk/sigbus_handler.o 00:01:45.527 CC lib/env_dpdk/pci_dpdk.o 00:01:45.527 CC lib/env_dpdk/pci_dpdk_2207.o 00:01:45.527 CC lib/env_dpdk/pci_dpdk_2211.o 00:01:45.527 CC lib/json/json_parse.o 00:01:45.527 CC lib/json/json_util.o 00:01:45.527 CC lib/vmd/vmd.o 00:01:45.786 CC lib/json/json_write.o 00:01:45.786 CC lib/vmd/led.o 00:01:45.786 LIB libspdk_rdma_provider.a 00:01:45.786 LIB libspdk_conf.a 00:01:45.786 SO libspdk_rdma_provider.so.6.0 00:01:45.786 SO libspdk_conf.so.6.0 00:01:46.045 LIB libspdk_rdma_utils.a 00:01:46.045 LIB libspdk_json.a 00:01:46.045 SYMLINK libspdk_rdma_provider.so 00:01:46.045 SYMLINK libspdk_conf.so 00:01:46.045 SO libspdk_rdma_utils.so.1.0 00:01:46.045 SO libspdk_json.so.6.0 00:01:46.045 SYMLINK libspdk_rdma_utils.so 00:01:46.045 SYMLINK libspdk_json.so 00:01:46.045 LIB libspdk_idxd.a 00:01:46.045 SO libspdk_idxd.so.12.0 00:01:46.304 LIB libspdk_vmd.a 00:01:46.304 SYMLINK libspdk_idxd.so 00:01:46.304 SO libspdk_vmd.so.6.0 00:01:46.304 SYMLINK libspdk_vmd.so 00:01:46.304 CC 
lib/jsonrpc/jsonrpc_server_tcp.o 00:01:46.304 CC lib/jsonrpc/jsonrpc_server.o 00:01:46.304 CC lib/jsonrpc/jsonrpc_client.o 00:01:46.304 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:01:46.564 LIB libspdk_jsonrpc.a 00:01:46.564 SO libspdk_jsonrpc.so.6.0 00:01:46.564 SYMLINK libspdk_jsonrpc.so 00:01:46.564 LIB libspdk_env_dpdk.a 00:01:46.824 SO libspdk_env_dpdk.so.14.1 00:01:46.824 SYMLINK libspdk_env_dpdk.so 00:01:46.824 CC lib/rpc/rpc.o 00:01:47.083 LIB libspdk_rpc.a 00:01:47.083 SO libspdk_rpc.so.6.0 00:01:47.342 SYMLINK libspdk_rpc.so 00:01:47.644 CC lib/trace/trace.o 00:01:47.644 CC lib/trace/trace_rpc.o 00:01:47.644 CC lib/trace/trace_flags.o 00:01:47.644 CC lib/notify/notify_rpc.o 00:01:47.644 CC lib/notify/notify.o 00:01:47.644 CC lib/keyring/keyring.o 00:01:47.644 CC lib/keyring/keyring_rpc.o 00:01:47.644 LIB libspdk_notify.a 00:01:47.644 SO libspdk_notify.so.6.0 00:01:47.644 LIB libspdk_keyring.a 00:01:47.644 LIB libspdk_trace.a 00:01:47.931 SO libspdk_keyring.so.1.0 00:01:47.931 SO libspdk_trace.so.10.0 00:01:47.931 SYMLINK libspdk_notify.so 00:01:47.931 SYMLINK libspdk_trace.so 00:01:47.931 SYMLINK libspdk_keyring.so 00:01:48.190 CC lib/thread/thread.o 00:01:48.190 CC lib/thread/iobuf.o 00:01:48.190 CC lib/sock/sock.o 00:01:48.190 CC lib/sock/sock_rpc.o 00:01:48.449 LIB libspdk_sock.a 00:01:48.449 SO libspdk_sock.so.10.0 00:01:48.449 SYMLINK libspdk_sock.so 00:01:48.708 CC lib/nvme/nvme_ctrlr_cmd.o 00:01:48.708 CC lib/nvme/nvme_ctrlr.o 00:01:48.708 CC lib/nvme/nvme_fabric.o 00:01:48.708 CC lib/nvme/nvme_ns_cmd.o 00:01:48.708 CC lib/nvme/nvme_ns.o 00:01:48.708 CC lib/nvme/nvme_pcie_common.o 00:01:48.708 CC lib/nvme/nvme_pcie.o 00:01:48.708 CC lib/nvme/nvme_qpair.o 00:01:48.708 CC lib/nvme/nvme.o 00:01:48.708 CC lib/nvme/nvme_quirks.o 00:01:48.708 CC lib/nvme/nvme_transport.o 00:01:48.708 CC lib/nvme/nvme_discovery.o 00:01:48.708 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:01:48.708 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:01:48.708 CC lib/nvme/nvme_tcp.o 00:01:48.708 CC lib/nvme/nvme_opal.o 00:01:48.708 CC lib/nvme/nvme_io_msg.o 00:01:48.708 CC lib/nvme/nvme_poll_group.o 00:01:48.708 CC lib/nvme/nvme_zns.o 00:01:48.708 CC lib/nvme/nvme_stubs.o 00:01:48.708 CC lib/nvme/nvme_auth.o 00:01:48.708 CC lib/nvme/nvme_cuse.o 00:01:48.708 CC lib/nvme/nvme_rdma.o 00:01:48.708 CC lib/nvme/nvme_vfio_user.o 00:01:48.968 LIB libspdk_thread.a 00:01:49.227 SO libspdk_thread.so.10.1 00:01:49.227 SYMLINK libspdk_thread.so 00:01:49.493 CC lib/vfu_tgt/tgt_endpoint.o 00:01:49.493 CC lib/vfu_tgt/tgt_rpc.o 00:01:49.493 CC lib/accel/accel.o 00:01:49.493 CC lib/accel/accel_rpc.o 00:01:49.493 CC lib/accel/accel_sw.o 00:01:49.493 CC lib/virtio/virtio_vhost_user.o 00:01:49.493 CC lib/virtio/virtio.o 00:01:49.493 CC lib/virtio/virtio_pci.o 00:01:49.493 CC lib/virtio/virtio_vfio_user.o 00:01:49.493 CC lib/blob/blobstore.o 00:01:49.493 CC lib/blob/request.o 00:01:49.493 CC lib/blob/zeroes.o 00:01:49.493 CC lib/blob/blob_bs_dev.o 00:01:49.493 CC lib/init/json_config.o 00:01:49.493 CC lib/init/subsystem.o 00:01:49.493 CC lib/init/rpc.o 00:01:49.493 CC lib/init/subsystem_rpc.o 00:01:49.758 LIB libspdk_init.a 00:01:49.758 LIB libspdk_vfu_tgt.a 00:01:49.758 SO libspdk_init.so.5.0 00:01:49.758 LIB libspdk_virtio.a 00:01:49.758 SO libspdk_vfu_tgt.so.3.0 00:01:49.758 SO libspdk_virtio.so.7.0 00:01:49.758 SYMLINK libspdk_init.so 00:01:49.758 SYMLINK libspdk_vfu_tgt.so 00:01:49.758 SYMLINK libspdk_virtio.so 00:01:50.017 CC lib/event/app.o 00:01:50.017 CC lib/event/reactor.o 00:01:50.017 CC lib/event/log_rpc.o 00:01:50.017 CC 
lib/event/app_rpc.o 00:01:50.017 CC lib/event/scheduler_static.o 00:01:50.276 LIB libspdk_accel.a 00:01:50.276 SO libspdk_accel.so.15.1 00:01:50.276 SYMLINK libspdk_accel.so 00:01:50.276 LIB libspdk_nvme.a 00:01:50.535 LIB libspdk_event.a 00:01:50.535 SO libspdk_nvme.so.13.1 00:01:50.535 SO libspdk_event.so.14.0 00:01:50.535 SYMLINK libspdk_event.so 00:01:50.535 CC lib/bdev/bdev_rpc.o 00:01:50.535 CC lib/bdev/bdev.o 00:01:50.535 CC lib/bdev/bdev_zone.o 00:01:50.535 CC lib/bdev/part.o 00:01:50.535 CC lib/bdev/scsi_nvme.o 00:01:50.794 SYMLINK libspdk_nvme.so 00:01:51.730 LIB libspdk_blob.a 00:01:51.730 SO libspdk_blob.so.11.0 00:01:51.730 SYMLINK libspdk_blob.so 00:01:51.987 CC lib/blobfs/blobfs.o 00:01:51.987 CC lib/blobfs/tree.o 00:01:51.987 CC lib/lvol/lvol.o 00:01:52.245 LIB libspdk_bdev.a 00:01:52.503 SO libspdk_bdev.so.15.1 00:01:52.503 SYMLINK libspdk_bdev.so 00:01:52.503 LIB libspdk_blobfs.a 00:01:52.503 SO libspdk_blobfs.so.10.0 00:01:52.762 LIB libspdk_lvol.a 00:01:52.762 SYMLINK libspdk_blobfs.so 00:01:52.762 SO libspdk_lvol.so.10.0 00:01:52.762 CC lib/ublk/ublk.o 00:01:52.762 CC lib/ublk/ublk_rpc.o 00:01:52.762 CC lib/scsi/lun.o 00:01:52.762 CC lib/scsi/dev.o 00:01:52.762 CC lib/scsi/port.o 00:01:52.762 CC lib/scsi/scsi.o 00:01:52.762 CC lib/scsi/scsi_bdev.o 00:01:52.762 CC lib/scsi/scsi_pr.o 00:01:52.762 CC lib/scsi/scsi_rpc.o 00:01:52.762 CC lib/scsi/task.o 00:01:52.762 CC lib/nbd/nbd_rpc.o 00:01:52.762 CC lib/nbd/nbd.o 00:01:52.762 SYMLINK libspdk_lvol.so 00:01:52.762 CC lib/nvmf/ctrlr.o 00:01:52.762 CC lib/nvmf/ctrlr_discovery.o 00:01:52.762 CC lib/nvmf/ctrlr_bdev.o 00:01:52.762 CC lib/nvmf/subsystem.o 00:01:52.762 CC lib/nvmf/nvmf.o 00:01:52.762 CC lib/nvmf/transport.o 00:01:52.762 CC lib/nvmf/nvmf_rpc.o 00:01:52.762 CC lib/nvmf/tcp.o 00:01:52.762 CC lib/nvmf/stubs.o 00:01:52.762 CC lib/nvmf/mdns_server.o 00:01:52.762 CC lib/nvmf/vfio_user.o 00:01:52.762 CC lib/nvmf/rdma.o 00:01:52.762 CC lib/nvmf/auth.o 00:01:52.762 CC lib/ftl/ftl_core.o 00:01:52.762 CC lib/ftl/ftl_init.o 00:01:52.762 CC lib/ftl/ftl_layout.o 00:01:52.762 CC lib/ftl/ftl_debug.o 00:01:52.762 CC lib/ftl/ftl_io.o 00:01:52.762 CC lib/ftl/ftl_sb.o 00:01:52.762 CC lib/ftl/ftl_l2p.o 00:01:52.762 CC lib/ftl/ftl_l2p_flat.o 00:01:52.762 CC lib/ftl/ftl_nv_cache.o 00:01:52.762 CC lib/ftl/ftl_band.o 00:01:52.762 CC lib/ftl/ftl_band_ops.o 00:01:52.762 CC lib/ftl/ftl_writer.o 00:01:52.762 CC lib/ftl/ftl_rq.o 00:01:52.762 CC lib/ftl/ftl_reloc.o 00:01:52.762 CC lib/ftl/ftl_l2p_cache.o 00:01:52.762 CC lib/ftl/ftl_p2l.o 00:01:52.762 CC lib/ftl/mngt/ftl_mngt.o 00:01:52.762 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:01:52.762 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:01:52.762 CC lib/ftl/mngt/ftl_mngt_startup.o 00:01:52.762 CC lib/ftl/mngt/ftl_mngt_md.o 00:01:52.762 CC lib/ftl/mngt/ftl_mngt_misc.o 00:01:52.762 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:01:52.762 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:01:52.762 CC lib/ftl/mngt/ftl_mngt_band.o 00:01:52.762 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:01:52.762 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:01:52.762 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:01:52.762 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:01:52.762 CC lib/ftl/utils/ftl_md.o 00:01:52.762 CC lib/ftl/utils/ftl_conf.o 00:01:52.762 CC lib/ftl/utils/ftl_bitmap.o 00:01:52.762 CC lib/ftl/utils/ftl_property.o 00:01:52.762 CC lib/ftl/utils/ftl_mempool.o 00:01:52.762 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:01:52.762 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:01:52.762 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:01:52.762 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:01:52.762 
CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:01:52.762 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:01:52.762 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:01:52.762 CC lib/ftl/upgrade/ftl_sb_v3.o 00:01:52.762 CC lib/ftl/upgrade/ftl_sb_v5.o 00:01:52.762 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:01:52.762 CC lib/ftl/base/ftl_base_dev.o 00:01:52.762 CC lib/ftl/base/ftl_base_bdev.o 00:01:52.762 CC lib/ftl/nvc/ftl_nvc_dev.o 00:01:52.763 CC lib/ftl/ftl_trace.o 00:01:53.329 LIB libspdk_scsi.a 00:01:53.329 SO libspdk_scsi.so.9.0 00:01:53.329 LIB libspdk_nbd.a 00:01:53.329 SO libspdk_nbd.so.7.0 00:01:53.329 SYMLINK libspdk_scsi.so 00:01:53.329 SYMLINK libspdk_nbd.so 00:01:53.587 LIB libspdk_ublk.a 00:01:53.587 SO libspdk_ublk.so.3.0 00:01:53.587 SYMLINK libspdk_ublk.so 00:01:53.587 LIB libspdk_ftl.a 00:01:53.587 CC lib/iscsi/conn.o 00:01:53.587 CC lib/iscsi/init_grp.o 00:01:53.587 CC lib/iscsi/iscsi.o 00:01:53.846 CC lib/iscsi/md5.o 00:01:53.846 CC lib/iscsi/param.o 00:01:53.846 CC lib/iscsi/portal_grp.o 00:01:53.846 CC lib/iscsi/tgt_node.o 00:01:53.846 CC lib/iscsi/iscsi_subsystem.o 00:01:53.846 CC lib/iscsi/task.o 00:01:53.846 CC lib/iscsi/iscsi_rpc.o 00:01:53.846 CC lib/vhost/vhost.o 00:01:53.846 CC lib/vhost/vhost_scsi.o 00:01:53.846 CC lib/vhost/vhost_rpc.o 00:01:53.846 CC lib/vhost/vhost_blk.o 00:01:53.846 CC lib/vhost/rte_vhost_user.o 00:01:53.846 SO libspdk_ftl.so.9.0 00:01:54.105 SYMLINK libspdk_ftl.so 00:01:54.671 LIB libspdk_nvmf.a 00:01:54.671 LIB libspdk_vhost.a 00:01:54.671 SO libspdk_vhost.so.8.0 00:01:54.671 SO libspdk_nvmf.so.19.0 00:01:54.671 SYMLINK libspdk_vhost.so 00:01:54.671 LIB libspdk_iscsi.a 00:01:54.671 SO libspdk_iscsi.so.8.0 00:01:54.671 SYMLINK libspdk_nvmf.so 00:01:54.931 SYMLINK libspdk_iscsi.so 00:01:55.497 CC module/vfu_device/vfu_virtio_blk.o 00:01:55.497 CC module/vfu_device/vfu_virtio_rpc.o 00:01:55.497 CC module/vfu_device/vfu_virtio.o 00:01:55.497 CC module/vfu_device/vfu_virtio_scsi.o 00:01:55.497 CC module/env_dpdk/env_dpdk_rpc.o 00:01:55.497 CC module/keyring/linux/keyring_rpc.o 00:01:55.497 CC module/keyring/linux/keyring.o 00:01:55.497 CC module/accel/iaa/accel_iaa.o 00:01:55.497 CC module/accel/iaa/accel_iaa_rpc.o 00:01:55.497 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:01:55.497 LIB libspdk_env_dpdk_rpc.a 00:01:55.497 CC module/scheduler/dynamic/scheduler_dynamic.o 00:01:55.497 CC module/scheduler/gscheduler/gscheduler.o 00:01:55.497 CC module/sock/posix/posix.o 00:01:55.497 CC module/keyring/file/keyring.o 00:01:55.497 CC module/keyring/file/keyring_rpc.o 00:01:55.497 CC module/accel/dsa/accel_dsa_rpc.o 00:01:55.497 CC module/accel/dsa/accel_dsa.o 00:01:55.497 CC module/accel/error/accel_error.o 00:01:55.497 CC module/accel/error/accel_error_rpc.o 00:01:55.497 CC module/blob/bdev/blob_bdev.o 00:01:55.497 CC module/accel/ioat/accel_ioat_rpc.o 00:01:55.497 CC module/accel/ioat/accel_ioat.o 00:01:55.497 SO libspdk_env_dpdk_rpc.so.6.0 00:01:55.497 SYMLINK libspdk_env_dpdk_rpc.so 00:01:55.497 LIB libspdk_keyring_linux.a 00:01:55.755 LIB libspdk_scheduler_gscheduler.a 00:01:55.755 LIB libspdk_keyring_file.a 00:01:55.755 SO libspdk_keyring_linux.so.1.0 00:01:55.756 LIB libspdk_scheduler_dpdk_governor.a 00:01:55.756 LIB libspdk_accel_error.a 00:01:55.756 LIB libspdk_accel_iaa.a 00:01:55.756 LIB libspdk_scheduler_dynamic.a 00:01:55.756 SO libspdk_keyring_file.so.1.0 00:01:55.756 SO libspdk_scheduler_gscheduler.so.4.0 00:01:55.756 SO libspdk_scheduler_dpdk_governor.so.4.0 00:01:55.756 SO libspdk_accel_error.so.2.0 00:01:55.756 SO libspdk_accel_iaa.so.3.0 00:01:55.756 
LIB libspdk_accel_ioat.a 00:01:55.756 SO libspdk_scheduler_dynamic.so.4.0 00:01:55.756 SYMLINK libspdk_keyring_linux.so 00:01:55.756 LIB libspdk_accel_dsa.a 00:01:55.756 SYMLINK libspdk_keyring_file.so 00:01:55.756 SO libspdk_accel_ioat.so.6.0 00:01:55.756 SYMLINK libspdk_scheduler_gscheduler.so 00:01:55.756 LIB libspdk_blob_bdev.a 00:01:55.756 SO libspdk_accel_dsa.so.5.0 00:01:55.756 SYMLINK libspdk_scheduler_dpdk_governor.so 00:01:55.756 SYMLINK libspdk_scheduler_dynamic.so 00:01:55.756 SYMLINK libspdk_accel_error.so 00:01:55.756 SYMLINK libspdk_accel_iaa.so 00:01:55.756 SO libspdk_blob_bdev.so.11.0 00:01:55.756 SYMLINK libspdk_accel_ioat.so 00:01:55.756 SYMLINK libspdk_accel_dsa.so 00:01:55.756 SYMLINK libspdk_blob_bdev.so 00:01:55.756 LIB libspdk_vfu_device.a 00:01:55.756 SO libspdk_vfu_device.so.3.0 00:01:56.014 SYMLINK libspdk_vfu_device.so 00:01:56.014 LIB libspdk_sock_posix.a 00:01:56.014 SO libspdk_sock_posix.so.6.0 00:01:56.274 SYMLINK libspdk_sock_posix.so 00:01:56.274 CC module/bdev/split/vbdev_split.o 00:01:56.274 CC module/bdev/iscsi/bdev_iscsi.o 00:01:56.274 CC module/bdev/passthru/vbdev_passthru.o 00:01:56.274 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:01:56.274 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:01:56.274 CC module/bdev/split/vbdev_split_rpc.o 00:01:56.274 CC module/bdev/gpt/gpt.o 00:01:56.274 CC module/bdev/gpt/vbdev_gpt.o 00:01:56.274 CC module/bdev/error/vbdev_error.o 00:01:56.274 CC module/bdev/error/vbdev_error_rpc.o 00:01:56.274 CC module/bdev/delay/vbdev_delay_rpc.o 00:01:56.274 CC module/bdev/delay/vbdev_delay.o 00:01:56.274 CC module/bdev/lvol/vbdev_lvol.o 00:01:56.274 CC module/bdev/null/bdev_null_rpc.o 00:01:56.274 CC module/bdev/null/bdev_null.o 00:01:56.274 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:01:56.274 CC module/bdev/raid/bdev_raid.o 00:01:56.274 CC module/bdev/aio/bdev_aio_rpc.o 00:01:56.274 CC module/bdev/aio/bdev_aio.o 00:01:56.274 CC module/bdev/raid/bdev_raid_sb.o 00:01:56.274 CC module/bdev/raid/bdev_raid_rpc.o 00:01:56.274 CC module/bdev/raid/raid1.o 00:01:56.274 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:01:56.274 CC module/bdev/raid/raid0.o 00:01:56.274 CC module/bdev/malloc/bdev_malloc.o 00:01:56.274 CC module/bdev/zone_block/vbdev_zone_block.o 00:01:56.274 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:01:56.274 CC module/blobfs/bdev/blobfs_bdev.o 00:01:56.274 CC module/bdev/raid/concat.o 00:01:56.274 CC module/bdev/malloc/bdev_malloc_rpc.o 00:01:56.274 CC module/bdev/nvme/bdev_nvme.o 00:01:56.274 CC module/bdev/nvme/bdev_nvme_rpc.o 00:01:56.274 CC module/bdev/nvme/nvme_rpc.o 00:01:56.274 CC module/bdev/nvme/bdev_mdns_client.o 00:01:56.274 CC module/bdev/ftl/bdev_ftl.o 00:01:56.274 CC module/bdev/nvme/vbdev_opal.o 00:01:56.274 CC module/bdev/nvme/vbdev_opal_rpc.o 00:01:56.274 CC module/bdev/ftl/bdev_ftl_rpc.o 00:01:56.274 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:01:56.274 CC module/bdev/virtio/bdev_virtio_scsi.o 00:01:56.274 CC module/bdev/virtio/bdev_virtio_blk.o 00:01:56.274 CC module/bdev/virtio/bdev_virtio_rpc.o 00:01:56.532 LIB libspdk_bdev_split.a 00:01:56.532 LIB libspdk_blobfs_bdev.a 00:01:56.532 LIB libspdk_bdev_null.a 00:01:56.532 LIB libspdk_bdev_gpt.a 00:01:56.532 SO libspdk_bdev_split.so.6.0 00:01:56.532 SO libspdk_blobfs_bdev.so.6.0 00:01:56.532 LIB libspdk_bdev_passthru.a 00:01:56.532 SO libspdk_bdev_null.so.6.0 00:01:56.532 LIB libspdk_bdev_error.a 00:01:56.532 SO libspdk_bdev_gpt.so.6.0 00:01:56.532 SO libspdk_bdev_passthru.so.6.0 00:01:56.532 LIB libspdk_bdev_zone_block.a 00:01:56.532 SYMLINK 
libspdk_bdev_split.so 00:01:56.532 LIB libspdk_bdev_ftl.a 00:01:56.532 SYMLINK libspdk_blobfs_bdev.so 00:01:56.532 SO libspdk_bdev_error.so.6.0 00:01:56.532 LIB libspdk_bdev_aio.a 00:01:56.532 LIB libspdk_bdev_iscsi.a 00:01:56.532 SYMLINK libspdk_bdev_null.so 00:01:56.532 SO libspdk_bdev_zone_block.so.6.0 00:01:56.532 SYMLINK libspdk_bdev_gpt.so 00:01:56.532 LIB libspdk_bdev_delay.a 00:01:56.532 SO libspdk_bdev_ftl.so.6.0 00:01:56.532 SO libspdk_bdev_aio.so.6.0 00:01:56.532 SYMLINK libspdk_bdev_passthru.so 00:01:56.791 LIB libspdk_bdev_malloc.a 00:01:56.791 SO libspdk_bdev_iscsi.so.6.0 00:01:56.791 SO libspdk_bdev_delay.so.6.0 00:01:56.791 SYMLINK libspdk_bdev_error.so 00:01:56.791 SYMLINK libspdk_bdev_zone_block.so 00:01:56.791 SYMLINK libspdk_bdev_aio.so 00:01:56.791 SO libspdk_bdev_malloc.so.6.0 00:01:56.791 SYMLINK libspdk_bdev_ftl.so 00:01:56.791 SYMLINK libspdk_bdev_iscsi.so 00:01:56.791 SYMLINK libspdk_bdev_delay.so 00:01:56.791 LIB libspdk_bdev_virtio.a 00:01:56.791 SYMLINK libspdk_bdev_malloc.so 00:01:56.791 LIB libspdk_bdev_lvol.a 00:01:56.791 SO libspdk_bdev_virtio.so.6.0 00:01:56.791 SO libspdk_bdev_lvol.so.6.0 00:01:56.791 SYMLINK libspdk_bdev_virtio.so 00:01:56.791 SYMLINK libspdk_bdev_lvol.so 00:01:57.051 LIB libspdk_bdev_raid.a 00:01:57.051 SO libspdk_bdev_raid.so.6.0 00:01:57.309 SYMLINK libspdk_bdev_raid.so 00:01:57.877 LIB libspdk_bdev_nvme.a 00:01:57.877 SO libspdk_bdev_nvme.so.7.0 00:01:58.136 SYMLINK libspdk_bdev_nvme.so 00:01:58.703 CC module/event/subsystems/sock/sock.o 00:01:58.703 CC module/event/subsystems/vmd/vmd.o 00:01:58.703 CC module/event/subsystems/vmd/vmd_rpc.o 00:01:58.704 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:01:58.704 CC module/event/subsystems/iobuf/iobuf.o 00:01:58.704 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:01:58.704 CC module/event/subsystems/keyring/keyring.o 00:01:58.704 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:01:58.704 CC module/event/subsystems/scheduler/scheduler.o 00:01:58.704 LIB libspdk_event_vhost_blk.a 00:01:58.704 LIB libspdk_event_sock.a 00:01:58.704 LIB libspdk_event_keyring.a 00:01:58.704 LIB libspdk_event_vmd.a 00:01:58.704 LIB libspdk_event_iobuf.a 00:01:58.704 LIB libspdk_event_scheduler.a 00:01:58.704 LIB libspdk_event_vfu_tgt.a 00:01:58.704 SO libspdk_event_vhost_blk.so.3.0 00:01:58.704 SO libspdk_event_sock.so.5.0 00:01:58.962 SO libspdk_event_keyring.so.1.0 00:01:58.962 SO libspdk_event_vmd.so.6.0 00:01:58.962 SO libspdk_event_iobuf.so.3.0 00:01:58.962 SO libspdk_event_scheduler.so.4.0 00:01:58.963 SO libspdk_event_vfu_tgt.so.3.0 00:01:58.963 SYMLINK libspdk_event_vhost_blk.so 00:01:58.963 SYMLINK libspdk_event_sock.so 00:01:58.963 SYMLINK libspdk_event_keyring.so 00:01:58.963 SYMLINK libspdk_event_vfu_tgt.so 00:01:58.963 SYMLINK libspdk_event_vmd.so 00:01:58.963 SYMLINK libspdk_event_iobuf.so 00:01:58.963 SYMLINK libspdk_event_scheduler.so 00:01:59.221 CC module/event/subsystems/accel/accel.o 00:01:59.221 LIB libspdk_event_accel.a 00:01:59.485 SO libspdk_event_accel.so.6.0 00:01:59.485 SYMLINK libspdk_event_accel.so 00:01:59.742 CC module/event/subsystems/bdev/bdev.o 00:01:59.742 LIB libspdk_event_bdev.a 00:01:59.998 SO libspdk_event_bdev.so.6.0 00:01:59.998 SYMLINK libspdk_event_bdev.so 00:02:00.255 CC module/event/subsystems/scsi/scsi.o 00:02:00.255 CC module/event/subsystems/nbd/nbd.o 00:02:00.255 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:02:00.255 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:00.255 CC module/event/subsystems/ublk/ublk.o 00:02:00.255 LIB libspdk_event_scsi.a 
00:02:00.255 LIB libspdk_event_nbd.a 00:02:00.513 SO libspdk_event_nbd.so.6.0 00:02:00.513 LIB libspdk_event_ublk.a 00:02:00.513 SO libspdk_event_scsi.so.6.0 00:02:00.513 SO libspdk_event_ublk.so.3.0 00:02:00.513 LIB libspdk_event_nvmf.a 00:02:00.513 SYMLINK libspdk_event_nbd.so 00:02:00.513 SYMLINK libspdk_event_scsi.so 00:02:00.513 SO libspdk_event_nvmf.so.6.0 00:02:00.513 SYMLINK libspdk_event_ublk.so 00:02:00.513 SYMLINK libspdk_event_nvmf.so 00:02:00.770 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:00.770 CC module/event/subsystems/iscsi/iscsi.o 00:02:00.770 LIB libspdk_event_vhost_scsi.a 00:02:01.027 SO libspdk_event_vhost_scsi.so.3.0 00:02:01.027 LIB libspdk_event_iscsi.a 00:02:01.027 SO libspdk_event_iscsi.so.6.0 00:02:01.027 SYMLINK libspdk_event_vhost_scsi.so 00:02:01.027 SYMLINK libspdk_event_iscsi.so 00:02:01.285 SO libspdk.so.6.0 00:02:01.285 SYMLINK libspdk.so 00:02:01.554 CC app/trace_record/trace_record.o 00:02:01.554 CC app/spdk_top/spdk_top.o 00:02:01.554 CC app/spdk_lspci/spdk_lspci.o 00:02:01.554 CXX app/trace/trace.o 00:02:01.554 TEST_HEADER include/spdk/accel.h 00:02:01.554 CC app/spdk_nvme_discover/discovery_aer.o 00:02:01.554 TEST_HEADER include/spdk/assert.h 00:02:01.554 TEST_HEADER include/spdk/accel_module.h 00:02:01.554 TEST_HEADER include/spdk/bdev.h 00:02:01.554 TEST_HEADER include/spdk/barrier.h 00:02:01.554 TEST_HEADER include/spdk/bdev_module.h 00:02:01.554 TEST_HEADER include/spdk/base64.h 00:02:01.554 TEST_HEADER include/spdk/bdev_zone.h 00:02:01.554 TEST_HEADER include/spdk/bit_array.h 00:02:01.554 CC app/spdk_nvme_identify/identify.o 00:02:01.554 TEST_HEADER include/spdk/bit_pool.h 00:02:01.554 TEST_HEADER include/spdk/blob_bdev.h 00:02:01.554 TEST_HEADER include/spdk/blobfs.h 00:02:01.554 TEST_HEADER include/spdk/blobfs_bdev.h 00:02:01.554 CC test/rpc_client/rpc_client_test.o 00:02:01.554 TEST_HEADER include/spdk/conf.h 00:02:01.554 TEST_HEADER include/spdk/blob.h 00:02:01.554 TEST_HEADER include/spdk/config.h 00:02:01.554 TEST_HEADER include/spdk/cpuset.h 00:02:01.554 CC app/spdk_nvme_perf/perf.o 00:02:01.554 TEST_HEADER include/spdk/crc32.h 00:02:01.554 TEST_HEADER include/spdk/crc16.h 00:02:01.554 TEST_HEADER include/spdk/crc64.h 00:02:01.554 TEST_HEADER include/spdk/dif.h 00:02:01.554 TEST_HEADER include/spdk/dma.h 00:02:01.554 TEST_HEADER include/spdk/endian.h 00:02:01.554 TEST_HEADER include/spdk/env.h 00:02:01.554 TEST_HEADER include/spdk/env_dpdk.h 00:02:01.554 TEST_HEADER include/spdk/event.h 00:02:01.554 TEST_HEADER include/spdk/fd_group.h 00:02:01.554 TEST_HEADER include/spdk/fd.h 00:02:01.554 TEST_HEADER include/spdk/file.h 00:02:01.554 TEST_HEADER include/spdk/ftl.h 00:02:01.554 TEST_HEADER include/spdk/hexlify.h 00:02:01.554 TEST_HEADER include/spdk/idxd.h 00:02:01.554 TEST_HEADER include/spdk/histogram_data.h 00:02:01.554 TEST_HEADER include/spdk/gpt_spec.h 00:02:01.554 TEST_HEADER include/spdk/init.h 00:02:01.554 TEST_HEADER include/spdk/ioat_spec.h 00:02:01.554 TEST_HEADER include/spdk/ioat.h 00:02:01.554 TEST_HEADER include/spdk/idxd_spec.h 00:02:01.554 TEST_HEADER include/spdk/iscsi_spec.h 00:02:01.554 TEST_HEADER include/spdk/jsonrpc.h 00:02:01.554 CC examples/interrupt_tgt/interrupt_tgt.o 00:02:01.554 TEST_HEADER include/spdk/keyring_module.h 00:02:01.554 TEST_HEADER include/spdk/json.h 00:02:01.554 TEST_HEADER include/spdk/keyring.h 00:02:01.554 TEST_HEADER include/spdk/likely.h 00:02:01.554 TEST_HEADER include/spdk/log.h 00:02:01.554 TEST_HEADER include/spdk/memory.h 00:02:01.554 TEST_HEADER include/spdk/lvol.h 
00:02:01.554 TEST_HEADER include/spdk/mmio.h 00:02:01.554 TEST_HEADER include/spdk/notify.h 00:02:01.554 TEST_HEADER include/spdk/nbd.h 00:02:01.554 TEST_HEADER include/spdk/nvme.h 00:02:01.554 TEST_HEADER include/spdk/nvme_intel.h 00:02:01.554 TEST_HEADER include/spdk/nvme_ocssd.h 00:02:01.554 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:02:01.554 TEST_HEADER include/spdk/nvme_zns.h 00:02:01.554 TEST_HEADER include/spdk/nvme_spec.h 00:02:01.554 CC app/nvmf_tgt/nvmf_main.o 00:02:01.554 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:01.554 TEST_HEADER include/spdk/nvmf_spec.h 00:02:01.554 TEST_HEADER include/spdk/nvmf.h 00:02:01.554 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:01.554 TEST_HEADER include/spdk/nvmf_transport.h 00:02:01.554 TEST_HEADER include/spdk/opal.h 00:02:01.554 CC app/spdk_dd/spdk_dd.o 00:02:01.554 TEST_HEADER include/spdk/pci_ids.h 00:02:01.554 TEST_HEADER include/spdk/pipe.h 00:02:01.554 TEST_HEADER include/spdk/opal_spec.h 00:02:01.554 TEST_HEADER include/spdk/queue.h 00:02:01.554 TEST_HEADER include/spdk/reduce.h 00:02:01.554 TEST_HEADER include/spdk/scheduler.h 00:02:01.554 TEST_HEADER include/spdk/scsi.h 00:02:01.554 TEST_HEADER include/spdk/rpc.h 00:02:01.554 CC app/iscsi_tgt/iscsi_tgt.o 00:02:01.554 TEST_HEADER include/spdk/sock.h 00:02:01.554 TEST_HEADER include/spdk/scsi_spec.h 00:02:01.554 TEST_HEADER include/spdk/stdinc.h 00:02:01.554 TEST_HEADER include/spdk/string.h 00:02:01.554 TEST_HEADER include/spdk/thread.h 00:02:01.554 TEST_HEADER include/spdk/trace_parser.h 00:02:01.554 TEST_HEADER include/spdk/tree.h 00:02:01.554 TEST_HEADER include/spdk/trace.h 00:02:01.554 TEST_HEADER include/spdk/util.h 00:02:01.554 TEST_HEADER include/spdk/ublk.h 00:02:01.554 TEST_HEADER include/spdk/uuid.h 00:02:01.554 TEST_HEADER include/spdk/vfio_user_pci.h 00:02:01.554 TEST_HEADER include/spdk/vfio_user_spec.h 00:02:01.554 TEST_HEADER include/spdk/version.h 00:02:01.554 TEST_HEADER include/spdk/vhost.h 00:02:01.554 TEST_HEADER include/spdk/vmd.h 00:02:01.554 TEST_HEADER include/spdk/xor.h 00:02:01.554 CXX test/cpp_headers/accel.o 00:02:01.554 TEST_HEADER include/spdk/zipf.h 00:02:01.554 CXX test/cpp_headers/accel_module.o 00:02:01.554 CXX test/cpp_headers/assert.o 00:02:01.554 CXX test/cpp_headers/barrier.o 00:02:01.554 CXX test/cpp_headers/bdev_module.o 00:02:01.554 CXX test/cpp_headers/bdev.o 00:02:01.554 CXX test/cpp_headers/bit_array.o 00:02:01.554 CXX test/cpp_headers/bdev_zone.o 00:02:01.554 CXX test/cpp_headers/base64.o 00:02:01.554 CXX test/cpp_headers/bit_pool.o 00:02:01.554 CXX test/cpp_headers/blob_bdev.o 00:02:01.554 CXX test/cpp_headers/blobfs_bdev.o 00:02:01.554 CXX test/cpp_headers/blob.o 00:02:01.554 CXX test/cpp_headers/conf.o 00:02:01.554 CXX test/cpp_headers/blobfs.o 00:02:01.554 CXX test/cpp_headers/cpuset.o 00:02:01.554 CXX test/cpp_headers/config.o 00:02:01.555 CXX test/cpp_headers/crc16.o 00:02:01.555 CXX test/cpp_headers/crc64.o 00:02:01.555 CXX test/cpp_headers/dma.o 00:02:01.555 CXX test/cpp_headers/crc32.o 00:02:01.555 CXX test/cpp_headers/env_dpdk.o 00:02:01.555 CXX test/cpp_headers/dif.o 00:02:01.555 CXX test/cpp_headers/endian.o 00:02:01.555 CXX test/cpp_headers/env.o 00:02:01.555 CXX test/cpp_headers/fd.o 00:02:01.555 CXX test/cpp_headers/fd_group.o 00:02:01.555 CXX test/cpp_headers/event.o 00:02:01.555 CXX test/cpp_headers/ftl.o 00:02:01.555 CXX test/cpp_headers/file.o 00:02:01.555 CXX test/cpp_headers/gpt_spec.o 00:02:01.555 CXX test/cpp_headers/hexlify.o 00:02:01.555 CC app/spdk_tgt/spdk_tgt.o 00:02:01.555 CXX test/cpp_headers/histogram_data.o 
00:02:01.555 CXX test/cpp_headers/idxd.o 00:02:01.555 CXX test/cpp_headers/idxd_spec.o 00:02:01.555 CXX test/cpp_headers/ioat.o 00:02:01.555 CXX test/cpp_headers/ioat_spec.o 00:02:01.555 CXX test/cpp_headers/init.o 00:02:01.555 CXX test/cpp_headers/json.o 00:02:01.555 CXX test/cpp_headers/jsonrpc.o 00:02:01.555 CXX test/cpp_headers/keyring.o 00:02:01.555 CXX test/cpp_headers/iscsi_spec.o 00:02:01.555 CXX test/cpp_headers/keyring_module.o 00:02:01.555 CXX test/cpp_headers/likely.o 00:02:01.555 CXX test/cpp_headers/log.o 00:02:01.555 CXX test/cpp_headers/memory.o 00:02:01.555 CXX test/cpp_headers/lvol.o 00:02:01.555 CXX test/cpp_headers/mmio.o 00:02:01.555 CXX test/cpp_headers/notify.o 00:02:01.555 CXX test/cpp_headers/nbd.o 00:02:01.555 CXX test/cpp_headers/nvme_ocssd.o 00:02:01.555 CXX test/cpp_headers/nvme_intel.o 00:02:01.555 CXX test/cpp_headers/nvme.o 00:02:01.555 CXX test/cpp_headers/nvme_ocssd_spec.o 00:02:01.555 CXX test/cpp_headers/nvme_spec.o 00:02:01.555 CXX test/cpp_headers/nvmf_cmd.o 00:02:01.555 CXX test/cpp_headers/nvme_zns.o 00:02:01.555 CXX test/cpp_headers/nvmf_fc_spec.o 00:02:01.555 CXX test/cpp_headers/nvmf.o 00:02:01.555 CXX test/cpp_headers/nvmf_transport.o 00:02:01.555 CXX test/cpp_headers/nvmf_spec.o 00:02:01.555 CXX test/cpp_headers/opal.o 00:02:01.555 CXX test/cpp_headers/opal_spec.o 00:02:01.555 CXX test/cpp_headers/pipe.o 00:02:01.555 CXX test/cpp_headers/pci_ids.o 00:02:01.555 CXX test/cpp_headers/queue.o 00:02:01.555 CC examples/ioat/perf/perf.o 00:02:01.555 CXX test/cpp_headers/reduce.o 00:02:01.828 CC test/app/histogram_perf/histogram_perf.o 00:02:01.828 CC examples/util/zipf/zipf.o 00:02:01.828 CC examples/ioat/verify/verify.o 00:02:01.828 CC app/fio/nvme/fio_plugin.o 00:02:01.828 CC test/env/memory/memory_ut.o 00:02:01.828 CC test/thread/poller_perf/poller_perf.o 00:02:01.828 CXX test/cpp_headers/rpc.o 00:02:01.828 CC test/env/pci/pci_ut.o 00:02:01.828 CC test/app/jsoncat/jsoncat.o 00:02:01.828 CC test/env/vtophys/vtophys.o 00:02:01.828 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:02:01.828 CC app/fio/bdev/fio_plugin.o 00:02:01.828 CC test/app/stub/stub.o 00:02:01.828 CC test/app/bdev_svc/bdev_svc.o 00:02:01.828 CC test/dma/test_dma/test_dma.o 00:02:01.828 LINK spdk_lspci 00:02:02.143 LINK rpc_client_test 00:02:02.143 LINK interrupt_tgt 00:02:02.143 LINK nvmf_tgt 00:02:02.143 CC test/env/mem_callbacks/mem_callbacks.o 00:02:02.143 LINK spdk_nvme_discover 00:02:02.143 LINK iscsi_tgt 00:02:02.143 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:02:02.143 CXX test/cpp_headers/scheduler.o 00:02:02.143 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:02:02.143 CXX test/cpp_headers/scsi.o 00:02:02.143 CXX test/cpp_headers/scsi_spec.o 00:02:02.143 CXX test/cpp_headers/sock.o 00:02:02.143 CXX test/cpp_headers/stdinc.o 00:02:02.143 CXX test/cpp_headers/string.o 00:02:02.143 CXX test/cpp_headers/thread.o 00:02:02.143 CXX test/cpp_headers/trace.o 00:02:02.143 CXX test/cpp_headers/trace_parser.o 00:02:02.143 CXX test/cpp_headers/tree.o 00:02:02.143 CXX test/cpp_headers/ublk.o 00:02:02.143 CXX test/cpp_headers/util.o 00:02:02.143 CXX test/cpp_headers/version.o 00:02:02.143 CXX test/cpp_headers/uuid.o 00:02:02.143 CXX test/cpp_headers/vfio_user_spec.o 00:02:02.143 CXX test/cpp_headers/vfio_user_pci.o 00:02:02.143 CXX test/cpp_headers/vhost.o 00:02:02.143 CXX test/cpp_headers/vmd.o 00:02:02.143 CXX test/cpp_headers/xor.o 00:02:02.143 CXX test/cpp_headers/zipf.o 00:02:02.143 LINK env_dpdk_post_init 00:02:02.143 LINK poller_perf 00:02:02.143 LINK spdk_trace_record 00:02:02.143 
LINK ioat_perf 00:02:02.410 LINK histogram_perf 00:02:02.410 LINK jsoncat 00:02:02.410 LINK zipf 00:02:02.410 LINK vtophys 00:02:02.410 LINK spdk_tgt 00:02:02.410 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:02:02.410 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:02:02.410 LINK verify 00:02:02.410 LINK stub 00:02:02.410 LINK bdev_svc 00:02:02.410 LINK spdk_dd 00:02:02.410 LINK spdk_trace 00:02:02.410 LINK pci_ut 00:02:02.668 LINK test_dma 00:02:02.668 LINK spdk_nvme 00:02:02.668 LINK spdk_bdev 00:02:02.668 CC test/event/event_perf/event_perf.o 00:02:02.668 LINK nvme_fuzz 00:02:02.668 CC test/event/reactor_perf/reactor_perf.o 00:02:02.668 CC test/event/reactor/reactor.o 00:02:02.668 CC test/event/app_repeat/app_repeat.o 00:02:02.668 CC test/event/scheduler/scheduler.o 00:02:02.668 LINK spdk_nvme_perf 00:02:02.668 LINK vhost_fuzz 00:02:02.668 LINK spdk_nvme_identify 00:02:02.926 CC examples/sock/hello_world/hello_sock.o 00:02:02.926 CC examples/idxd/perf/perf.o 00:02:02.926 CC examples/vmd/lsvmd/lsvmd.o 00:02:02.926 CC examples/vmd/led/led.o 00:02:02.926 LINK event_perf 00:02:02.926 LINK reactor_perf 00:02:02.926 LINK reactor 00:02:02.926 LINK spdk_top 00:02:02.926 CC examples/thread/thread/thread_ex.o 00:02:02.926 LINK mem_callbacks 00:02:02.926 CC app/vhost/vhost.o 00:02:02.926 LINK app_repeat 00:02:02.926 LINK lsvmd 00:02:02.926 LINK scheduler 00:02:02.926 LINK led 00:02:02.926 LINK memory_ut 00:02:02.926 LINK hello_sock 00:02:02.926 CC test/nvme/compliance/nvme_compliance.o 00:02:02.926 CC test/nvme/fdp/fdp.o 00:02:03.185 CC test/nvme/startup/startup.o 00:02:03.185 CC test/nvme/reset/reset.o 00:02:03.185 CC test/nvme/err_injection/err_injection.o 00:02:03.185 CC test/nvme/aer/aer.o 00:02:03.185 CC test/nvme/e2edp/nvme_dp.o 00:02:03.185 CC test/nvme/fused_ordering/fused_ordering.o 00:02:03.185 CC test/nvme/overhead/overhead.o 00:02:03.185 CC test/nvme/cuse/cuse.o 00:02:03.185 CC test/nvme/doorbell_aers/doorbell_aers.o 00:02:03.185 CC test/nvme/boot_partition/boot_partition.o 00:02:03.185 CC test/nvme/sgl/sgl.o 00:02:03.185 CC test/nvme/connect_stress/connect_stress.o 00:02:03.185 CC test/nvme/simple_copy/simple_copy.o 00:02:03.185 CC test/nvme/reserve/reserve.o 00:02:03.185 CC test/blobfs/mkfs/mkfs.o 00:02:03.185 CC test/accel/dif/dif.o 00:02:03.185 LINK idxd_perf 00:02:03.185 LINK thread 00:02:03.185 LINK vhost 00:02:03.185 CC test/lvol/esnap/esnap.o 00:02:03.185 LINK startup 00:02:03.185 LINK err_injection 00:02:03.185 LINK boot_partition 00:02:03.185 LINK connect_stress 00:02:03.185 LINK doorbell_aers 00:02:03.185 LINK fused_ordering 00:02:03.185 LINK reserve 00:02:03.185 LINK mkfs 00:02:03.185 LINK simple_copy 00:02:03.185 LINK aer 00:02:03.444 LINK nvme_compliance 00:02:03.444 LINK reset 00:02:03.444 LINK overhead 00:02:03.444 LINK nvme_dp 00:02:03.444 LINK sgl 00:02:03.444 LINK fdp 00:02:03.444 CC examples/nvme/hotplug/hotplug.o 00:02:03.444 CC examples/nvme/abort/abort.o 00:02:03.444 CC examples/nvme/cmb_copy/cmb_copy.o 00:02:03.444 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:02:03.444 CC examples/nvme/reconnect/reconnect.o 00:02:03.444 CC examples/nvme/hello_world/hello_world.o 00:02:03.444 CC examples/nvme/arbitration/arbitration.o 00:02:03.444 CC examples/nvme/nvme_manage/nvme_manage.o 00:02:03.444 LINK dif 00:02:03.702 LINK iscsi_fuzz 00:02:03.702 CC examples/accel/perf/accel_perf.o 00:02:03.702 CC examples/blob/hello_world/hello_blob.o 00:02:03.702 CC examples/blob/cli/blobcli.o 00:02:03.702 LINK cmb_copy 00:02:03.702 LINK pmr_persistence 00:02:03.702 LINK hotplug 
00:02:03.702 LINK hello_world 00:02:03.702 LINK abort 00:02:03.702 LINK arbitration 00:02:03.702 LINK reconnect 00:02:03.959 LINK nvme_manage 00:02:03.959 LINK hello_blob 00:02:03.959 CC test/bdev/bdevio/bdevio.o 00:02:03.959 LINK accel_perf 00:02:03.959 LINK blobcli 00:02:03.959 LINK cuse 00:02:04.217 LINK bdevio 00:02:04.475 CC examples/bdev/bdevperf/bdevperf.o 00:02:04.475 CC examples/bdev/hello_world/hello_bdev.o 00:02:04.733 LINK hello_bdev 00:02:04.991 LINK bdevperf 00:02:05.557 CC examples/nvmf/nvmf/nvmf.o 00:02:05.815 LINK nvmf 00:02:06.748 LINK esnap 00:02:06.748 00:02:06.748 real 0m43.702s 00:02:06.748 user 6m30.350s 00:02:06.748 sys 3m21.781s 00:02:06.748 22:18:30 make -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:02:06.748 22:18:30 make -- common/autotest_common.sh@10 -- $ set +x 00:02:06.748 ************************************ 00:02:06.748 END TEST make 00:02:06.748 ************************************ 00:02:07.007 22:18:30 -- common/autotest_common.sh@1142 -- $ return 0 00:02:07.007 22:18:30 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:02:07.007 22:18:30 -- pm/common@29 -- $ signal_monitor_resources TERM 00:02:07.007 22:18:30 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:02:07.007 22:18:30 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:07.007 22:18:30 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:02:07.007 22:18:30 -- pm/common@44 -- $ pid=3907516 00:02:07.007 22:18:30 -- pm/common@50 -- $ kill -TERM 3907516 00:02:07.007 22:18:30 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:07.007 22:18:30 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:02:07.007 22:18:30 -- pm/common@44 -- $ pid=3907518 00:02:07.007 22:18:30 -- pm/common@50 -- $ kill -TERM 3907518 00:02:07.007 22:18:30 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:07.007 22:18:30 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:02:07.007 22:18:30 -- pm/common@44 -- $ pid=3907520 00:02:07.007 22:18:30 -- pm/common@50 -- $ kill -TERM 3907520 00:02:07.007 22:18:30 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:07.007 22:18:30 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:02:07.007 22:18:30 -- pm/common@44 -- $ pid=3907542 00:02:07.007 22:18:30 -- pm/common@50 -- $ sudo -E kill -TERM 3907542 00:02:07.007 22:18:30 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:02:07.007 22:18:30 -- nvmf/common.sh@7 -- # uname -s 00:02:07.007 22:18:30 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:02:07.007 22:18:30 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:02:07.007 22:18:30 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:02:07.007 22:18:30 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:02:07.007 22:18:30 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:02:07.007 22:18:30 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:02:07.007 22:18:30 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:02:07.007 22:18:30 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:02:07.007 22:18:30 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:02:07.007 22:18:30 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:02:07.007 22:18:30 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:02:07.007 22:18:30 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:02:07.007 22:18:30 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:02:07.007 22:18:30 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:02:07.007 22:18:30 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:02:07.007 22:18:30 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:02:07.007 22:18:30 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:02:07.007 22:18:30 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:02:07.007 22:18:30 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:07.007 22:18:30 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:07.007 22:18:30 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:07.007 22:18:30 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:07.007 22:18:30 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:07.007 22:18:30 -- paths/export.sh@5 -- # export PATH 00:02:07.007 22:18:30 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:07.007 22:18:30 -- nvmf/common.sh@47 -- # : 0 00:02:07.007 22:18:30 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:02:07.007 22:18:30 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:02:07.007 22:18:30 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:02:07.007 22:18:30 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:02:07.007 22:18:30 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:02:07.007 22:18:30 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:02:07.007 22:18:30 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:02:07.007 22:18:30 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:02:07.007 22:18:30 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:02:07.007 22:18:30 -- spdk/autotest.sh@32 -- # uname -s 00:02:07.007 22:18:30 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:02:07.007 22:18:30 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:02:07.007 22:18:30 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:02:07.007 22:18:30 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:02:07.007 22:18:30 -- spdk/autotest.sh@40 -- # echo 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:02:07.007 22:18:30 -- spdk/autotest.sh@44 -- # modprobe nbd 00:02:07.007 22:18:30 -- spdk/autotest.sh@46 -- # type -P udevadm 00:02:07.007 22:18:30 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:02:07.007 22:18:30 -- spdk/autotest.sh@48 -- # udevadm_pid=3967014 00:02:07.007 22:18:30 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:02:07.007 22:18:30 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:02:07.007 22:18:30 -- pm/common@17 -- # local monitor 00:02:07.007 22:18:30 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:07.007 22:18:30 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:07.007 22:18:30 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:07.007 22:18:30 -- pm/common@21 -- # date +%s 00:02:07.007 22:18:30 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:07.007 22:18:30 -- pm/common@21 -- # date +%s 00:02:07.007 22:18:30 -- pm/common@25 -- # sleep 1 00:02:07.007 22:18:30 -- pm/common@21 -- # date +%s 00:02:07.007 22:18:30 -- pm/common@21 -- # date +%s 00:02:07.007 22:18:30 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721074710 00:02:07.007 22:18:30 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721074710 00:02:07.007 22:18:30 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721074710 00:02:07.007 22:18:30 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721074710 00:02:07.007 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721074710_collect-vmstat.pm.log 00:02:07.007 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721074710_collect-cpu-load.pm.log 00:02:07.007 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721074710_collect-cpu-temp.pm.log 00:02:07.008 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721074710_collect-bmc-pm.bmc.pm.log 00:02:07.946 22:18:31 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:02:07.946 22:18:31 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:02:07.946 22:18:31 -- common/autotest_common.sh@722 -- # xtrace_disable 00:02:07.946 22:18:31 -- common/autotest_common.sh@10 -- # set +x 00:02:07.946 22:18:31 -- spdk/autotest.sh@59 -- # create_test_list 00:02:07.946 22:18:31 -- common/autotest_common.sh@746 -- # xtrace_disable 00:02:07.946 22:18:31 -- common/autotest_common.sh@10 -- # set +x 00:02:08.206 22:18:31 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:02:08.206 22:18:31 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:08.206 22:18:31 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 
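The stop_monitor_resources trace at the top of this section and the start_monitor_resources trace just above are two halves of one pidfile handshake: each collector (collect-cpu-load, collect-vmstat, collect-cpu-temp, collect-bmc-pm) is launched in the background against the shared power/ output directory, and teardown later resolves the collect-*.pid files back to PIDs and sends SIGTERM. A minimal Bash sketch of that pattern, assuming only what the trace shows (start_monitor/stop_monitors and the vmstat payload are illustrative stand-ins, not SPDK's actual pm/common helpers):

    POWER_DIR=${POWER_DIR:-/tmp/power}   # stand-in for .../spdk/../output/power
    mkdir -p "$POWER_DIR"

    start_monitor() {                    # start_monitor <name> <command...>
        local name=$1; shift
        "$@" >"$POWER_DIR/$name.log" 2>&1 &   # run collector in the background
        echo $! >"$POWER_DIR/$name.pid"       # record its PID for teardown
    }

    stop_monitors() {                    # mirrors the kill -TERM loop above
        local pidfile
        for pidfile in "$POWER_DIR"/*.pid; do
            [[ -e $pidfile ]] || continue
            kill -TERM "$(<"$pidfile")" 2>/dev/null || true
            rm -f "$pidfile"
        done
    }

    start_monitor collect-vmstat vmstat 1    # example payload; the real collectors live in scripts/perf/pm
    sleep 5
    stop_monitors

Keeping the PID on disk rather than in a shell variable is what lets a different process (autotest.sh's cleanup trap) reap monitors started earlier in the run.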
00:02:08.206 22:18:31 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:02:08.206 22:18:31 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:08.206 22:18:31 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:02:08.206 22:18:31 -- common/autotest_common.sh@1455 -- # uname 00:02:08.206 22:18:31 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:02:08.206 22:18:31 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:02:08.206 22:18:31 -- common/autotest_common.sh@1475 -- # uname 00:02:08.206 22:18:31 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:02:08.206 22:18:31 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:02:08.206 22:18:31 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:02:08.206 22:18:31 -- spdk/autotest.sh@72 -- # hash lcov 00:02:08.206 22:18:31 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:02:08.206 22:18:31 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:02:08.206 --rc lcov_branch_coverage=1 00:02:08.206 --rc lcov_function_coverage=1 00:02:08.206 --rc genhtml_branch_coverage=1 00:02:08.206 --rc genhtml_function_coverage=1 00:02:08.206 --rc genhtml_legend=1 00:02:08.206 --rc geninfo_all_blocks=1 00:02:08.206 ' 00:02:08.206 22:18:31 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:02:08.206 --rc lcov_branch_coverage=1 00:02:08.206 --rc lcov_function_coverage=1 00:02:08.206 --rc genhtml_branch_coverage=1 00:02:08.206 --rc genhtml_function_coverage=1 00:02:08.206 --rc genhtml_legend=1 00:02:08.206 --rc geninfo_all_blocks=1 00:02:08.206 ' 00:02:08.206 22:18:31 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:02:08.206 --rc lcov_branch_coverage=1 00:02:08.206 --rc lcov_function_coverage=1 00:02:08.206 --rc genhtml_branch_coverage=1 00:02:08.206 --rc genhtml_function_coverage=1 00:02:08.206 --rc genhtml_legend=1 00:02:08.206 --rc geninfo_all_blocks=1 00:02:08.206 --no-external' 00:02:08.206 22:18:31 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:02:08.206 --rc lcov_branch_coverage=1 00:02:08.206 --rc lcov_function_coverage=1 00:02:08.206 --rc genhtml_branch_coverage=1 00:02:08.206 --rc genhtml_function_coverage=1 00:02:08.206 --rc genhtml_legend=1 00:02:08.206 --rc geninfo_all_blocks=1 00:02:08.206 --no-external' 00:02:08.206 22:18:31 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:02:08.206 lcov: LCOV version 1.14 00:02:08.206 22:18:32 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:02:12.417 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno:no functions found 00:02:12.417 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno 00:02:12.417 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:02:12.417 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno 00:02:12.417 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno:no functions found 00:02:12.417 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno 00:02:12.417 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno:no functions found 00:02:12.417 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno 00:02:12.417 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:02:12.417 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno 00:02:12.417 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:02:12.417 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno 00:02:12.417 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno:no functions found 00:02:12.417 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno 00:02:12.418 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:02:12.418 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno 00:02:12.418 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:02:12.418 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno 00:02:12.418 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno:no functions found 00:02:12.418 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno 00:02:12.418 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:02:12.418 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno 00:02:12.418 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:02:12.418 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno 00:02:12.418 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:02:12.418 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno 00:02:12.418 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:02:12.418 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno 00:02:12.418 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno:no functions found 00:02:12.418 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno 00:02:12.418 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno:no functions found 00:02:12.418 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno 00:02:12.418 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno:no functions found 00:02:12.418 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno 00:02:12.418 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:02:12.418 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno 00:02:12.418 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno:no functions found 00:02:12.418 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno 00:02:12.418 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno:no functions found 00:02:12.418 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno 00:02:12.418 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno:no functions found 00:02:12.418 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno 00:02:12.418 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno:no functions found 00:02:12.418 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno 00:02:12.418 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno:no functions found 00:02:12.418 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno 00:02:12.418 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno:no functions found 00:02:12.418 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno 00:02:12.418 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:02:12.418 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno 00:02:12.418 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno:no functions found 00:02:12.418 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno 00:02:12.418 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno:no functions found 00:02:12.418 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno 00:02:12.418 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:02:12.418 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno 00:02:12.418 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:02:12.418 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno 00:02:12.418 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno:no functions found 00:02:12.418 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno 00:02:12.418 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno:no functions found 00:02:12.418 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno 00:02:12.418 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno:no functions found 00:02:12.418 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno 00:02:12.418 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno:no functions found 00:02:12.418 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno 00:02:12.418 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:02:12.418 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno 00:02:12.418 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:02:12.418 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno 00:02:12.418 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno:no functions found 00:02:12.418 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno 00:02:12.418 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno:no functions found 00:02:12.418 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno 00:02:12.418 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno:no functions found 00:02:12.418 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno 00:02:12.418 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:02:12.418 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno 00:02:12.418 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:02:12.418 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno 00:02:12.418 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:02:12.418 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno 00:02:12.418 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno:no functions found 00:02:12.418 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno 00:02:12.418 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring.gcno:no functions found 00:02:12.418 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring.gcno 00:02:12.418 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno:no functions found 00:02:12.418 geninfo: WARNING: 
GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno 00:02:12.418 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno:no functions found 00:02:12.418 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno 00:02:12.418 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno:no functions found 00:02:12.418 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno 00:02:12.418 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:02:12.418 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno 00:02:12.418 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno:no functions found 00:02:12.418 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno 00:02:12.418 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno:no functions found 00:02:12.418 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno 00:02:12.418 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:02:12.418 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:02:12.418 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno:no functions found 00:02:12.418 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno 00:02:12.418 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:02:12.418 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno 00:02:12.418 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:02:12.418 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno 00:02:12.418 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:02:12.418 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno 00:02:12.418 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:02:12.418 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno 00:02:12.418 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno:no functions found 00:02:12.418 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno 00:02:12.418 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:02:12.418 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:02:12.418 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno:no functions found 00:02:12.418 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno 00:02:12.418 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:02:12.418 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno 00:02:12.418 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno:no functions found 00:02:12.418 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno 00:02:12.418 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:02:12.418 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno 00:02:12.418 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:02:12.418 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno 00:02:12.418 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:02:12.418 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno 00:02:12.418 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:02:12.418 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno 00:02:12.418 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno:no functions found 00:02:12.418 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno 00:02:12.418 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:02:12.418 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno 00:02:12.418 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno:no functions found 00:02:12.418 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno 00:02:12.418 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno:no functions found 00:02:12.418 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno 00:02:12.418 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:02:12.418 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno 00:02:12.418 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno:no functions found 00:02:12.418 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno 00:02:12.418 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:02:12.418 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno 00:02:12.418 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno:no functions found 00:02:12.418 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno 00:02:12.418 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:02:12.418 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno 00:02:12.419 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno:no functions found 00:02:12.419 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno 00:02:12.419 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno:no functions found 00:02:12.419 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno 00:02:12.419 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:02:12.419 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno 00:02:12.419 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno:no functions found 00:02:12.419 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno 00:02:12.419 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno:no functions found 00:02:12.419 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno 00:02:12.419 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno:no functions found 00:02:12.419 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno 00:02:12.419 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno:no functions found 00:02:12.419 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno 00:02:12.419 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno:no functions found 00:02:12.419 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno 00:02:12.419 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno:no functions found 00:02:12.419 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno 00:02:12.419 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:02:12.419 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno 00:02:12.419 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:02:12.419 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno 00:02:12.419 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno:no functions found 00:02:12.419 geninfo: 
WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno 00:02:12.419 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno:no functions found 00:02:12.419 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno 00:02:12.419 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno:no functions found 00:02:12.419 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno 00:02:12.419 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno:no functions found 00:02:12.419 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno 00:02:27.297 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:02:27.297 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:02:32.570 22:18:56 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:02:32.570 22:18:56 -- common/autotest_common.sh@722 -- # xtrace_disable 00:02:32.570 22:18:56 -- common/autotest_common.sh@10 -- # set +x 00:02:32.864 22:18:56 -- spdk/autotest.sh@91 -- # rm -f 00:02:32.864 22:18:56 -- spdk/autotest.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:02:35.401 0000:5e:00.0 (8086 0a54): Already using the nvme driver 00:02:35.401 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:02:35.401 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:02:35.401 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:02:35.401 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:02:35.401 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:02:35.401 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:02:35.401 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:02:35.401 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:02:35.401 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:02:35.401 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:02:35.401 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:02:35.401 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:02:35.401 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:02:35.660 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:02:35.660 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:02:35.660 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:02:35.660 22:18:59 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:02:35.660 22:18:59 -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:02:35.660 22:18:59 -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:02:35.660 22:18:59 -- common/autotest_common.sh@1670 -- # local nvme bdf 00:02:35.660 22:18:59 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:02:35.660 22:18:59 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:02:35.660 22:18:59 -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:02:35.660 22:18:59 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:02:35.660 22:18:59 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:02:35.660 22:18:59 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:02:35.660 22:18:59 -- 
spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:02:35.660 22:18:59 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:02:35.660 22:18:59 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:02:35.660 22:18:59 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:02:35.660 22:18:59 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:02:35.660 No valid GPT data, bailing 00:02:35.660 22:18:59 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:02:35.660 22:18:59 -- scripts/common.sh@391 -- # pt= 00:02:35.660 22:18:59 -- scripts/common.sh@392 -- # return 1 00:02:35.660 22:18:59 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:02:35.660 1+0 records in 00:02:35.660 1+0 records out 00:02:35.660 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0022862 s, 459 MB/s 00:02:35.660 22:18:59 -- spdk/autotest.sh@118 -- # sync 00:02:35.660 22:18:59 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:02:35.660 22:18:59 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:02:35.660 22:18:59 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:02:40.931 22:19:03 -- spdk/autotest.sh@124 -- # uname -s 00:02:40.931 22:19:03 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:02:40.931 22:19:03 -- spdk/autotest.sh@125 -- # run_test setup.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:02:40.931 22:19:03 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:02:40.931 22:19:03 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:02:40.932 22:19:03 -- common/autotest_common.sh@10 -- # set +x 00:02:40.932 ************************************ 00:02:40.932 START TEST setup.sh 00:02:40.932 ************************************ 00:02:40.932 22:19:03 setup.sh -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:02:40.932 * Looking for test storage... 00:02:40.932 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:02:40.932 22:19:04 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:02:40.932 22:19:04 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:02:40.932 22:19:04 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:02:40.932 22:19:04 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:02:40.932 22:19:04 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:02:40.932 22:19:04 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:02:40.932 ************************************ 00:02:40.932 START TEST acl 00:02:40.932 ************************************ 00:02:40.932 22:19:04 setup.sh.acl -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:02:40.932 * Looking for test storage... 
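The wipe step traced above is gated twice: get_zoned_devs/is_block_zoned consults /sys/block/<dev>/queue/zoned so zoned namespaces are never scrubbed, and block_in_use asks scripts/spdk-gpt.py for a partition table first; the "No valid GPT data, bailing" line is what clears /dev/nvme0n1 for the 1 MiB dd. A minimal sketch of that gate, assuming the same sysfs layout (helper names mirror the trace; the real implementations live in autotest_common.sh and scripts/common.sh):

    shopt -s extglob nullglob

    is_block_zoned() {                  # mirrors the is_block_zoned trace above
        local device=$1
        [[ -e /sys/block/$device/queue/zoned ]] || return 1
        [[ $(</sys/block/$device/queue/zoned) != none ]]   # "none" means conventional
    }

    declare -A zoned_devs=()
    for nvme in /sys/block/nvme*; do
        dev=${nvme##*/}
        is_block_zoned "$dev" && zoned_devs[$dev]=1
    done

    # Whole-namespace nodes only (n1, n2, ...); the !(*p*) extglob skips partitions:
    for dev in /dev/nvme*n!(*p*); do
        [[ -n ${zoned_devs[${dev##*/}]:-} ]] && continue   # sequential-write rules make a blind dd fail
        echo "would wipe first MiB of $dev"
        # dd if=/dev/zero of="$dev" bs=1M count=1          # destructive; left commented in this sketch
    done

Zeroing only the first MiB is enough here because it destroys the GPT header and any stale filesystem superblock without waiting on a full-device write.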
00:02:40.932 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:02:40.932 22:19:04 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:02:40.932 22:19:04 setup.sh.acl -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:02:40.932 22:19:04 setup.sh.acl -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:02:40.932 22:19:04 setup.sh.acl -- common/autotest_common.sh@1670 -- # local nvme bdf 00:02:40.932 22:19:04 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:02:40.932 22:19:04 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:02:40.932 22:19:04 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:02:40.932 22:19:04 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:02:40.932 22:19:04 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:02:40.932 22:19:04 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:02:40.932 22:19:04 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:02:40.932 22:19:04 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:02:40.932 22:19:04 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:02:40.932 22:19:04 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:02:40.932 22:19:04 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:02:40.932 22:19:04 setup.sh.acl -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:02:43.465 22:19:07 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:02:43.465 22:19:07 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:02:43.465 22:19:07 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:43.465 22:19:07 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:02:43.465 22:19:07 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:02:43.465 22:19:07 setup.sh.acl -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:02:45.999 Hugepages 00:02:45.999 node hugesize free / total 00:02:45.999 22:19:09 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:02:45.999 22:19:09 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:02:45.999 22:19:09 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:45.999 22:19:09 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:02:45.999 22:19:09 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:02:45.999 22:19:09 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:45.999 22:19:09 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:02:45.999 22:19:09 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:02:45.999 22:19:09 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:45.999 00:02:45.999 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:45.999 22:19:09 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:02:45.999 22:19:09 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:02:45.999 22:19:09 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:46.257 22:19:09 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.0 == *:*:*.* ]] 00:02:46.257 22:19:09 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:46.257 22:19:09 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:46.257 22:19:09 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:46.257 22:19:09 setup.sh.acl -- setup/acl.sh@19 
-- # [[ 0000:00:04.1 == *:*:*.* ]] 00:02:46.257 22:19:09 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:46.257 22:19:09 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:46.257 22:19:09 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:46.257 22:19:09 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.2 == *:*:*.* ]] 00:02:46.257 22:19:09 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:46.257 22:19:09 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:46.257 22:19:09 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:46.257 22:19:09 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.3 == *:*:*.* ]] 00:02:46.257 22:19:09 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:46.257 22:19:09 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:46.257 22:19:09 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:46.257 22:19:09 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.4 == *:*:*.* ]] 00:02:46.257 22:19:09 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:46.257 22:19:09 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:46.257 22:19:09 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:46.257 22:19:09 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.5 == *:*:*.* ]] 00:02:46.257 22:19:09 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:46.257 22:19:09 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:46.257 22:19:09 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:46.257 22:19:09 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.6 == *:*:*.* ]] 00:02:46.257 22:19:09 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:46.257 22:19:09 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:46.257 22:19:09 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:46.257 22:19:09 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.7 == *:*:*.* ]] 00:02:46.257 22:19:09 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:46.257 22:19:09 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:46.257 22:19:09 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:46.257 22:19:10 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:5e:00.0 == *:*:*.* ]] 00:02:46.257 22:19:10 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:02:46.257 22:19:10 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\5\e\:\0\0\.\0* ]] 00:02:46.257 22:19:10 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:02:46.257 22:19:10 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:02:46.257 22:19:10 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:46.257 22:19:10 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.0 == *:*:*.* ]] 00:02:46.257 22:19:10 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:46.257 22:19:10 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:46.257 22:19:10 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:46.257 22:19:10 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.1 == *:*:*.* ]] 00:02:46.257 22:19:10 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:46.257 22:19:10 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:46.257 22:19:10 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:46.257 22:19:10 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.2 == *:*:*.* ]] 00:02:46.257 22:19:10 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme 
]] 00:02:46.257 22:19:10 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:46.257 22:19:10 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:46.257 22:19:10 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.3 == *:*:*.* ]] 00:02:46.257 22:19:10 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:46.257 22:19:10 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:46.257 22:19:10 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:46.257 22:19:10 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.4 == *:*:*.* ]] 00:02:46.257 22:19:10 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:46.257 22:19:10 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:46.257 22:19:10 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:46.257 22:19:10 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.5 == *:*:*.* ]] 00:02:46.257 22:19:10 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:46.257 22:19:10 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:46.257 22:19:10 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:46.257 22:19:10 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.6 == *:*:*.* ]] 00:02:46.257 22:19:10 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:46.257 22:19:10 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:46.257 22:19:10 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:46.257 22:19:10 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.7 == *:*:*.* ]] 00:02:46.257 22:19:10 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:46.257 22:19:10 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:46.257 22:19:10 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:46.257 22:19:10 setup.sh.acl -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:02:46.257 22:19:10 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:02:46.257 22:19:10 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:02:46.257 22:19:10 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:02:46.257 22:19:10 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:02:46.257 ************************************ 00:02:46.257 START TEST denied 00:02:46.257 ************************************ 00:02:46.257 22:19:10 setup.sh.acl.denied -- common/autotest_common.sh@1123 -- # denied 00:02:46.257 22:19:10 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:5e:00.0' 00:02:46.257 22:19:10 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:5e:00.0' 00:02:46.257 22:19:10 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:02:46.257 22:19:10 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:02:46.257 22:19:10 setup.sh.acl.denied -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:02:49.535 0000:5e:00.0 (8086 0a54): Skipping denied controller at 0000:5e:00.0 00:02:49.535 22:19:12 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:5e:00.0 00:02:49.535 22:19:12 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver 00:02:49.535 22:19:12 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:02:49.535 22:19:12 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:5e:00.0 ]] 00:02:49.535 22:19:12 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:5e:00.0/driver 00:02:49.535 22:19:12 setup.sh.acl.denied -- 
setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:02:49.535 22:19:12 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:02:49.535 22:19:12 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:02:49.535 22:19:12 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:02:49.535 22:19:12 setup.sh.acl.denied -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:02:52.846 00:02:52.846 real 0m6.319s 00:02:52.846 user 0m2.018s 00:02:52.846 sys 0m3.574s 00:02:52.846 22:19:16 setup.sh.acl.denied -- common/autotest_common.sh@1124 -- # xtrace_disable 00:02:52.846 22:19:16 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:02:52.846 ************************************ 00:02:52.846 END TEST denied 00:02:52.846 ************************************ 00:02:52.846 22:19:16 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:02:52.846 22:19:16 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:02:52.846 22:19:16 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:02:52.846 22:19:16 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:02:52.846 22:19:16 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:02:52.846 ************************************ 00:02:52.846 START TEST allowed 00:02:52.846 ************************************ 00:02:52.846 22:19:16 setup.sh.acl.allowed -- common/autotest_common.sh@1123 -- # allowed 00:02:52.846 22:19:16 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:5e:00.0 00:02:52.846 22:19:16 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:5e:00.0 .*: nvme -> .*' 00:02:52.846 22:19:16 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:02:52.846 22:19:16 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:02:52.847 22:19:16 setup.sh.acl.allowed -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:02:57.032 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:02:57.032 22:19:20 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 00:02:57.032 22:19:20 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:02:57.032 22:19:20 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:02:57.032 22:19:20 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:02:57.032 22:19:20 setup.sh.acl.allowed -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:02:59.636 00:02:59.636 real 0m6.525s 00:02:59.636 user 0m1.884s 00:02:59.636 sys 0m3.725s 00:02:59.636 22:19:23 setup.sh.acl.allowed -- common/autotest_common.sh@1124 -- # xtrace_disable 00:02:59.636 22:19:23 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:02:59.636 ************************************ 00:02:59.636 END TEST allowed 00:02:59.636 ************************************ 00:02:59.636 22:19:23 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:02:59.636 00:02:59.636 real 0m19.026s 00:02:59.636 user 0m6.233s 00:02:59.636 sys 0m11.328s 00:02:59.636 22:19:23 setup.sh.acl -- common/autotest_common.sh@1124 -- # xtrace_disable 00:02:59.636 22:19:23 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:02:59.636 ************************************ 00:02:59.636 END TEST acl 00:02:59.636 ************************************ 00:02:59.636 22:19:23 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:02:59.636 22:19:23 setup.sh -- 
setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:02:59.636 22:19:23 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:02:59.636 22:19:23 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:02:59.636 22:19:23 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:02:59.636 ************************************ 00:02:59.636 START TEST hugepages 00:02:59.636 ************************************ 00:02:59.636 22:19:23 setup.sh.hugepages -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:02:59.636 * Looking for test storage... 00:02:59.636 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:02:59.636 22:19:23 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:02:59.636 22:19:23 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:02:59.636 22:19:23 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:02:59.636 22:19:23 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:02:59.636 22:19:23 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:02:59.636 22:19:23 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:02:59.636 22:19:23 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:02:59.636 22:19:23 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:02:59.636 22:19:23 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:02:59.636 22:19:23 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:02:59.636 22:19:23 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:59.636 22:19:23 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:59.636 22:19:23 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:59.636 22:19:23 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:02:59.636 22:19:23 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:59.636 22:19:23 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:59.636 22:19:23 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:59.636 22:19:23 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 173359720 kB' 'MemAvailable: 176232784 kB' 'Buffers: 3896 kB' 'Cached: 10194736 kB' 'SwapCached: 0 kB' 'Active: 7208452 kB' 'Inactive: 3507524 kB' 'Active(anon): 6816444 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3507524 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 520656 kB' 'Mapped: 217620 kB' 'Shmem: 6299100 kB' 'KReclaimable: 235968 kB' 'Slab: 825096 kB' 'SReclaimable: 235968 kB' 'SUnreclaim: 589128 kB' 'KernelStack: 20480 kB' 'PageTables: 9208 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 101982028 kB' 'Committed_AS: 8351792 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315340 kB' 'VmallocChunk: 0 kB' 'Percpu: 79104 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 
'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 3042260 kB' 'DirectMap2M: 16560128 kB' 'DirectMap1G: 182452224 kB'
[00:02:59.636-00:02:59.638 22:19:23 setup.sh.hugepages -- setup/common.sh@31-32: the get_meminfo loop walks every /proc/meminfo key from MemTotal through HugePages_Surp; each non-matching key logs the same four records (IFS=': ', read -r var val _, [[ <key> == \H\u\g\e\p\a\g\e\s\i\z\e ]], continue) -- repeats elided]
00:02:59.638 22:19:23 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:02:59.638 22:19:23 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048
00:02:59.638 22:19:23 setup.sh.hugepages -- setup/common.sh@33 -- # return 0
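The records above are SPDK's setup/common.sh get_meminfo helper running under xtrace: it splits each /proc/meminfo line on ': ', skips every key that is not the one requested (the logged continue branches), and echoes the value of the first match -- here 2048 for Hugepagesize. A minimal standalone sketch of that pattern (the function name and final call are illustrative, not the exact SPDK source):

#!/usr/bin/env bash
# Sketch: look up one key in /proc/meminfo the way the traced loop does.
get_meminfo_sketch() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue   # non-matching key -> next record
        echo "$val"                        # value only, e.g. 2048
        return 0
    done < /proc/meminfo
    return 1                               # key not present
}

get_meminfo_sketch Hugepagesize            # prints 2048 on this machine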
00:02:59.638 22:19:23 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048
00:02:59.638 22:19:23 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
00:02:59.638 22:19:23 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages
00:02:59.638 22:19:23 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC
00:02:59.638 22:19:23 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM
00:02:59.638 22:19:23 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE
00:02:59.638 22:19:23 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE
00:02:59.638 22:19:23 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes
00:02:59.638 22:19:23 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node
00:02:59.638 22:19:23 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:02:59.638 22:19:23 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048
00:02:59.638 22:19:23 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:02:59.638 22:19:23 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:02:59.638 22:19:23 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=2
00:02:59.638 22:19:23 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:02:59.638 22:19:23 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp
00:02:59.638 22:19:23 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp
00:02:59.638 22:19:23 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}"
00:02:59.638 22:19:23 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:02:59.638 22:19:23 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0
00:02:59.638 22:19:23 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:02:59.638 22:19:23 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0
00:02:59.638 22:19:23 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}"
00:02:59.638 22:19:23 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:02:59.638 22:19:23 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0
00:02:59.638 22:19:23 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:02:59.638 22:19:23 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0
00:02:59.638 22:19:23 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes
00:02:59.638 22:19:23 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes
00:02:59.638 22:19:23 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup
00:02:59.638 22:19:23 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:02:59.638 22:19:23 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable
00:02:59.638 22:19:23 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:02:59.638 ************************************
00:02:59.638 START TEST default_setup
00:02:59.638 ************************************
00:02:59.638 22:19:23 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1123 -- # default_setup
00:02:59.638 22:19:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0
00:02:59.638 22:19:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152
00:02:59.638 22:19:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:02:59.638 22:19:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift
00:02:59.638 22:19:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0')
00:02:59.638 22:19:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids
00:02:59.638 22:19:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:02:59.638 22:19:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:02:59.638 22:19:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:02:59.638 22:19:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0')
00:02:59.638 22:19:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes
00:02:59.638 22:19:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:02:59.638 22:19:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:02:59.638 22:19:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=()
00:02:59.638 22:19:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test
00:02:59.638 22:19:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:02:59.638 22:19:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:02:59.638 22:19:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024
00:02:59.638 22:19:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0
00:02:59.638 22:19:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output
00:02:59.638 22:19:23 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]]
00:02:59.638 22:19:23 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
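The get_test_nr_hugepages trace above reduces to one line of arithmetic: a 2097152 kB request divided by the 2048 kB default hugepage size gives nr_hugepages=1024, and get_test_nr_hugepages_per_node pins all of it to the single user-supplied node (node 0). A sketch of that computation, consistent with the logged values (variable names mirror the trace; this is not the exact SPDK source):

size=2097152                                  # requested, in kB
default_hugepages=2048                        # from 'Hugepagesize: 2048 kB'
nr_hugepages=$(( size / default_hugepages ))  # 2097152 / 2048 = 1024
nodes_test[0]=$nr_hugepages                   # one node id -> all 1024 pages on node 0
echo "nr_hugepages=$nr_hugepages"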
00:03:02.174 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci
00:03:02.174 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci
00:03:02.174 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci
00:03:02.174 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci
00:03:02.174 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci
00:03:02.174 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci
00:03:02.174 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci
00:03:02.174 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci
00:03:02.174 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci
00:03:02.174 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci
00:03:02.174 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci
00:03:02.174 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci
00:03:02.174 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci
00:03:02.174 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci
00:03:02.174 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci
00:03:02.174 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci
00:03:02.744 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci
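The setup.sh output above shows every ioatdma channel and the 0000:5e:00.0 NVMe controller being handed to vfio-pci. The log does not show how the script performs the rebind; the standard sysfs mechanism for moving one PCI function to vfio-pci looks like this (illustrative sketch only, not the setup.sh source):

bdf=0000:00:04.7                                # one of the devices above
dev=/sys/bus/pci/devices/$bdf

modprobe vfio-pci
if [[ -e $dev/driver ]]; then
    echo "$bdf" > "$dev/driver/unbind"          # detach from ioatdma
fi
echo vfio-pci > "$dev/driver_override"          # only vfio-pci may claim it
echo "$bdf" > /sys/bus/pci/drivers_probe        # reprobe -> binds to vfio-pci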
00:03:03.009 22:19:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages
00:03:03.009 22:19:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node
00:03:03.009 22:19:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t
00:03:03.009 22:19:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s
00:03:03.009 22:19:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp
00:03:03.009 22:19:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv
00:03:03.009 22:19:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon
00:03:03.009 22:19:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:03.009 22:19:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:03.009 22:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:03.009 22:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:03:03.010 22:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:03:03.010 22:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:03:03.010 22:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:03.010 22:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:03.010 22:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:03.010 22:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:03:03.010 22:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:03.010 22:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:03:03.010 22:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:03:03.010 22:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 175498472 kB' 'MemAvailable: 178371520 kB' 'Buffers: 3896 kB' 'Cached: 10194836 kB' 'SwapCached: 0 kB' 'Active: 7226720 kB' 'Inactive: 3507524 kB' 'Active(anon): 6834712 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3507524 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 538856 kB' 'Mapped: 217764 kB' 'Shmem: 6299200 kB' 'KReclaimable: 235936 kB' 'Slab: 824276 kB' 'SReclaimable: 235936 kB' 'SUnreclaim: 588340 kB' 'KernelStack: 20608 kB' 'PageTables: 9352 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 8369060 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315292 kB' 'VmallocChunk: 0 kB' 'Percpu: 79104 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3042260 kB' 'DirectMap2M: 16560128 kB' 'DirectMap1G: 182452224 kB'
[00:03:03.010-00:03:03.011 22:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@31-32: key scan against \A\n\o\n\H\u\g\e\P\a\g\e\s; every key from MemTotal through HardwareCorrupted takes the continue branch -- repeats elided]
00:03:03.011 22:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:03.011 22:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:03:03.011 22:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:03:03.011 22:19:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0
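verify_nr_hugepages collects AnonHugePages here exactly the way it is about to collect HugePages_Surp and HugePages_Rsvd below, and the snapshots above report 0 for each against HugePages_Total/HugePages_Free of 1024. Equivalent spot-checks for those keys (plain awk one-liners, not SPDK's code):

for key in AnonHugePages HugePages_Surp HugePages_Rsvd; do
    awk -v k="$key:" '$1 == k {print k, $2; exit}' /proc/meminfo
done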
setup/common.sh@17 -- # local get=HugePages_Surp 00:03:03.011 22:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:03.011 22:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:03.011 22:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:03.011 22:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:03.011 22:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:03.011 22:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:03.011 22:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:03.011 22:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:03.011 22:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.011 22:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.011 22:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 175500420 kB' 'MemAvailable: 178373468 kB' 'Buffers: 3896 kB' 'Cached: 10194840 kB' 'SwapCached: 0 kB' 'Active: 7226112 kB' 'Inactive: 3507524 kB' 'Active(anon): 6834104 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3507524 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 538256 kB' 'Mapped: 217748 kB' 'Shmem: 6299204 kB' 'KReclaimable: 235936 kB' 'Slab: 824308 kB' 'SReclaimable: 235936 kB' 'SUnreclaim: 588372 kB' 'KernelStack: 20592 kB' 'PageTables: 9288 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 8369080 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315260 kB' 'VmallocChunk: 0 kB' 'Percpu: 79104 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3042260 kB' 'DirectMap2M: 16560128 kB' 'DirectMap1G: 182452224 kB' 00:03:03.011 22:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.011 22:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.011 22:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.011 22:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.011 22:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.011 22:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.011 22:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.011 22:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.011 22:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.011 22:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.011 22:19:26 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:03:03.011 22:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.011 22:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.011 22:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.011 22:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.011 22:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.011 22:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.011 22:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.011 22:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.011 22:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.011 22:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.011 22:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.011 22:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.011 22:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.011 22:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.011 22:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.011 22:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.011 22:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.011 22:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.011 22:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.011 22:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.011 22:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.011 22:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.011 22:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.011 22:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.011 22:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.011 22:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.011 22:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.011 22:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.011 22:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.011 22:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.011 22:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.011 22:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.011 22:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.011 22:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.011 22:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.011 22:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.011 22:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.012 22:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.012 22:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.012 22:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.012 22:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.012 22:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.012 22:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.012 22:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.012 22:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.012 22:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.012 22:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.012 22:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.012 22:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.012 22:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.012 22:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.012 22:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.012 22:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.012 22:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.012 22:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.012 22:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.012 22:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.012 22:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.012 22:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.012 22:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.012 22:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.012 22:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.012 22:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.012 22:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.012 22:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.012 22:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.012 22:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.012 22:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.012 22:19:26 setup.sh.hugepages.default_setup 
-- setup/common.sh@31 -- # read -r var val _ 00:03:03.012 22:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.012 22:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.012 22:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.012 22:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.012 22:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.012 22:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.012 22:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.012 22:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.012 22:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.012 22:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.012 22:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.012 22:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.012 22:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.012 22:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.012 22:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.012 22:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.012 22:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.012 22:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.012 22:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.012 22:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.012 22:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.012 22:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.012 22:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.012 22:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.012 22:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.012 22:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.012 22:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.012 22:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.012 22:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.012 22:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.012 22:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.012 22:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.012 22:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.012 22:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # 
00:03:03.012 22:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@31-32 [trace condensed: the IFS=': ' / read loop skips every /proc/meminfo key that is not HugePages_Surp (SecPageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, Vmalloc*, Percpu, HardwareCorrupted, *HugePages*, *PmdMapped, Cma*, Unaccepted, HugePages_Total, HugePages_Free, HugePages_Rsvd), emitting one "continue" per key]
00:03:03.013 22:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:03:03.013 22:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:03:03.013 22:19:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0
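The lines above are the tail of get_meminfo scanning /proc/meminfo for HugePages_Surp: each line is split on ': ' by read, the key is compared literally against the requested name and skipped with continue until the match, whose value is echoed back to the caller (surp=0 here). A minimal sketch of the same pattern, with illustrative names -- the real helper is get_meminfo in setup/common.sh and this is not its verbatim source:

    shopt -s extglob   # needed for the +([0-9]) pattern below

    get_meminfo_sketch() {   # illustrative stand-in for setup/common.sh:get_meminfo
        local get=$1 node=${2:-} var val _ line
        local mem_f=/proc/meminfo
        [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo   # per-node query
        local -a mem
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")   # per-node files prefix every line with "Node N "
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"   # split "Key: value kB"
            [[ $var == "$get" ]] || continue         # each skipped key logs one "continue" above
            echo "$val"                              # e.g. 0 for HugePages_Surp
            return 0
        done
        return 1
    }

    surp=$(get_meminfo_sketch HugePages_Surp)   # -> 0 in this run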
00:03:03.013 22:19:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:03.013 22:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:03.013 22:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:03:03.013 22:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:03.013 22:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:03.013 22:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:03:03.013 22:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:03.013 22:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 175500488 kB' 'MemAvailable: 178373536 kB' 'Buffers: 3896 kB' 'Cached: 10194856 kB' 'SwapCached: 0 kB' 'Active: 7226116 kB' 'Inactive: 3507524 kB' 'Active(anon): 6834108 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3507524 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 538260 kB' 'Mapped: 217748 kB' 'Shmem: 6299220 kB' 'KReclaimable: 235936 kB' 'Slab: 824308 kB' 'SReclaimable: 235936 kB' 'SUnreclaim: 588372 kB' 'KernelStack: 20592 kB' 'PageTables: 9288 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 8369100 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315260 kB' 'VmallocChunk: 0 kB' 'Percpu: 79104 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3042260 kB' 'DirectMap2M: 16560128 kB' 'DirectMap1G: 182452224 kB'
00:03:03.013 22:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@31-32 [trace condensed: the IFS=': ' / read loop skips every /proc/meminfo key that is not HugePages_Rsvd, from MemTotal through HugePages_Free, emitting one "continue" per key]
00:03:03.015 22:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:03:03.015 22:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:03:03.015 22:19:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0
00:03:03.015 22:19:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:03:03.015 nr_hugepages=1024
00:03:03.015 22:19:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:03.015 resv_hugepages=0
00:03:03.015 22:19:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:03.015 surplus_hugepages=0
00:03:03.015 22:19:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:03.015 anon_hugepages=0
00:03:03.015 22:19:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:03.015 22:19:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
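The four key=value lines and the two arithmetic checks above are the accounting step: the pool the kernel reports must add up to what default_setup requested plus any reserved and surplus pages. Sketched with the hypothetical get_meminfo_sketch from above and the values observed in this run:

    nr_hugepages=1024                             # requested by default_setup
    surp=$(get_meminfo_sketch HugePages_Surp)     # 0 in this run
    resv=$(get_meminfo_sketch HugePages_Rsvd)     # 0 in this run
    total=$(get_meminfo_sketch HugePages_Total)   # 1024 in this run
    (( total == nr_hugepages + surp + resv )) ||
        { echo "hugepage accounting mismatch" >&2; exit 1; }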
00:03:03.015 22:19:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:03.015 22:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:03.015 22:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:03:03.015 22:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:03.015 22:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:03:03.015 22:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:03.015 22:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 175500252 kB' 'MemAvailable: 178373300 kB' 'Buffers: 3896 kB' 'Cached: 10194880 kB' 'SwapCached: 0 kB' 'Active: 7225980 kB' 'Inactive: 3507524 kB' 'Active(anon): 6833972 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3507524 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 538096 kB' 'Mapped: 217688 kB' 'Shmem: 6299244 kB' 'KReclaimable: 235936 kB' 'Slab: 824284 kB' 'SReclaimable: 235936 kB' 'SUnreclaim: 588348 kB' 'KernelStack: 20528 kB' 'PageTables: 9128 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 8369124 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315260 kB' 'VmallocChunk: 0 kB' 'Percpu: 79104 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3042260 kB' 'DirectMap2M: 16560128 kB' 'DirectMap1G: 182452224 kB'
00:03:03.015 22:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@31-32 [trace condensed: the IFS=': ' / read loop skips every /proc/meminfo key that is not HugePages_Total, from MemTotal through Unaccepted, emitting one "continue" per key]
00:03:03.016 22:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024
00:03:03.016 22:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:03:03.016 22:19:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:03.016 22:19:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes
00:03:03.016 22:19:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node
00:03:03.016 22:19:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:03.016 22:19:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:03:03.016 22:19:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:03.016 22:19:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:03:03.016 22:19:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:03.017 22:19:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
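get_nodes, traced above, discovers the NUMA topology by globbing sysfs and finds two nodes on this machine, with the whole 1024-page pool on node 0. A sketch of that enumeration, under the assumption that the per-node count comes from the standard per-node hugepages sysfs attribute (the traced script may obtain it differently):

    shopt -s extglob
    declare -a nodes_sys
    for node in /sys/devices/system/node/node+([0-9]); do
        id=${node##*node}   # "node0" -> 0, "node1" -> 1
        # Assumed source of the per-node count; illustrative only.
        nodes_sys[id]=$(< "$node/hugepages/hugepages-2048kB/nr_hugepages")
    done
    no_nodes=${#nodes_sys[@]}   # 2 here: node0 -> 1024, node1 -> 0
    (( no_nodes > 0 )) || exit 1

Each node is then checked individually with the same meminfo scanner, pointed at the node's own meminfo file, as the trace below shows for node 0.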
00:03:03.017 22:19:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:03.017 22:19:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:03.017 22:19:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:03.017 22:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:03.017 22:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0
00:03:03.017 22:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:03.017 22:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:03.017 22:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:03:03.017 22:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:03.017 22:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97662684 kB' 'MemFree: 86101084 kB' 'MemUsed: 11561600 kB' 'SwapCached: 0 kB' 'Active: 5031388 kB' 'Inactive: 3335448 kB' 'Active(anon): 4873848 kB' 'Inactive(anon): 0 kB' 'Active(file): 157540 kB' 'Inactive(file): 3335448 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8192492 kB' 'Mapped: 84888 kB' 'AnonPages: 177556 kB' 'Shmem: 4699504 kB' 'KernelStack: 10872 kB' 'PageTables: 4424 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 126920 kB' 'Slab: 392440 kB' 'SReclaimable: 126920 kB' 'SUnreclaim: 265520 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
00:03:03.017 22:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@31-32 [trace condensed: the IFS=': ' / read loop is skipping node0 meminfo keys that are not HugePages_Surp (MemTotal through KReclaimable so far), emitting one "continue" per key ...]
# [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.017 22:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.017 22:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.017 22:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.017 22:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.017 22:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.018 22:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.018 22:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.018 22:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.018 22:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.018 22:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.018 22:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.018 22:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.018 22:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.018 22:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.018 22:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.018 22:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.018 22:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.018 22:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.018 22:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.018 22:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.018 22:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.018 22:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.018 22:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.018 22:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.018 22:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.018 22:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.018 22:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.018 22:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.018 22:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.018 22:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.018 22:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.018 22:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.018 22:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.018 22:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 
00:03:03.018 22:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.018 22:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.018 22:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.018 22:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.018 22:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.018 22:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.018 22:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.018 22:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.018 22:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.018 22:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.018 22:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:03.018 22:19:26 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:03.018 22:19:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:03.018 22:19:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:03.018 22:19:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:03.018 22:19:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:03.018 22:19:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:03.018 node0=1024 expecting 1024 00:03:03.018 22:19:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:03.018 00:03:03.018 real 0m3.550s 00:03:03.018 user 0m1.032s 00:03:03.018 sys 0m1.768s 00:03:03.018 22:19:26 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:03.018 22:19:26 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x 00:03:03.018 ************************************ 00:03:03.018 END TEST default_setup 00:03:03.018 ************************************ 00:03:03.018 22:19:26 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:03:03.018 22:19:26 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:03:03.018 22:19:26 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:03.018 22:19:26 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:03.018 22:19:26 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:03.018 ************************************ 00:03:03.018 START TEST per_node_1G_alloc 00:03:03.018 ************************************ 00:03:03.018 22:19:26 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1123 -- # per_node_1G_alloc 00:03:03.018 22:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=, 00:03:03.018 22:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1 00:03:03.018 22:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:03:03.018 22:19:26 setup.sh.hugepages.per_node_1G_alloc -- 
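[editor's note: the per_node_1G_alloc trace that follows first converts the requested size into a per-node hugepage count via get_test_nr_hugepages. A minimal sketch of that arithmetic, reconstructed from the xtrace below (variable names mirror the trace; this is an approximation, not SPDK's verbatim source):

    declare -a nodes_test
    size=1048576             # requested kB (1 GiB), from "get_test_nr_hugepages 1048576 0 1"
    default_hugepages=2048   # hugepage size in kB, per "Hugepagesize: 2048 kB" in the meminfo snapshots
    nr_hugepages=$((size / default_hugepages))   # 1048576 / 2048 = 512 pages
    for node in 0 1; do      # the two NUMA nodes passed in (HUGENODE=0,1)
        nodes_test[node]=$nr_hugepages           # 512 per node, 1024 total
    done
]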
00:03:03.018 22:19:26 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1123 -- # per_node_1G_alloc
00:03:03.018 22:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=,
00:03:03.018 22:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1
00:03:03.018 22:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576
00:03:03.018 22:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 3 > 1 ))
00:03:03.018 22:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift
00:03:03.018 22:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0' '1')
00:03:03.018 22:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids
00:03:03.018 22:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:03.018 22:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512
00:03:03.018 22:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1
00:03:03.018 22:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0' '1')
00:03:03.018 22:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:03:03.018 22:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:03:03.018 22:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:03.018 22:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:03.018 22:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:03.018 22:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 2 > 0 ))
00:03:03.018 22:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:03:03.018 22:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512
00:03:03.018 22:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:03:03.018 22:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512
00:03:03.018 22:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0
00:03:03.018 22:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512
00:03:03.018 22:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0,1
00:03:03.018 22:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output
00:03:03.018 22:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:03:03.018 22:19:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:03:05.560 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:03:05.560 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver
00:03:05.560 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:03:05.560 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:03:05.560 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:03:05.560 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:03:05.560 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:03:05.560 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:03:05.560 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:03:05.560 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:03:05.560 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:03:05.560 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:03:05.560 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:03:05.560 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:03:05.560 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:03:05.560 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:03:05.560 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
00:03:05.560 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=1024
00:03:05.560 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages
00:03:05.560 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node
00:03:05.560 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:03:05.560 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:03:05.560 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp
00:03:05.560 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv
00:03:05.560 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon
00:03:05.560 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:05.560 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:05.560 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:05.560 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:03:05.560 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:03:05.560 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:05.560 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:05.560 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:05.560 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:05.560 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:05.560 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:05.560 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:05.560 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:05.560 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 175493952 kB' 'MemAvailable: 178367000 kB' 'Buffers: 3896 kB' 'Cached: 10194980 kB' 'SwapCached: 0 kB' 'Active: 7226476 kB' 'Inactive: 3507524 kB' 'Active(anon): 6834468 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3507524 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 538880 kB' 'Mapped: 217864 kB' 'Shmem: 6299344 kB' 'KReclaimable: 235936 kB' 'Slab: 823548 kB' 'SReclaimable: 235936 kB' 'SUnreclaim: 587612 kB' 'KernelStack: 20496 kB' 'PageTables: 9020 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 8369720 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315564 kB' 'VmallocChunk: 0 kB' 'Percpu: 79104 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3042260 kB' 'DirectMap2M: 16560128 kB' 'DirectMap1G: 182452224 kB'
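[editor's note: the field-matching loop traced next scans that meminfo snapshot one "field: value" line at a time until the requested field is found. A minimal sketch of get_meminfo as it can be reconstructed from this xtrace (the real helper is SPDK's test/setup/common.sh; treat the details here as an approximation):

    shopt -s extglob
    get_meminfo() {
        local get=$1 node=${2:-} var val _
        local mem_f=/proc/meminfo mem
        # a per-node query reads that node's own meminfo instead of the global one
        [[ -e /sys/devices/system/node/node$node/meminfo ]] && mem_f=/sys/devices/system/node/node$node/meminfo
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")    # strip the "Node N " prefix per-node files carry
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue    # the test repeated for every field in the trace
            echo "$val" && return 0
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }

Every non-matching field emits one "[[ <field> == ... ]] / continue / IFS=': ' / read" group in the xtrace, so a single lookup produces several dozen trace lines.]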
[xtrace elided: setup/common.sh@32 repeats the "[[ <field> == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] / continue / IFS=': ' / read -r var val _" group for each /proc/meminfo field until AnonHugePages matches]
00:03:05.561 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:05.561 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:03:05.561 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:03:05.562 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0
00:03:05.562 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:05.562 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:05.562 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:03:05.562 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:03:05.562 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:05.562 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:05.562 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:05.562 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:05.562 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:05.562 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:05.562 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:05.562 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:05.562 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 175493720 kB' 'MemAvailable: 178366768 kB' 'Buffers: 3896 kB' 'Cached: 10194984 kB' 'SwapCached: 0 kB' 'Active: 7226260 kB' 'Inactive: 3507524 kB' 'Active(anon): 6834252 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3507524 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 538160 kB' 'Mapped: 217620 kB' 'Shmem: 6299348 kB' 'KReclaimable: 235936 kB' 'Slab: 823468 kB' 'SReclaimable: 235936 kB' 'SUnreclaim: 587532 kB' 'KernelStack: 20512 kB' 'PageTables: 8992 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 8369740 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315532 kB' 'VmallocChunk: 0 kB' 'Percpu: 79104 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3042260 kB' 'DirectMap2M: 16560128 kB' 'DirectMap1G: 182452224 kB'
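[editor's note: verify_nr_hugepages collects three global counters back to back before comparing per-node totals; reconstructed from the surrounding trace (an approximation of setup/hugepages.sh, not the verbatim source):

    anon=$(get_meminfo AnonHugePages)    # returned 0 above: no transparent hugepages involved
    surp=$(get_meminfo HugePages_Surp)   # this query; returns 0
    resv=$(get_meminfo HugePages_Rsvd)   # queried next; the snapshots show HugePages_Rsvd: 0

With surplus and reserved pages both 0 and HugePages_Total at 1024, the pool matches the 2 nodes x 512 pages requested.]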
[xtrace elided: setup/common.sh@32 repeats the "[[ <field> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] / continue / IFS=': ' / read -r var val _" group for each /proc/meminfo field until HugePages_Surp matches]
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.564 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:05.564 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:05.564 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:05.564 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:05.564 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:05.564 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:05.564 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:05.564 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:05.564 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:05.564 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:05.564 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:05.564 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:05.564 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:05.564 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.564 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.564 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 175496624 kB' 'MemAvailable: 178369672 kB' 'Buffers: 3896 kB' 'Cached: 10195000 kB' 'SwapCached: 0 kB' 'Active: 7227692 kB' 'Inactive: 3507524 kB' 'Active(anon): 6835684 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3507524 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 539576 kB' 'Mapped: 218204 kB' 'Shmem: 6299364 kB' 'KReclaimable: 235936 kB' 'Slab: 823604 kB' 'SReclaimable: 235936 kB' 'SUnreclaim: 587668 kB' 'KernelStack: 20544 kB' 'PageTables: 9140 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 8371912 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315500 kB' 'VmallocChunk: 0 kB' 'Percpu: 79104 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3042260 kB' 'DirectMap2M: 16560128 kB' 'DirectMap1G: 182452224 kB' 00:03:05.564 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.564 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.564 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.564 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.564 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.564 22:19:29 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.564 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.564 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.564 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.564 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.564 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.564 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.564 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.564 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.564 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.564 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.564 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.564 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.564 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.564 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.564 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.564 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.564 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.564 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.564 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.564 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.564 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.564 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.564 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.564 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.564 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.564 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.564 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.564 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.564 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.564 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.564 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.564 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.564 22:19:29 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.564 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.564 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.564 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.564 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.564 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.564 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.564 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.564 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.564 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.564 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.564 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.564 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.564 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.564 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.564 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.564 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.564 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.564 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.564 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.564 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.564 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.565 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.565 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.565 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.565 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.565 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.565 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.565 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.565 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.565 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.565 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.565 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.565 22:19:29 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.565 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.565 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.565 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.565 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.565 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.565 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.565 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.565 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.565 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.565 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.565 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.565 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.565 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.565 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.565 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.565 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.565 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.565 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.565 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.565 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.565 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.565 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.565 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.565 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.565 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.565 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.565 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.565 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.565 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.565 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.565 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.565 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.565 22:19:29 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.565 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.565 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.565 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.565 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.565 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.565 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.565 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.565 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.565 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.565 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.565 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.565 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.565 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.565 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.565 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.565 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.565 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.565 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.565 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.565 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.565 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.565 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.565 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.565 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.565 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.565 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.565 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.565 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.565 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.565 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.565 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.565 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.565 22:19:29 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.565 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.565 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.565 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.565 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.565 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.565 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.565 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.565 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.565 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.565 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.565 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.565 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.565 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.565 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.565 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.565 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.565 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.565 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.565 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.565 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.565 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.565 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.565 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.565 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.565 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.565 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.565 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.565 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.565 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.566 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.566 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.566 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.566 22:19:29 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.566 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.566 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.566 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.566 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.566 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.566 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.566 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.566 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.566 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.566 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.566 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.566 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.566 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.566 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.566 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.566 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.566 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.566 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.566 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.566 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.566 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.566 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.566 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.566 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.566 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.566 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.566 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.566 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.566 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.566 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.566 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:05.566 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:05.566 22:19:29 
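The loop traced above can be restated as a minimal sketch. The helper name get_meminfo_sketch is hypothetical and the body is a simplification of what the trace shows from setup/common.sh, not the verbatim script: read /proc/meminfo (or a per-node meminfo file when a node is named), split each line on IFS=': ', and echo the value of the first key matching the request.

  #!/usr/bin/env bash
  shopt -s extglob   # required for the +([0-9]) pattern below

  # Hypothetical restatement of the traced helper, not the verbatim script.
  get_meminfo_sketch() {
      local get=$1 node=${2:-} var val _ mem mem_f=/proc/meminfo
      # A per-node meminfo file wins when the caller names a node.
      [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
          mem_f=/sys/devices/system/node/node$node/meminfo
      mapfile -t mem < "$mem_f"
      # Per-node files prefix every line with "Node N "; strip that.
      mem=("${mem[@]#Node +([0-9]) }")
      while IFS=': ' read -r var val _; do
          [[ $var == "$get" ]] && { echo "$val"; return 0; }
      done < <(printf '%s\n' "${mem[@]}")
      return 1
  }

  get_meminfo_sketch HugePages_Surp   # prints 0 on the machine traced here

Called with HugePages_Surp and then HugePages_Rsvd, as in the trace, both return 0 on this machine; the unit suffix ("kB") lands in the discard variable, so only the numeric value is echoed.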
00:03:05.566 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0
00:03:05.566 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:03:05.566 nr_hugepages=1024
00:03:05.566 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:05.566 resv_hugepages=0
00:03:05.566 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:05.566 surplus_hugepages=0
00:03:05.566 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:05.566 anon_hugepages=0
00:03:05.566 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:05.566 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:03:05.566 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:05.566 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:05.566 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:03:05.566 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:03:05.566 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:05.566 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:05.566 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:05.566 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:05.566 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:05.566 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:05.566 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:05.566 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:05.566 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 175493600 kB' 'MemAvailable: 178366648 kB' 'Buffers: 3896 kB' 'Cached: 10195000 kB' 'SwapCached: 0 kB' 'Active: 7230836 kB' 'Inactive: 3507524 kB' 'Active(anon): 6838828 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3507524 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 542748 kB' 'Mapped: 218204 kB' 'Shmem: 6299364 kB' 'KReclaimable: 235936 kB' 'Slab: 823604 kB' 'SReclaimable: 235936 kB' 'SUnreclaim: 587668 kB' 'KernelStack: 20560 kB' 'PageTables: 9184 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 8374836 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315500 kB' 'VmallocChunk: 0 kB' 'Percpu: 79104 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3042260 kB' 'DirectMap2M: 16560128 kB' 'DirectMap1G: 182452224 kB'
00:03:05.566 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:05.566 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue
[repetitive xtrace condensed: MemFree through Unaccepted are each read, tested against HugePages_Total, and skipped the same way]
00:03:05.830 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:05.830 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 1024
00:03:05.830 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:03:05.830 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:05.830 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes
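The two arithmetic checks at hugepages.sh@107 and @110 encode the accounting identity this test relies on: the kernel-reported HugePages_Total must equal the requested page count plus surplus and reserved pages. A sketch of that check under the same assumptions, reusing the hypothetical get_meminfo_sketch from above:

  # Variable names mirror the trace; get_meminfo_sketch is the
  # hypothetical helper sketched earlier, not the real setup/common.sh.
  nr_hugepages=1024
  surp=$(get_meminfo_sketch HugePages_Surp)     # 0 in this run
  resv=$(get_meminfo_sketch HugePages_Rsvd)     # 0 in this run
  total=$(get_meminfo_sketch HugePages_Total)   # 1024 in this run
  (( total == nr_hugepages + surp + resv )) ||
      echo "hugepage accounting mismatch: $total != $nr_hugepages + $surp + $resv" >&2

In this run both surplus and reserved are 0, so the check reduces to 1024 == 1024 and the trace proceeds to the per-node verification.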
00:03:05.830 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node
00:03:05.830 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:05.830 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:03:05.830 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:05.830 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:03:05.830 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:05.830 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:03:05.830 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:05.830 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:05.830 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:05.830 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:05.830 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0
00:03:05.830 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:03:05.830 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:05.830 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:05.830 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:05.830 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:05.830 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:05.830 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:05.830 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:05.830 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:05.830 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97662684 kB' 'MemFree: 87135496 kB' 'MemUsed: 10527188 kB' 'SwapCached: 0 kB' 'Active: 5031144 kB' 'Inactive: 3335448 kB' 'Active(anon): 4873604 kB' 'Inactive(anon): 0 kB' 'Active(file): 157540 kB' 'Inactive(file): 3335448 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8192608 kB' 'Mapped: 84888 kB' 'AnonPages: 177124 kB' 'Shmem: 4699620 kB' 'KernelStack: 10840 kB' 'PageTables: 4304 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 126920 kB' 'Slab: 391696 kB' 'SReclaimable: 126920 kB' 'SUnreclaim: 264776 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
00:03:05.830 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:05.830 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue
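get_nodes, traced just above, enumerates the NUMA nodes through sysfs and records 512 pages for each of the two nodes before the per-node verification loop starts, and get_meminfo is then pointed at /sys/devices/system/node/node0/meminfo instead of /proc/meminfo. A minimal sketch of that pattern follows; the trace only shows the resulting assignment nodes_sys[N]=512, so reading the count from each node's nr_hugepages file is an assumption about where that value comes from:

  # Sketch of the get_nodes pattern seen in the trace (assumed source for
  # the per-node count; the traced script may obtain 512 differently).
  shopt -s extglob nullglob
  declare -a nodes_sys
  for node in /sys/devices/system/node/node+([0-9]); do
      nodes_sys[${node##*node}]=$(< "$node/hugepages/hugepages-2048kB/nr_hugepages")
  done
  no_nodes=${#nodes_sys[@]}       # 2 on this machine, 512 2 MiB pages per node
  (( no_nodes > 0 )) || exit 1    # mirrors the guard at hugepages.sh@33

With 1024 pages spread 512/512 across two nodes, the loop that follows checks each node's HugePages_Surp the same way the global value was checked earlier.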
[repetitive xtrace condensed: the remaining node0 meminfo keys, MemFree through Unaccepted, are each read, tested against HugePages_Surp, and skipped the same way]
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.831 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.831 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.831 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.831 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.831 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.831 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.831 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.831 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.831 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.831 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.831 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:05.831 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:05.831 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:05.831 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:05.831 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:05.831 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:05.831 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:05.831 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=1 00:03:05.831 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:05.831 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:05.831 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:05.831 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:05.831 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:05.831 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:05.831 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:05.831 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.831 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93718468 kB' 'MemFree: 88355836 kB' 'MemUsed: 5362632 kB' 'SwapCached: 0 kB' 'Active: 2195488 kB' 'Inactive: 172076 kB' 'Active(anon): 1961020 kB' 'Inactive(anon): 0 kB' 'Active(file): 234468 kB' 'Inactive(file): 172076 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2006336 kB' 'Mapped: 132812 kB' 'AnonPages: 361376 kB' 'Shmem: 1599792 kB' 'KernelStack: 9688 kB' 'PageTables: 4820 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 
'WritebackTmp: 0 kB' 'KReclaimable: 109016 kB' 'Slab: 431892 kB' 'SReclaimable: 109016 kB' 'SUnreclaim: 322876 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:05.831 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.831 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.831 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.831 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.831 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.831 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.831 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.831 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.831 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.831 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.831 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.831 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.831 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.831 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.831 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.831 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.831 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.831 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.831 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.831 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.831 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.831 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.831 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.831 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.831 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.831 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.831 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.831 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.831 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.831 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:03:05.831 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.831 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.831 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.831 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.831 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.831 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.831 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.831 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.831 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.831 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.831 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.831 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.831 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.831 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.831 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.832 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.832 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.832 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.832 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.832 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.832 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.832 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.832 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.832 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.832 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.832 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.832 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.832 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.832 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.832 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.832 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.832 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.832 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.832 22:19:29 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.832 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.832 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.832 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.832 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.832 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.832 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.832 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.832 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.832 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.832 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.832 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.832 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.832 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.832 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.832 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.832 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.832 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.832 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.832 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.832 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.832 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.832 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.832 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.832 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.832 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.832 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.832 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.832 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.832 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.832 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.832 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.832 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.832 22:19:29 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.832 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.832 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.832 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.832 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.832 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.832 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.832 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.832 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.832 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.832 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.832 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.832 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.832 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.832 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.832 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.832 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.832 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.832 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.832 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.832 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.832 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.832 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.832 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.832 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.832 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.832 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.832 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.832 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.832 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.832 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.832 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.832 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.832 22:19:29 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.832 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.832 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.832 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.832 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.832 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.832 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.832 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.832 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.832 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.832 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.832 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.832 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.832 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.832 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.832 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.832 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.832 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:05.832 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:05.832 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:05.833 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:05.833 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:05.833 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:05.833 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:05.833 node0=512 expecting 512 00:03:05.833 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:05.833 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:05.833 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:05.833 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:03:05.833 node1=512 expecting 512 00:03:05.833 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:03:05.833 00:03:05.833 real 0m2.685s 00:03:05.833 user 0m1.087s 00:03:05.833 sys 0m1.593s 00:03:05.833 22:19:29 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:05.833 22:19:29 
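The HugePages_Surp lookups traced above all go through setup/common.sh's get_meminfo. A minimal standalone sketch of that pattern, reconstructed from the xtrace (illustrative, not the verbatim SPDK helper; extglob is assumed, as the prefix-strip pattern needs it):

#!/usr/bin/env bash
# Sketch of the get_meminfo pattern traced above: pick the global or
# per-node meminfo file, strip the "Node <n> " prefix that sysfs adds,
# then scan "key: value" pairs until the requested field matches.
shopt -s extglob   # required by the +([0-9]) pattern below

get_meminfo() {
    local get=$1 node=$2
    local var val _
    local mem_f=/proc/meminfo
    local -a mem
    # Per-node counters live in sysfs, e.g. /sys/devices/system/node/node1/meminfo.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")   # "Node 1 HugePages_Surp: 0" -> "HugePages_Surp: 0"
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue   # the long runs of "continue" in the trace
        echo "$val"
        return 0
    done < <(printf '%s\n' "${mem[@]}")
    return 1
}

get_meminfo HugePages_Surp 1   # prints 0 for the node1 snapshot above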
00:03:05.833 22:19:29 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0
00:03:05.833 22:19:29 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc
00:03:05.833 22:19:29 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:03:05.833 22:19:29 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable
00:03:05.833 22:19:29 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:03:05.833 ************************************
00:03:05.833 START TEST even_2G_alloc
00:03:05.833 ************************************
00:03:05.833 22:19:29 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1123 -- # even_2G_alloc
00:03:05.833 22:19:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152
00:03:05.833 22:19:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152
00:03:05.833 22:19:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:03:05.833 22:19:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:05.833 22:19:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:03:05.833 22:19:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:03:05.833 22:19:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:03:05.833 22:19:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:03:05.833 22:19:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:03:05.833 22:19:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:05.833 22:19:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:05.833 22:19:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:05.833 22:19:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:03:05.833 22:19:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:03:05.833 22:19:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:05.833 22:19:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:03:05.833 22:19:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 512
00:03:05.833 22:19:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 1
00:03:05.833 22:19:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:05.833 22:19:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:03:05.833 22:19:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0
00:03:05.833 22:19:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0
00:03:05.833 22:19:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:05.833 22:19:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024
00:03:05.833 22:19:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes
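get_test_nr_hugepages_per_node above turns the request (size=2097152 kB, i.e. 2 GiB, hence nr_hugepages=1024 pages of 2048 kB) into an even per-node split, counting _no_nodes down from 2 and assigning 512 to each slot. A hedged sketch of the same arithmetic (illustrative helper name; remainder distribution is an assumption, since the traced run divides evenly):

#!/usr/bin/env bash
# Illustrative re-statement of the even split traced above:
# 1024 hugepages across 2 NUMA nodes -> 512 per node.
split_evenly() {
    local total=$1 no_nodes=$2
    local -a nodes_test
    local n
    for (( n = 0; n < no_nodes; n++ )); do
        # Even share, with any remainder given to the lower-numbered nodes
        # (assumption; the traced run has no remainder).
        nodes_test[n]=$(( total / no_nodes + (n < total % no_nodes ? 1 : 0) ))
    done
    for n in "${!nodes_test[@]}"; do
        echo "node${n}=${nodes_test[n]}"
    done
}

split_evenly 1024 2   # -> node0=512 and node1=512, matching NRHUGE=1024 above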
00:03:05.833 22:19:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output
00:03:05.833 22:19:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:03:05.833 22:19:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:03:08.368 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:03:08.368 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver
00:03:08.368 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:03:08.368 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:03:08.368 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:03:08.368 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:03:08.368 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:03:08.368 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:03:08.368 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:03:08.368 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:03:08.368 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:03:08.368 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:03:08.368 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:03:08.368 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:03:08.368 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:03:08.368 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:03:08.368 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
00:03:08.634 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages
00:03:08.634 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node
00:03:08.634 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:03:08.634 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:03:08.634 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp
00:03:08.634 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv
00:03:08.634 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon
00:03:08.634 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
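The @96 test above compares /sys/kernel/mm/transparent_hugepage/enabled ("always [madvise] never" on this host) against *[never]*: anonymous hugepages only count toward the verification when THP is not pinned to never. A sketch of that gate, reusing the get_meminfo sketch from earlier (illustrative):

#!/usr/bin/env bash
# Sketch of the THP gate at hugepages.sh@96: AnonHugePages is only
# relevant if transparent hugepages are not disabled system-wide.
anon=0
thp=$(</sys/kernel/mm/transparent_hugepage/enabled)   # e.g. "always [madvise] never"
if [[ $thp != *"[never]"* ]]; then
    anon=$(get_meminfo AnonHugePages)   # kB; 0 in the trace that follows
fi
echo "anon=${anon}"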
00:03:08.634 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:08.634 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:08.634 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:03:08.634 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:03:08.634 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:08.634 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:08.634 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:08.634 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:08.634 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:08.634 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:08.634 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:08.634 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:08.634 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 175474872 kB' 'MemAvailable: 178347904 kB' 'Buffers: 3896 kB' 'Cached: 10195128 kB' 'SwapCached: 0 kB' 'Active: 7223248 kB' 'Inactive: 3507524 kB' 'Active(anon): 6831240 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3507524 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 535048 kB' 'Mapped: 216568 kB' 'Shmem: 6299492 kB' 'KReclaimable: 235904 kB' 'Slab: 824076 kB' 'SReclaimable: 235904 kB' 'SUnreclaim: 588172 kB' 'KernelStack: 20368 kB' 'PageTables: 8692 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 8359032 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315500 kB' 'VmallocChunk: 0 kB' 'Percpu: 79104 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3042260 kB' 'DirectMap2M: 16560128 kB' 'DirectMap1G: 182452224 kB'
[xtrace elided: setup/common.sh@31-32 read/compare loop over the snapshot above, "continue" on every field until AnonHugePages matches]
00:03:08.636 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:08.636 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:03:08.636 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:03:08.636 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0
00:03:08.636 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:08.636 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:08.636 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:03:08.636 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:03:08.636 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:08.636 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:08.636 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:08.636 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:08.636 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:08.636 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:08.636 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:08.636 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 175474368 kB' 'MemAvailable: 178347400 kB' 'Buffers: 3896 kB' 'Cached: 10195132 kB' 'SwapCached: 0 kB' 'Active: 7223348 kB' 'Inactive: 3507524 kB' 'Active(anon): 6831340 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3507524 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 535160 kB' 'Mapped: 216560 kB' 'Shmem: 6299496 kB' 'KReclaimable: 235904 kB' 'Slab: 824120 kB' 'SReclaimable: 235904 kB' 'SUnreclaim: 588216 kB' 'KernelStack: 20336 kB' 'PageTables: 8572 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 8359048 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315468 kB' 'VmallocChunk: 0 kB' 'Percpu: 79104 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3042260 kB' 'DirectMap2M: 16560128 kB' 'DirectMap1G: 182452224 kB'
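For a one-off lookup of a global field like the HugePages_Surp query above, the same value can be read with a single awk over /proc/meminfo (a hypothetical shortcut, not what the script does, since get_meminfo also has to serve the per-node sysfs format):

awk '$1 == "HugePages_Surp:" {print $2}' /proc/meminfo   # prints 0, matching the trace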
[xtrace elided: setup/common.sh@31-32 read/compare loop over the snapshot above, "continue" on each field scanned for HugePages_Surp; the captured log ends mid-scan]
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.637 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.637 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:08.637 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.637 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.637 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.637 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:08.637 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.637 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.637 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.637 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:08.637 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.637 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.637 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.637 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:08.637 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.637 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.637 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.637 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:08.637 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.637 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.637 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.637 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:08.637 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.637 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.637 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.637 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:08.637 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.637 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.637 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.637 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:08.637 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.637 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.637 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.637 22:19:32 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:08.637 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.637 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.637 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.637 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:08.637 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.637 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.637 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.637 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:08.637 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.637 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.637 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.637 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:08.637 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.637 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.637 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.637 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:08.637 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.637 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.637 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.637 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:08.637 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.637 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.637 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.637 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:08.637 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.637 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.637 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.637 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:08.637 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.637 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.637 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.637 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:08.637 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.637 22:19:32 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:08.637 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.637 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:08.637 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.637 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.637 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.637 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:08.637 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.637 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.637 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.638 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:08.638 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.638 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.638 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.638 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:08.638 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.638 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.638 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.638 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:08.638 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.638 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.638 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.638 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:08.638 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.638 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.638 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.638 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:08.638 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.638 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.638 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.638 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:08.638 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:08.638 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:08.638 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:08.638 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local 
get=HugePages_Rsvd 00:03:08.638 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:08.638 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:08.638 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:08.638 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:08.638 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:08.638 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:08.638 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:08.638 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:08.638 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.638 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.638 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 175476640 kB' 'MemAvailable: 178349672 kB' 'Buffers: 3896 kB' 'Cached: 10195148 kB' 'SwapCached: 0 kB' 'Active: 7223996 kB' 'Inactive: 3507524 kB' 'Active(anon): 6831988 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3507524 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 535828 kB' 'Mapped: 216560 kB' 'Shmem: 6299512 kB' 'KReclaimable: 235904 kB' 'Slab: 824120 kB' 'SReclaimable: 235904 kB' 'SUnreclaim: 588216 kB' 'KernelStack: 20368 kB' 'PageTables: 8668 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 8360192 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315452 kB' 'VmallocChunk: 0 kB' 'Percpu: 79104 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3042260 kB' 'DirectMap2M: 16560128 kB' 'DirectMap1G: 182452224 kB' 00:03:08.638 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.638 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:08.638 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.638 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.638 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.638 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:08.638 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.638 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.638 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.638 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:08.638 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.638 
22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.638 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.638 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:08.638 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.638 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.638 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.638 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:08.638 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.638 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.638 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.638 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:08.638 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.638 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.638 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.638 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:08.638 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.638 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.638 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.638 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:08.638 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.638 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.638 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.638 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:08.638 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.638 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.638 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.638 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:08.638 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.638 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.638 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.638 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:08.638 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.638 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.638 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.638 22:19:32 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:08.638 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.638 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.638 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.638 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:08.638 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.638 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.638 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.638 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:08.638 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.638 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.638 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.638 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:08.638 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.638 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.638 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.638 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:08.638 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.639 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.639 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.639 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:08.639 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.639 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.639 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.639 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:08.639 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.639 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.639 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.639 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:08.639 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.639 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.639 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.639 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:08.639 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.639 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 
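A reading aid for the escaped operands throughout these scans: under set -x, bash re-prints the right-hand side of a [[ == ]] comparison with every character backslash-escaped when it was quoted in the source, to signal a literal match rather than a glob. That is why each probe appears as, e.g., [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]. A short reproduction:

  get=HugePages_Rsvd
  set -x
  [[ MemTotal == "$get" ]] || true   # traces as: [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
  set +x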
00:03:08.639 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.639 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:08.639 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.639 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.639 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.639 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:08.639 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.639 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.639 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.639 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:08.639 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.639 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.639 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.639 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:08.639 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.639 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.639 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.639 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:08.639 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.639 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.639 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.639 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:08.639 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.639 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.639 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.639 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:08.639 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.639 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.639 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.639 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:08.639 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.639 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.639 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.639 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:08.639 22:19:32 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.639 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.639 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.639 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:08.639 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.639 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.639 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.639 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:08.639 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.639 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.639 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.639 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:08.639 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.639 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.639 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.639 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:08.639 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.639 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.639 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.639 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:08.639 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.639 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.639 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.639 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:08.639 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.639 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.639 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.639 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:08.639 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.639 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.639 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.639 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:08.639 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.639 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.639 22:19:32 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.639 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:08.639 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.640 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.640 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.640 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:08.640 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.640 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.640 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.640 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:08.640 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.640 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.640 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.640 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:08.640 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.640 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.640 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.640 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:08.640 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.640 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.640 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.640 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:08.640 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.640 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.640 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.640 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:08.640 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.640 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.640 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.640 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:08.640 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.640 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.640 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.640 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:08.640 22:19:32 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:08.640 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.640 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.640 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:08.640 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.640 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.640 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.640 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:08.640 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.640 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.640 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.640 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:08.640 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.640 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.640 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.640 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:08.640 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.640 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.640 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.640 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:08.640 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:08.640 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:08.640 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:08.640 nr_hugepages=1024 00:03:08.640 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:08.640 resv_hugepages=0 00:03:08.640 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:08.640 surplus_hugepages=0 00:03:08.640 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:08.640 anon_hugepages=0 00:03:08.640 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:08.640 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:08.640 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:08.640 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:08.640 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:08.640 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:08.640 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:08.640 22:19:32 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:08.640 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:08.640 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:08.640 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:08.640 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:08.640 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.640 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.640 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 175476380 kB' 'MemAvailable: 178349412 kB' 'Buffers: 3896 kB' 'Cached: 10195172 kB' 'SwapCached: 0 kB' 'Active: 7223816 kB' 'Inactive: 3507524 kB' 'Active(anon): 6831808 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3507524 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 535588 kB' 'Mapped: 216568 kB' 'Shmem: 6299536 kB' 'KReclaimable: 235904 kB' 'Slab: 824104 kB' 'SReclaimable: 235904 kB' 'SUnreclaim: 588200 kB' 'KernelStack: 20304 kB' 'PageTables: 8504 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 8361708 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315484 kB' 'VmallocChunk: 0 kB' 'Percpu: 79104 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3042260 kB' 'DirectMap2M: 16560128 kB' 'DirectMap1G: 182452224 kB' 00:03:08.640 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.640 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:08.640 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.640 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.640 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.640 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:08.640 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.640 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.640 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.640 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:08.640 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.640 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.640 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.640 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:08.640 
22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.640 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.640 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.640 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:08.640 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.640 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.640 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.640 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:08.640 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.640 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.640 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.640 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:08.640 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.640 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.640 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.640 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:08.640 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.640 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.640 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.640 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:08.640 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.640 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.640 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.640 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:08.640 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.641 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.641 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.641 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:08.641 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.641 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.641 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.641 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:08.641 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.641 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.641 22:19:32 setup.sh.hugepages.even_2G_alloc 
-- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.641 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:08.641 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.641 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.641 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.641 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:08.641 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.641 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.641 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.641 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:08.641 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.641 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.641 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.641 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:08.641 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.641 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.641 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.641 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:08.641 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.641 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.641 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.641 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:08.641 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.641 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.641 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.641 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:08.641 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.641 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.641 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.641 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:08.641 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.641 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.641 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.641 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:08.641 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:03:08.641 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.641 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.641 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:08.641 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.641 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.641 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.641 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:08.641 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.641 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.641 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.641 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:08.641 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.641 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.641 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.641 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:08.641 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.641 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.641 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.641 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:08.641 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.641 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.641 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.641 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:08.641 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.641 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.641 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.641 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:08.641 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.641 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.641 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.641 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:08.641 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.641 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.641 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
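The guards at hugepages.sh@107-110 above assert that the pool is internally consistent: HugePages_Total must equal the requested nr_hugepages plus the surplus and reserved counts just collected (1024 == 1024 + 0 + 0 in this run). A self-contained sketch of the same check, using awk in place of the full get_meminfo scan; the variable names follow the trace:

  nr_hugepages=1024
  surp=$(awk '/^HugePages_Surp:/  {print $2}' /proc/meminfo)    # 0 in this run
  resv=$(awk '/^HugePages_Rsvd:/  {print $2}' /proc/meminfo)    # 0 in this run
  total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)   # 1024 in this run
  (( total == nr_hugepages + surp + resv )) ||
    echo 'hugepage accounting mismatch' >&2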
00:03:08.641 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:08.641 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.641 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.641 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.641 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:08.641 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.641 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.641 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.641 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:08.641 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.641 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.641 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.641 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:08.641 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.641 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.641 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.641 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:08.641 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.642 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.642 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.642 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:08.642 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.642 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.642 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.642 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:08.642 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.642 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.642 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.642 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:08.642 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.642 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.642 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.642 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:08.642 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.642 22:19:32 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.642 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.642 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:08.642 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.642 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.642 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.642 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:08.642 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.642 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.642 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.642 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:08.642 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.642 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.642 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.642 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:08.642 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.642 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.642 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.642 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:08.642 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.642 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.642 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.642 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:08.642 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.642 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.642 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.642 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:08.642 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.642 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.642 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.642 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:08.642 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.642 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.642 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:03:08.642 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:08.642 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.642 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.642 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.642 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:08.642 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.642 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.642 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.642 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024 00:03:08.642 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:08.642 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:08.642 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:08.642 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node 00:03:08.642 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:08.642 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:08.642 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:08.642 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:08.642 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:08.642 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:08.642 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:08.642 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:08.642 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:08.642 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:08.642 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0 00:03:08.642 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:08.642 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:08.642 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:08.642 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:08.642 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:08.642 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:08.642 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:08.642 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97662684 kB' 'MemFree: 87114164 kB' 'MemUsed: 10548520 kB' 'SwapCached: 0 kB' 'Active: 
5031452 kB' 'Inactive: 3335448 kB' 'Active(anon): 4873912 kB' 'Inactive(anon): 0 kB' 'Active(file): 157540 kB' 'Inactive(file): 3335448 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8192736 kB' 'Mapped: 84488 kB' 'AnonPages: 177396 kB' 'Shmem: 4699748 kB' 'KernelStack: 10776 kB' 'PageTables: 4160 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 126888 kB' 'Slab: 392300 kB' 'SReclaimable: 126888 kB' 'SUnreclaim: 265412 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:08.642 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.642 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.642 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.642 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:08.642 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.642 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.642 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.642 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:08.642 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.642 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.642 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.642 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:08.642 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.642 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.642 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.642 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:08.642 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.642 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.642 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.642 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:08.642 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.642 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.642 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.642 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:08.642 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.642 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.642 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.642 22:19:32 setup.sh.hugepages.even_2G_alloc 
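The loop being traced here is easier to follow in source form. Below is a minimal sketch of the get_meminfo helper, reconstructed only from the commands visible in the xtrace above; the actual body in SPDK's setup/common.sh may differ in detail (the trace suggests a printf piped into a while-read loop, whereas a for-loop is used here for brevity, and the not-found return code is an assumption).

    #!/usr/bin/env bash
    # Sketch of get_meminfo as exercised by the trace above.
    # get_meminfo <Key> [<node>] prints the value of <Key> from
    # /proc/meminfo, or from /sys/devices/system/node/node<N>/meminfo
    # when a node index is given.
    shopt -s extglob   # required for the +([0-9]) pattern below

    get_meminfo() {
        local get=$1
        local node=$2
        local var val
        local mem_f mem
        mem_f=/proc/meminfo
        if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        # Per-node meminfo lines carry a "Node N " prefix; strip it so
        # both files parse identically.
        mem=("${mem[@]#Node +([0-9]) }")
        local line
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue   # skip non-matching keys
            echo "$val"
            return 0
        done
        return 1   # key not present (assumed; this path is never traced)
    }

    # e.g. get_meminfo HugePages_Surp 0 prints 0 on this machine,
    # matching the "echo 0" / "return 0" entries in the trace.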
00:03:08.642 [xtrace condensed: setup/common.sh@31-32 read/compare/continue loop skipped the non-matching node0 meminfo keys (MemTotal through HugePages_Free) before the HugePages_Surp match below]
00:03:08.670 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:08.670 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:03:08.670 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:03:08.670 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:08.670 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:08.670 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:08.670 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:03:08.670 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:08.670 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=1
00:03:08.670 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:03:08.670 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:08.670 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:08.670 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:03:08.670 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:03:08.670 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:08.670 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:08.670 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:08.670 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:08.670 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93718468 kB' 'MemFree: 88363568 kB' 'MemUsed: 5354900 kB' 'SwapCached: 0 kB' 'Active: 2191884 kB' 'Inactive: 172076 kB' 'Active(anon): 1957416 kB' 'Inactive(anon): 0 kB' 'Active(file): 234468 kB' 'Inactive(file): 172076 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2006348 kB' 'Mapped: 132080 kB' 'AnonPages: 357632 kB' 'Shmem: 1599804 kB' 'KernelStack: 9720 kB' 'PageTables: 4536 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 109016 kB' 'Slab: 431804 kB' 'SReclaimable: 109016 kB' 'SUnreclaim: 322788 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
00:03:08.670 [xtrace condensed: setup/common.sh@31-32 read/compare/continue loop skipped the non-matching node1 meminfo keys (MemTotal through HugePages_Free) before the HugePages_Surp match below]
00:03:08.671 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:08.671 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:03:08.671 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:03:08.671 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:08.671 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:08.672 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:08.672 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:08.672 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:03:08.672 node0=512 expecting 512
00:03:08.672 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:08.672 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:08.672 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:08.672 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512'
00:03:08.672 node1=512 expecting 512
00:03:08.672 22:19:32 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]]
00:03:08.672
00:03:08.672 real 0m2.905s
00:03:08.672 user 0m1.221s
00:03:08.672 sys 0m1.749s
00:03:08.672 22:19:32 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable
00:03:08.672 22:19:32 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x
00:03:08.672 ************************************
00:03:08.672 END TEST even_2G_alloc
00:03:08.672 ************************************
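What the even_2G_alloc pass above actually verified, written out as a plain-bash sketch with this run's numbers substituted in. Variable names follow the xtrace; surp and resv are taken to be 0, which is what the traced "+= 0" increments imply for this machine.

    # Even allocation check, as traced in setup/hugepages.sh@110-130.
    nr_hugepages=1024          # 1024 pages = 2 GB of 2048 kB hugepages
    surp=0                     # HugePages_Surp from /proc/meminfo
    resv=0                     # HugePages_Rsvd from /proc/meminfo
    (( 1024 == nr_hugepages + surp + resv )) || echo "total mismatch"

    # Per-node view: 1024 pages spread evenly across 2 NUMA nodes.
    # get_meminfo HugePages_Surp <node> returned 0 for node0 and node1,
    # so each node still reports its expected 512 pages.
    declare -a nodes_test=([0]=512 [1]=512)
    for node in "${!nodes_test[@]}"; do
        echo "node$node=${nodes_test[node]} expecting 512"
    done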
00:03:08.931 22:19:32 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0
00:03:08.931 22:19:32 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc
00:03:08.931 22:19:32 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:03:08.931 22:19:32 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable
00:03:08.931 22:19:32 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:03:08.931 ************************************
00:03:08.931 START TEST odd_alloc
00:03:08.931 ************************************
00:03:08.931 22:19:32 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1123 -- # odd_alloc
00:03:08.931 22:19:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176
00:03:08.931 22:19:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176
00:03:08.931 22:19:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:03:08.931 22:19:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:08.931 22:19:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025
00:03:08.931 22:19:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:03:08.931 22:19:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:03:08.931 22:19:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:03:08.931 22:19:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025
00:03:08.931 22:19:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:08.931 22:19:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:08.931 22:19:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:08.931 22:19:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:03:08.931 22:19:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:03:08.931 22:19:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:08.932 22:19:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:03:08.932 22:19:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 513
00:03:08.932 22:19:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 1
00:03:08.932 22:19:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:08.932 22:19:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513
00:03:08.932 22:19:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0
00:03:08.932 22:19:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0
00:03:08.932 22:19:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:08.932 22:19:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049
00:03:08.932 22:19:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes
00:03:08.932 22:19:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output
00:03:08.932 22:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:03:08.932 22:19:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:03:11.468 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:03:11.468 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver
00:03:11.468 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:03:11.468 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:03:11.468 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:03:11.468 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:03:11.468 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:03:11.468 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:03:11.468 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:03:11.468 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:03:11.468 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:03:11.468 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:03:11.468 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:03:11.468 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:03:11.468 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:03:11.468 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:03:11.468 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
00:03:11.468 22:19:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages
00:03:11.468 22:19:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node
00:03:11.468 22:19:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:03:11.468 22:19:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:03:11.468 22:19:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp
00:03:11.468 22:19:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv
00:03:11.468 22:19:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon
00:03:11.468 22:19:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:11.468 22:19:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:11.468 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:11.468 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:03:11.468 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:03:11.468 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:11.468 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:11.468 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:11.468 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:11.468 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:11.468 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:11.468 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:11.468 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:11.468 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 175477048 kB' 'MemAvailable: 178350080 kB' 'Buffers: 3896 kB' 'Cached: 10195280 kB' 'SwapCached: 0 kB' 'Active: 7224964 kB' 'Inactive: 3507524 kB' 'Active(anon): 6832956 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3507524 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 536124 kB' 'Mapped: 216656 kB' 'Shmem: 6299644 kB' 'KReclaimable: 235904 kB' 'Slab: 823924 kB' 'SReclaimable: 235904 kB' 'SUnreclaim: 588020 kB' 'KernelStack: 20432 kB' 'PageTables: 8848 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103029580 kB' 'Committed_AS: 8362184 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315548 kB' 'VmallocChunk: 0 kB' 'Percpu: 79104 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3042260 kB' 'DirectMap2M: 16560128 kB' 'DirectMap1G: 182452224 kB'
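The odd_alloc sizing traced above is plain integer arithmetic: HUGEMEM=2049 (MB) becomes size=2098176 kB, which at the 2048 kB Hugepagesize is 1024.5 pages, rounded up to nr_hugepages=1025; get_test_nr_hugepages_per_node then splits that odd total across the two NUMA nodes, which cannot come out even. The sketch below reproduces the traced assignments; the exact expressions in hugepages.sh are not visible in the trace, so the rounding and split formulas here are assumptions that merely match the logged values (nodes_test[1]=512, then nodes_test[0]=513, and HugePages_Total: 1025 in the meminfo dump above).

    # Odd hugepage count split across NUMA nodes, per this run's values.
    size=2098176              # kB; HUGEMEM=2049 MB * 1024
    default_hugepages=2048    # kB; Hugepagesize from /proc/meminfo
    nr_hugepages=$(( (size + default_hugepages - 1) / default_hugepages ))
    echo "$nr_hugepages"      # -> 1025 (1024.5 rounded up; formula assumed)

    # Highest-numbered node is sized first; the remainder lands on the
    # last node assigned, so one node gets 513 pages and the other 512.
    declare -a nodes_test=()
    _nr=$nr_hugepages
    _no_nodes=2
    while (( _no_nodes > 0 )); do
        nodes_test[_no_nodes - 1]=$(( _nr / _no_nodes ))
        (( _nr -= nodes_test[_no_nodes - 1] ))
        (( _no_nodes-- ))
    done
    echo "${nodes_test[@]}"   # -> 513 512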
00:03:11.469 [xtrace condensed: setup/common.sh@31-32 read/compare/continue loop skipped the non-matching /proc/meminfo keys (MemTotal through HardwareCorrupted) before the AnonHugePages match below]
00:03:11.470 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:11.470 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:03:11.470 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:03:11.470 22:19:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0
00:03:11.470 22:19:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:11.470 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:11.470 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:03:11.470 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:03:11.470 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:11.470 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:11.470 22:19:35
setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:11.470 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:11.470 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:11.470 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:11.470 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.470 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.470 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 175478608 kB' 'MemAvailable: 178351640 kB' 'Buffers: 3896 kB' 'Cached: 10195284 kB' 'SwapCached: 0 kB' 'Active: 7224552 kB' 'Inactive: 3507524 kB' 'Active(anon): 6832544 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3507524 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 536184 kB' 'Mapped: 216580 kB' 'Shmem: 6299648 kB' 'KReclaimable: 235904 kB' 'Slab: 823912 kB' 'SReclaimable: 235904 kB' 'SUnreclaim: 588008 kB' 'KernelStack: 20544 kB' 'PageTables: 8784 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103029580 kB' 'Committed_AS: 8362204 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315468 kB' 'VmallocChunk: 0 kB' 'Percpu: 79104 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3042260 kB' 'DirectMap2M: 16560128 kB' 'DirectMap1G: 182452224 kB' 00:03:11.470 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.470 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:11.470 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.470 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.470 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.470 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:11.470 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.470 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.470 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.470 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:11.470 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.470 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.470 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.470 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:11.470 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.470 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.470 22:19:35 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.470 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:11.470 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.470 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.470 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.470 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:11.470 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.470 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.470 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.470 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:11.470 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.470 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.470 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.470 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:11.470 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.470 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.470 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.470 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:11.470 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.470 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.470 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.470 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:11.470 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.470 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.470 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.470 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:11.470 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.470 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.470 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.470 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:11.470 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.470 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.470 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.470 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:11.470 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.470 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 
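A reading note on the entries above: the backslash-riddled right-hand sides such as \H\u\g\e\P\a\g\e\s\_\S\u\r\p are not corruption. When bash traces a [[ comparison under set -x, it prints a quoted pattern with every character backslash-escaped to mark it as a literal match rather than a glob. A minimal reproduction, assuming any recent bash (the variable name get mirrors the traced helper):

    set -x
    get=HugePages_Surp
    [[ MemTotal == "$get" ]]   # traces roughly as: [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
    set +x

Each trace entry above is therefore one iteration of a loop that reads a meminfo field and continues until the field name matches the requested key.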
00:03:11.470 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.470 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:11.470 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.470 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.470 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.470 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:11.470 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.470 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.470 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.470 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:11.470 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.470 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.470 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.470 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:11.470 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.470 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.470 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.470 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:11.470 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.470 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.470 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.470 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:11.470 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.470 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.470 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.470 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:11.470 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.470 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.470 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.470 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:11.470 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.470 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.470 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.470 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:11.470 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.470 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:03:11.470 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.470 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:11.470 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.470 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.471 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.471 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:11.471 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.471 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.471 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.471 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:11.471 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.471 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.471 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.471 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:11.471 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.471 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.471 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.471 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:11.471 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.471 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.471 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.471 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:11.471 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.471 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.471 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.471 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:11.471 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.471 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.471 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.471 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:11.471 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.471 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.471 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.471 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:11.471 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.471 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:03:11.471 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.471 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:11.471 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.471 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.471 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.471 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:11.471 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.471 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.471 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.471 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:11.471 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.471 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.471 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.471 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:11.471 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.471 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.471 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.471 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:11.471 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.471 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.471 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.471 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:11.471 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.471 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.471 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.471 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:11.471 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.471 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.471 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.471 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:11.471 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.471 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.471 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.471 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:11.471 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.471 22:19:35 setup.sh.hugepages.odd_alloc 
-- setup/common.sh@31 -- # read -r var val _ 00:03:11.471 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.471 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:11.471 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.471 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.471 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.471 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:11.471 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.471 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.471 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.471 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:11.471 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.471 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.471 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.471 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:11.471 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.471 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.471 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.471 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:11.471 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.471 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.471 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.471 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:11.471 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.471 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.471 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.471 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:11.471 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.471 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.471 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.471 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:11.471 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.471 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.471 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.471 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:11.471 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.471 
22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.472 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.472 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:11.472 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.472 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.472 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.472 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:11.472 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.472 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.472 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.472 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:11.472 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:11.472 22:19:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:11.472 22:19:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:11.472 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:11.472 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:11.472 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:11.472 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:11.472 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:11.472 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:11.472 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:11.472 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:11.472 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:11.472 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.472 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.472 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 175477404 kB' 'MemAvailable: 178350436 kB' 'Buffers: 3896 kB' 'Cached: 10195300 kB' 'SwapCached: 0 kB' 'Active: 7224892 kB' 'Inactive: 3507524 kB' 'Active(anon): 6832884 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3507524 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 536484 kB' 'Mapped: 216580 kB' 'Shmem: 6299664 kB' 'KReclaimable: 235904 kB' 'Slab: 823912 kB' 'SReclaimable: 235904 kB' 'SUnreclaim: 588008 kB' 'KernelStack: 20656 kB' 'PageTables: 9112 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103029580 kB' 'Committed_AS: 8362224 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315548 kB' 'VmallocChunk: 0 kB' 'Percpu: 79104 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 
'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3042260 kB' 'DirectMap2M: 16560128 kB' 'DirectMap1G: 182452224 kB' 00:03:11.472 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.472 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:11.472 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.472 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.472 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.472 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:11.472 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.472 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.472 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.472 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:11.472 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.472 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.472 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.472 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:11.472 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.472 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.472 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.472 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:11.472 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.472 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.472 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.472 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:11.472 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.472 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.472 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.472 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:11.472 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.472 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.472 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.472 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:11.472 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.472 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.472 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.472 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:11.472 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.472 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.472 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.472 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:11.472 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.472 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.472 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.472 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:11.472 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.472 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.472 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.472 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:11.472 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.472 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.472 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.472 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:11.472 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.472 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.472 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.472 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:11.472 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.472 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.472 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.472 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:11.472 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.472 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.472 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.472 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:11.472 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.472 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.472 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.472 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:11.472 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.472 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.472 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- 
# [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.472 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:11.472 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.472 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.472 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.472 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:11.472 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.472 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.472 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.472 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:11.472 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.472 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.472 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.472 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:11.472 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.472 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.472 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.472 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:11.472 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.472 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.472 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.472 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:11.472 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.472 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.472 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.472 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:11.473 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.473 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.473 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.473 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:11.473 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.473 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.473 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.473 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:11.473 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.473 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.473 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # 
[[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.473 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:11.473 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.473 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.473 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.473 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:11.473 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.473 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.473 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.473 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:11.473 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.473 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.473 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.473 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:11.473 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.473 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.473 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.473 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:11.473 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.473 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.473 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.473 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:11.473 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.473 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.473 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.473 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:11.473 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.473 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.473 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.473 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:11.473 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.473 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.473 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.473 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:11.473 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.473 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.473 22:19:35 setup.sh.hugepages.odd_alloc 
-- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.473 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:11.473 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.473 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.473 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.473 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:11.473 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.473 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.473 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.473 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:11.473 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.473 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.473 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.473 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:11.473 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.473 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.473 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.473 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:11.473 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.473 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.473 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.473 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:11.473 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.473 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.473 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.473 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:11.473 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.473 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.473 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.473 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:11.473 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.473 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.473 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.473 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:11.473 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.473 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 
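Every iteration traced above comes from one small helper, setup/common.sh's get_meminfo: it slurps /proc/meminfo (or a per-NUMA-node meminfo file when a node id is supplied), strips the "Node <id> " prefix with an extglob expansion, then splits each line on IFS=': ' and echoes the value once the requested key matches. A minimal sketch of that lookup, assuming plain bash; get_meminfo_sketch is an illustrative name, not the exact SPDK function:

    #!/usr/bin/env bash
    shopt -s extglob   # required for the +([0-9]) pattern below

    # Sketch of the lookup traced above (common.sh@16-@33).
    get_meminfo_sketch() {
        local get=$1 node=${2:-} var val _ line mem
        local mem_f=/proc/meminfo
        # With a node id, prefer the per-NUMA-node statistics file.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        # Per-node files prefix every line with "Node <id> "; strip it.
        mem=("${mem[@]#Node +([0-9]) }")
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            if [[ $var == "$get" ]]; then
                echo "${val:-0}"   # value only; units ("kB") land in _
                return 0
            fi
        done
        echo 0
    }

On the numbers dumped above, get_meminfo_sketch HugePages_Total would print 1025 and get_meminfo_sketch HugePages_Rsvd would print 0, matching the "# echo 0" / "# return 0" exits of the real loop.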
00:03:11.473 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.473 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:11.473 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.473 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.474 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.474 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:11.474 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.474 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.474 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.474 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:11.474 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.474 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.474 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.474 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:11.474 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.474 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.474 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.474 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:11.474 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.474 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.474 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.474 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:11.474 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.474 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.474 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.474 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:11.474 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:11.474 22:19:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:11.474 22:19:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:03:11.474 nr_hugepages=1025 00:03:11.474 22:19:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:11.474 resv_hugepages=0 00:03:11.474 22:19:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:11.474 surplus_hugepages=0 00:03:11.474 22:19:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:11.474 anon_hugepages=0 00:03:11.474 22:19:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:03:11.474 22:19:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages 
))
00:03:11.474 22:19:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:11.474 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:11.474 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:03:11.474 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:03:11.474 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:11.474 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:11.474 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:11.474 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:11.474 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:11.474 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:11.474 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:11.474 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:11.474 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 175475896 kB' 'MemAvailable: 178348928 kB' 'Buffers: 3896 kB' 'Cached: 10195320 kB' 'SwapCached: 0 kB' 'Active: 7225252 kB' 'Inactive: 3507524 kB' 'Active(anon): 6833244 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3507524 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 537340 kB' 'Mapped: 216580 kB' 'Shmem: 6299684 kB' 'KReclaimable: 235904 kB' 'Slab: 823912 kB' 'SReclaimable: 235904 kB' 'SUnreclaim: 588008 kB' 'KernelStack: 20736 kB' 'PageTables: 9176 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103029580 kB' 'Committed_AS: 8361996 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315596 kB' 'VmallocChunk: 0 kB' 'Percpu: 79104 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3042260 kB' 'DirectMap2M: 16560128 kB' 'DirectMap1G: 182452224 kB'
[xtrace condensed: the IFS=': ' read loop walks each field above in turn, hitting 'continue' on every one that is not HugePages_Total]
00:03:11.476 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:11.476 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025
00:03:11.476 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:03:11.476 22:19:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv ))
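Every get_meminfo call in this test follows the pattern just traced: snapshot a meminfo file, then scan it field by field with an IFS=': ' read, skipping non-matching fields, which is why the xtrace is dominated by 'continue' lines. A minimal stand-alone sketch of that pattern (the helper name and fallback handling are ours, not SPDK's exact code):

    #!/usr/bin/env bash
    # Sketch of the meminfo scan seen in the trace; helper name is ours.
    shopt -s extglob
    get_meminfo_sketch() {
        local get=$1 node=$2 mem_f=/proc/meminfo var val _
        # A node argument switches the source to that node's own meminfo.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        local -a mem
        mapfile -t mem < "$mem_f"
        # Per-node files prefix every line with "Node N "; strip that prefix.
        mem=("${mem[@]#Node +([0-9]) }")
        local line
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue   # the skipped fields in the trace
            echo "$val"
            return 0
        done
        return 1
    }
    get_meminfo_sketch HugePages_Total    # would print 1025 on this box
    get_meminfo_sketch HugePages_Surp 0   # would print 0 for node 0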
00:03:11.476 22:19:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:03:11.476 22:19:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node
00:03:11.476 22:19:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:11.476 22:19:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:03:11.476 22:19:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:11.476 22:19:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513
00:03:11.476 22:19:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:11.476 22:19:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:03:11.476 22:19:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:11.476 22:19:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:11.476 22:19:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:11.476 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:11.476 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0
00:03:11.476 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:03:11.476 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:11.476 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:11.476 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:11.476 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:11.476 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:11.476 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:11.476 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:11.476 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:11.476 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97662684 kB' 'MemFree: 87118088 kB' 'MemUsed: 10544596 kB' 'SwapCached: 0 kB' 'Active: 5033520 kB' 'Inactive: 3335448 kB' 'Active(anon): 4875980 kB' 'Inactive(anon): 0 kB' 'Active(file): 157540 kB' 'Inactive(file): 3335448 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8192868 kB' 'Mapped: 84480 kB' 'AnonPages: 179372 kB' 'Shmem: 4699880 kB' 'KernelStack: 10888 kB' 'PageTables: 4456 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 126888 kB' 'Slab: 392232 kB' 'SReclaimable: 126888 kB' 'SUnreclaim: 265344 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
[xtrace condensed: the read loop skips each node0 meminfo field with 'continue' until HugePages_Surp matches]
00:03:11.477 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:11.477 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:03:11.477 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:03:11.477 22:19:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
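Between the two node passes, a quick way to sanity-check the same numbers by hand: each node's meminfo reports its own HugePages_Total, and this run expects the per-node counts to sum to the global figure (512 + 513 = 1025 here). A rough stand-alone tally under that assumption (the awk extraction and script shape are ours, not SPDK's):

    #!/usr/bin/env bash
    # Sketch: tally hugepages per NUMA node and compare with the global
    # total, mirroring the 1025 == 512 + 513 bookkeeping in this test.
    shopt -s extglob nullglob
    total=0
    for node in /sys/devices/system/node/node+([0-9]); do
        count=$(awk '/HugePages_Total/ {print $NF}' "$node/meminfo")
        echo "${node##*/}: $count hugepages"
        (( total += count ))
    done
    global=$(awk '/^HugePages_Total/ {print $NF}' /proc/meminfo)
    (( total == global )) && echo "per-node counts sum to the global $global"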
00:03:11.477 22:19:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:11.477 22:19:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:11.477 22:19:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:03:11.477 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:11.477 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=1
00:03:11.477 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:03:11.477 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:11.477 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:11.477 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:03:11.477 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:03:11.477 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:11.477 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:11.477 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:11.477 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:11.477 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93718468 kB' 'MemFree: 88358828 kB' 'MemUsed: 5359640 kB' 'SwapCached: 0 kB' 'Active: 2190992 kB' 'Inactive: 172076 kB' 'Active(anon): 1956524 kB' 'Inactive(anon): 0 kB' 'Active(file): 234468 kB' 'Inactive(file): 172076 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2006348 kB' 'Mapped: 132092 kB' 'AnonPages: 356696 kB' 'Shmem: 1599804 kB' 'KernelStack: 9560 kB' 'PageTables: 4360 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 109016 kB' 'Slab: 431680 kB' 'SReclaimable: 109016 kB' 'SUnreclaim: 322664 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0'
[xtrace condensed: the read loop skips each node1 meminfo field with 'continue' until HugePages_Surp matches]
00:03:11.479 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:11.479 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:03:11.479 22:19:35 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:03:11.479 22:19:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:11.479 22:19:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:11.479 22:19:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:11.479 22:19:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:11.479 22:19:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513'
00:03:11.479 node0=512 expecting 513
00:03:11.479 22:19:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:11.479 22:19:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:11.479 22:19:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:11.479 22:19:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512'
00:03:11.479 node1=513 expecting 512
00:03:11.479 22:19:35 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]]
00:03:11.479
00:03:11.479 real 0m2.790s
00:03:11.479 user 0m1.058s
00:03:11.479 sys 0m1.717s
00:03:11.479 22:19:35 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable
00:03:11.479 22:19:35 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x
00:03:11.479 ************************************
00:03:11.479 END TEST odd_alloc
00:03:11.479 ************************************
00:03:11.738 22:19:35 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0
00:03:11.738 22:19:35 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc
00:03:11.738 22:19:35 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:03:11.738 22:19:35 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable
00:03:11.738 22:19:35 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:03:11.738 ************************************
00:03:11.738 START TEST custom_alloc
00:03:11.738 ************************************
00:03:11.738 22:19:35 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1123 -- # custom_alloc
00:03:11.738 22:19:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=,
00:03:11.738 22:19:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node
00:03:11.738 22:19:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=()
00:03:11.738 22:19:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp
00:03:11.738 22:19:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0
00:03:11.738 22:19:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576
00:03:11.738 22:19:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576
00:03:11.738 22:19:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:03:11.738 22:19:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
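The conversion happening here: get_test_nr_hugepages takes a size in kB and derives a page count from the default hugepage size, which the meminfo dumps above report as 2048 kB, so 1048576 kB becomes the nr_hugepages=512 seen next, and the later 2097152 kB request becomes 1024. A sketch of that arithmetic as a stand-alone script (not SPDK's function):

    #!/usr/bin/env bash
    # Sketch of the size-to-count conversion: a request in kB divided by
    # the default hugepage size (2048 kB in the dumps above), so
    # 1048576 -> 512 pages and 2097152 -> 1024 pages.
    size_kb=${1:?usage: $0 <size in kB>}
    default_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)
    if (( size_kb < default_kb )); then
        echo "request smaller than one hugepage" >&2
        exit 1
    fi
    echo $(( size_kb / default_kb ))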
00:03:11.738 22:19:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512
00:03:11.738 22:19:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:03:11.738 22:19:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:03:11.738 22:19:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:03:11.738 22:19:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:03:11.738 22:19:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:11.738 22:19:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:11.738 22:19:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:11.738 22:19:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:03:11.738 22:19:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:03:11.738 22:19:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:11.738 22:19:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256
00:03:11.738 22:19:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 256
00:03:11.738 22:19:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 1
00:03:11.738 22:19:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:11.738 22:19:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256
00:03:11.738 22:19:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0
00:03:11.738 22:19:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0
00:03:11.738 22:19:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:11.738 22:19:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512
00:03:11.738 22:19:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 2 > 1 ))
00:03:11.738 22:19:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152
00:03:11.738 22:19:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=2097152
00:03:11.738 22:19:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:03:11.738 22:19:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:11.738 22:19:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:03:11.738 22:19:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:03:11.738 22:19:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:03:11.738 22:19:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:03:11.738 22:19:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:03:11.738 22:19:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:11.738 22:19:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:11.738 22:19:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:11.738 22:19:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:03:11.738 22:19:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 ))
00:03:11.738 22:19:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}"
00:03:11.738 22:19:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512
00:03:11.738 22:19:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0
00:03:11.738 22:19:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024
00:03:11.738 22:19:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}"
00:03:11.738 22:19:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}")
00:03:11.738 22:19:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] ))
00:03:11.738 22:19:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}"
00:03:11.738 22:19:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}")
00:03:11.738 22:19:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] ))
00:03:11.738 22:19:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node
00:03:11.738 22:19:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:03:11.738 22:19:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:03:11.738 22:19:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:03:11.738 22:19:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:11.738 22:19:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:11.738 22:19:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:11.738 22:19:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:03:11.738 22:19:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 2 > 0 ))
00:03:11.738 22:19:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}"
00:03:11.738 22:19:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512
00:03:11.738 22:19:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}"
00:03:11.738 22:19:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024
00:03:11.738 22:19:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0
00:03:11.738 22:19:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024'
00:03:11.738 22:19:35 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output
00:03:11.738 22:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:03:11.738 22:19:35 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:03:14.279 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:03:14.279 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver
00:03:14.279 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:03:14.279 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:03:14.279 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:03:14.279 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:03:14.279 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:03:14.279 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:03:14.279 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:03:14.279 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:03:14.279 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:03:14.279 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:03:14.279 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:03:14.279 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:03:14.279 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:03:14.279 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:03:14.279 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
00:03:14.279 22:19:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=1536
00:03:14.279 22:19:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages
00:03:14.279 22:19:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node
00:03:14.279 22:19:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:03:14.279 22:19:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:03:14.279 22:19:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp
00:03:14.279 22:19:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv
00:03:14.279 22:19:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon
00:03:14.279 22:19:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
'KReclaimable: 235904 kB' 'Slab: 823144 kB' 'SReclaimable: 235904 kB' 'SUnreclaim: 587240 kB' 'KernelStack: 20448 kB' 'PageTables: 8940 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 102506316 kB' 'Committed_AS: 8360104 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315500 kB' 'VmallocChunk: 0 kB' 'Percpu: 79104 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3042260 kB' 'DirectMap2M: 16560128 kB' 'DirectMap1G: 182452224 kB' 00:03:14.279 22:19:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.279 22:19:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:14.279 22:19:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.279 22:19:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.279 22:19:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.279 22:19:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:14.279 22:19:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.279 22:19:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.279 22:19:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.280 22:19:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:14.280 22:19:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.280 22:19:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.280 22:19:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.280 22:19:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:14.280 22:19:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.280 22:19:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.280 22:19:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.280 22:19:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:14.280 22:19:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.280 22:19:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.280 22:19:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.280 22:19:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:14.280 22:19:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.280 22:19:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.280 22:19:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.280 22:19:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:14.280 22:19:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.280 22:19:37 
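The [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] test at hugepages.sh@96 above is why an AnonHugePages baseline is fetched at all: with transparent hugepages not pinned to [never], THP-backed anonymous memory could leak into the hugepage accounting, so verify_nr_hugepages records the current figure first. A hedged standalone sketch of that guard (a reconstruction, not the script itself):

# Only read an AnonHugePages baseline when THP can actually hand out
# anonymous hugepages, i.e. the active mode is not "[never]".
thp_mode=$(</sys/kernel/mm/transparent_hugepage/enabled)    # e.g. "always [madvise] never"
if [[ $thp_mode != *"[never]"* ]]; then
  anon=$(awk '/^AnonHugePages:/ {print $2}' /proc/meminfo)  # 0 in the snapshot above
else
  anon=0
fi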
[trace condensed: setup/common.sh@32 compares every remaining /proc/meminfo key against AnonHugePages and continues past each non-match] 00:03:14.281 22:19:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.281 22:19:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:14.281 22:19:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:14.281 22:19:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:14.281 22:19:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:14.281 22:19:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:14.281 22:19:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local
node= 00:03:14.281 22:19:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:14.281 22:19:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:14.281 22:19:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:14.281 22:19:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:14.281 22:19:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:14.281 22:19:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:14.281 22:19:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:14.281 22:19:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.281 22:19:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.281 22:19:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 174454748 kB' 'MemAvailable: 177327780 kB' 'Buffers: 3896 kB' 'Cached: 10195432 kB' 'SwapCached: 0 kB' 'Active: 7225288 kB' 'Inactive: 3507524 kB' 'Active(anon): 6833280 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3507524 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 536200 kB' 'Mapped: 216668 kB' 'Shmem: 6299796 kB' 'KReclaimable: 235904 kB' 'Slab: 823144 kB' 'SReclaimable: 235904 kB' 'SUnreclaim: 587240 kB' 'KernelStack: 20448 kB' 'PageTables: 8916 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 102506316 kB' 'Committed_AS: 8360124 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315468 kB' 'VmallocChunk: 0 kB' 'Percpu: 79104 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3042260 kB' 'DirectMap2M: 16560128 kB' 'DirectMap1G: 182452224 kB' 00:03:14.281 22:19:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.281 22:19:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:14.281 22:19:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.281 22:19:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.281 22:19:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.281 22:19:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:14.281 22:19:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.281 22:19:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.281 22:19:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.281 22:19:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:14.281 22:19:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.281 22:19:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.281 22:19:37 
[trace condensed: the same key-by-key scan continues until HugePages_Surp matches] 00:03:14.283 22:19:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.283 22:19:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:14.283 22:19:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:14.283 22:19:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:14.283 22:19:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:14.283 22:19:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:14.283 22:19:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:14.283 22:19:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:14.283 22:19:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:14.283 22:19:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:14.283 22:19:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:14.283 22:19:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:14.283 22:19:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:14.283 22:19:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:14.283 22:19:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.283 22:19:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.283 22:19:37
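Each of these scans is the same get_meminfo helper from setup/common.sh: slurp /proc/meminfo (or a node's sysfs copy), strip any "Node N " prefix, then split on ':' and spaces until the requested key matches. A minimal standalone sketch of that pattern, with names mirroring the trace (the real setup/common.sh may differ):

#!/usr/bin/env bash
# get_meminfo KEY [NODE] -> prints the numeric value for KEY and returns 0.
get_meminfo() {
  local get=$1 node=${2:-} var val _
  local mem_f=/proc/meminfo
  # Per-node queries use the sysfs copy when it exists (common.sh@23-25).
  if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
    mem_f=/sys/devices/system/node/node$node/meminfo
  fi
  # IFS=': ' splits "HugePages_Total:    1536" into var/val (common.sh@31-33).
  while IFS=': ' read -r var val _; do
    [[ $var == "$get" ]] && { echo "$val"; return 0; }
  done < <(sed -E 's/^Node [0-9]+ //' "$mem_f")   # the "Node N " strip at @29
  return 1
}

get_meminfo HugePages_Total   # -> 1536, matching the snapshots in this log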
setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 174454868 kB' 'MemAvailable: 177327900 kB' 'Buffers: 3896 kB' 'Cached: 10195448 kB' 'SwapCached: 0 kB' 'Active: 7224952 kB' 'Inactive: 3507524 kB' 'Active(anon): 6832944 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3507524 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 536368 kB' 'Mapped: 216592 kB' 'Shmem: 6299812 kB' 'KReclaimable: 235904 kB' 'Slab: 823128 kB' 'SReclaimable: 235904 kB' 'SUnreclaim: 587224 kB' 'KernelStack: 20448 kB' 'PageTables: 8896 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 102506316 kB' 'Committed_AS: 8360144 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315468 kB' 'VmallocChunk: 0 kB' 'Percpu: 79104 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3042260 kB' 'DirectMap2M: 16560128 kB' 'DirectMap1G: 182452224 kB' 00:03:14.283 22:19:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.283 22:19:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:14.283 22:19:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.283 22:19:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.283 22:19:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.283 22:19:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:14.283 22:19:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.283 22:19:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.283 22:19:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.283 22:19:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:14.283 22:19:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.283 22:19:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.283 22:19:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.283 22:19:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:14.283 22:19:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.283 22:19:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.283 22:19:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.283 22:19:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:14.283 22:19:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.283 22:19:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.283 22:19:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.283 22:19:37 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue [trace condensed: the same key-by-key scan repeats for HugePages_Rsvd; the excerpt ends mid-scan]
setup/common.sh@31 -- # IFS=': ' 00:03:14.285 22:19:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.285 22:19:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.285 22:19:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:14.285 22:19:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.285 22:19:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.285 22:19:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.285 22:19:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:14.285 22:19:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:14.285 22:19:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:14.285 22:19:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536 00:03:14.285 nr_hugepages=1536 00:03:14.285 22:19:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:14.285 resv_hugepages=0 00:03:14.285 22:19:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:14.285 surplus_hugepages=0 00:03:14.285 22:19:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:14.285 anon_hugepages=0 00:03:14.285 22:19:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv )) 00:03:14.285 22:19:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages )) 00:03:14.285 22:19:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:14.285 22:19:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:14.285 22:19:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:14.285 22:19:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:14.285 22:19:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:14.285 22:19:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:14.285 22:19:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:14.285 22:19:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:14.285 22:19:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:14.285 22:19:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:14.285 22:19:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.285 22:19:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.285 22:19:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 174455624 kB' 'MemAvailable: 177328656 kB' 'Buffers: 3896 kB' 'Cached: 10195472 kB' 'SwapCached: 0 kB' 'Active: 7224980 kB' 'Inactive: 3507524 kB' 'Active(anon): 6832972 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3507524 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 536368 kB' 'Mapped: 216592 kB' 'Shmem: 
6299836 kB' 'KReclaimable: 235904 kB' 'Slab: 823128 kB' 'SReclaimable: 235904 kB' 'SUnreclaim: 587224 kB' 'KernelStack: 20448 kB' 'PageTables: 8896 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 102506316 kB' 'Committed_AS: 8360164 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315468 kB' 'VmallocChunk: 0 kB' 'Percpu: 79104 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3042260 kB' 'DirectMap2M: 16560128 kB' 'DirectMap1G: 182452224 kB' 00:03:14.285 22:19:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.285 22:19:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:14.285 22:19:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.285 22:19:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.285 22:19:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.285 22:19:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:14.285 22:19:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.285 22:19:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.285 22:19:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.285 22:19:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:14.285 22:19:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.285 22:19:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.285 22:19:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.285 22:19:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:14.285 22:19:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.285 22:19:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.285 22:19:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.285 22:19:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:14.285 22:19:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.285 22:19:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.285 22:19:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.285 22:19:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:14.285 22:19:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.285 22:19:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.285 22:19:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.285 22:19:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:14.285 22:19:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 
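Note: the get_meminfo call traced above reads the whole meminfo file and walks it one 'key: value' pair at a time until the requested key matches, which is why every non-matching key shows up as an IFS=': ' / read / continue triple in the trace. A minimal sketch of that lookup pattern, reconstructed from the trace alone (the function name and details here are approximations, not the actual setup/common.sh source):

    #!/usr/bin/env bash
    # Reconstructed sketch of the meminfo lookup pattern seen in this trace.
    get_meminfo_sketch() {
        local get=$1 node=$2 mem_f=/proc/meminfo line var val _
        # Per-node statistics live under sysfs; default to the global file.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        while IFS= read -r line; do
            line=${line#"Node $node "}    # per-node files prefix lines with "Node <N> "
            IFS=': ' read -r var val _ <<<"$line"
            if [[ $var == "$get" ]]; then # requested key found
                echo "$val"
                return 0
            fi
        done <"$mem_f"
        return 1
    }
    get_meminfo_sketch HugePages_Total   # prints 1536 on this machine
    get_meminfo_sketch HugePages_Surp 0  # node0 surplus; prints 0 here

Each query rescans the file from the top, which is what makes this trace so long: one call emits a skip for every key that precedes the one requested.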
[trace condensed: setup/common.sh@31-32 read loop skipped keys MemTotal through Unaccepted while searching for HugePages_Total]
00:03:14.287 22:19:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:14.287 22:19:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 1536
00:03:14.287 22:19:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:03:14.287 22:19:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv ))
00:03:14.287 22:19:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:03:14.287 22:19:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node
00:03:14.287 22:19:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
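Note: the hugepages.sh@107 and @110 guards traced above assert the pool accounting identity: the kernel's HugePages_Total must equal the requested page count plus surplus plus reserved pages (here 1536 == 1536 + 0 + 0). A sketch of the same check made directly against /proc/meminfo, with variable names of my own choosing:

    #!/usr/bin/env bash
    # Sketch of the accounting identity checked at hugepages.sh@107/@110;
    # 'requested' mirrors the 1536 pages this custom_alloc run asked for.
    requested=1536
    total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
    surp=$(awk '/^HugePages_Surp:/ {print $2}' /proc/meminfo)
    resv=$(awk '/^HugePages_Rsvd:/ {print $2}' /proc/meminfo)
    if (( total == requested + surp + resv )); then
        echo "hugepage pool consistent: $total == $requested + $surp + $resv"
    else
        echo "hugepage accounting mismatch" >&2
    fi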
00:03:14.287 22:19:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:03:14.287 22:19:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:14.287 22:19:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:03:14.287 22:19:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:14.287 22:19:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:03:14.287 22:19:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:14.287 22:19:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:14.287 22:19:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:14.287 22:19:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:14.287 22:19:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0
00:03:14.287 22:19:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:03:14.287 22:19:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:14.287 22:19:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:14.287 22:19:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:14.287 22:19:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:14.287 22:19:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:14.287 22:19:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:14.287 22:19:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:14.287 22:19:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:14.287 22:19:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97662684 kB' 'MemFree: 87126488 kB' 'MemUsed: 10536196 kB' 'SwapCached: 0 kB' 'Active: 5032156 kB' 'Inactive: 3335448 kB' 'Active(anon): 4874616 kB' 'Inactive(anon): 0 kB' 'Active(file): 157540 kB' 'Inactive(file): 3335448 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8192996 kB' 'Mapped: 84492 kB' 'AnonPages: 177800 kB' 'Shmem: 4700008 kB' 'KernelStack: 10856 kB' 'PageTables: 4356 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 126888 kB' 'Slab: 391568 kB' 'SReclaimable: 126888 kB' 'SUnreclaim: 264680 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
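Note: get_nodes, traced just above, discovers the NUMA topology by globbing /sys/devices/system/node/node+([0-9]); on this machine the 1536-page pool is split deliberately, 512 pages on node0 and 1024 on node1. A reconstructed sketch of that per-node tally (not the SPDK source; assumes a Linux box with per-node meminfo files):

    #!/usr/bin/env bash
    # Reconstructed sketch of the per-node hugepage tally done by get_nodes.
    declare -a nodes_sys
    for node in /sys/devices/system/node/node[0-9]*; do
        # Each node's meminfo reports that node's share of the hugepage pool.
        nodes_sys[${node##*node}]=$(awk '/HugePages_Total:/ {print $NF}' "$node/meminfo")
    done
    total=0
    for n in "${!nodes_sys[@]}"; do
        echo "node$n: ${nodes_sys[n]} hugepages"
        (( total += nodes_sys[n] ))
    done
    echo "sum across nodes: $total"  # 512 + 1024 = 1536 here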
[trace condensed: setup/common.sh@31-32 read loop skipped node0 keys MemTotal through HugePages_Free while searching for HugePages_Surp]
00:03:14.288 22:19:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:14.288 22:19:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:03:14.288 22:19:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:03:14.288 22:19:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:14.288 22:19:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:14.288 22:19:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:14.288 22:19:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:03:14.288 22:19:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:14.288 22:19:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=1
00:03:14.288 22:19:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:03:14.288 22:19:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:14.288 22:19:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:14.288 22:19:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:03:14.288 22:19:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:03:14.288 22:19:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:14.288 22:19:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:14.288 22:19:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:14.288 22:19:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:14.288 22:19:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93718468 kB' 'MemFree: 87329520 kB' 'MemUsed: 6388948 kB' 'SwapCached: 0 kB' 'Active: 2192852 kB' 'Inactive: 172076 kB' 'Active(anon): 1958384 kB' 'Inactive(anon): 0 kB' 'Active(file): 234468 kB' 'Inactive(file): 172076 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2006392 kB' 'Mapped: 132100 kB' 'AnonPages: 358572 kB' 'Shmem: 1599848 kB' 'KernelStack: 9592 kB' 'PageTables: 4540 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 109016 kB' 'Slab: 431560 kB' 'SReclaimable: 109016 kB' 'SUnreclaim: 322544 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
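Note: the common.sh@29 step traced above, mem=("${mem[@]#Node +([0-9]) }"), is an extglob prefix strip: per-node meminfo lines arrive as "Node 1 MemTotal: ...", and removing the "Node <N> " prefix lets one parser serve both the global and per-node files. A standalone sketch of the same trick (assumes node0 exists):

    #!/usr/bin/env bash
    # The +([0-9]) pattern requires extglob, as in SPDK's setup/common.sh.
    shopt -s extglob
    mapfile -t mem </sys/devices/system/node/node0/meminfo
    # Strip the "Node <N> " prefix from every array element in one expansion.
    mem=("${mem[@]#Node +([0-9]) }")
    printf '%s\n' "${mem[@]:0:3}"    # first three lines, now prefix-free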
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.288 22:19:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.288 22:19:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:14.288 22:19:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.288 22:19:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.289 22:19:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.289 22:19:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:14.289 22:19:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.289 22:19:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.289 22:19:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.289 22:19:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:14.289 22:19:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.289 22:19:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.289 22:19:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.289 22:19:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:14.289 22:19:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.289 22:19:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.289 22:19:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.289 22:19:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:14.289 22:19:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.289 22:19:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.289 22:19:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.289 22:19:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:14.289 22:19:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.289 22:19:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.289 22:19:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.289 22:19:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:14.289 22:19:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.289 22:19:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.289 22:19:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.289 22:19:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:14.289 22:19:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.289 22:19:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.289 22:19:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.289 22:19:37 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue
00:03:14.289 22:19:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31-32 -- # (xtrace condensed: the read loop compares each remaining meminfo field -- Mlocked, Dirty, Writeback, FilePages, Mapped, AnonPages, Shmem, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, KReclaimable, Slab, SReclaimable, SUnreclaim, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, Unaccepted, HugePages_Total, HugePages_Free -- against HugePages_Surp and skips each with 'continue')
00:03:14.290 22:19:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
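The trace above is the scan inside setup/common.sh's get_meminfo: it walks the meminfo snapshot entry by entry with IFS=': ', hits 'continue' for every field that is not the one requested, and echoes the value once the field matches. A minimal sketch of that pattern (the standalone function name get_field is illustrative, not the exact SPDK helper, and it reads /proc/meminfo directly instead of the script's cached snapshot):

    # Sketch of the scan the xtrace shows above.
    get_field() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue   # every non-matching field logs a 'continue'
            echo "$val"                        # mirrors common.sh@33 -- # echo <value>
            return 0
        done </proc/meminfo
        return 1
    }
    get_field HugePages_Surp   # prints 0 on this box, matching the 'echo 0' below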
00:03:14.290 22:19:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:03:14.290 22:19:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:03:14.290 22:19:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:14.290 22:19:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:14.290 22:19:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:14.290 22:19:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:14.290 22:19:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:03:14.290 node0=512 expecting 512
00:03:14.290 22:19:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:14.290 22:19:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:14.290 22:19:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:14.290 22:19:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024'
00:03:14.290 node1=1024 expecting 1024
00:03:14.290 22:19:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]]
00:03:14.290
00:03:14.290 real 0m2.493s
00:03:14.290 user 0m0.932s
00:03:14.290 sys 0m1.546s
00:03:14.290 22:19:37 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable
00:03:14.290 22:19:37 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x
00:03:14.290 ************************************
00:03:14.290 END TEST custom_alloc
00:03:14.290 ************************************
00:03:14.290 22:19:38 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0
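The sorted_t/sorted_s assignments in the custom_alloc wrap-up above lean on a bash idiom worth spelling out: writing sorted_t[nodes_test[node]]=1 turns each per-node page count into an index of an indexed array, and "${!sorted_t[@]}" later returns those indices numerically sorted and de-duplicated, which is what makes the final [[ 512,1024 == ... ]] comparison possible. A hedged sketch of the idiom (values taken from the log; the comma join at the end is illustrative, not a copy of hugepages.sh@130):

    nodes_test=([0]=512 [1]=1024)      # per-node expectations from the log above
    sorted_t=()
    for node in "${!nodes_test[@]}"; do
        sorted_t[nodes_test[node]]=1   # the count itself becomes an array index
    done
    (IFS=,; echo "${!sorted_t[*]}")    # -> 512,1024 (indices come back sorted, deduped)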
00:03:14.290 22:19:38 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc
00:03:14.290 22:19:38 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:03:14.290 22:19:38 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable
00:03:14.290 22:19:38 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:03:14.290 ************************************
00:03:14.290 START TEST no_shrink_alloc
00:03:14.290 ************************************
00:03:14.290 22:19:38 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1123 -- # no_shrink_alloc
00:03:14.290 22:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0
00:03:14.290 22:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152
00:03:14.290 22:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:03:14.290 22:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift
00:03:14.290 22:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0')
00:03:14.290 22:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids
00:03:14.290 22:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:14.290 22:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:03:14.290 22:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:03:14.290 22:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0')
00:03:14.290 22:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:03:14.290 22:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:03:14.290 22:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:14.290 22:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:14.290 22:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:14.290 22:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:03:14.290 22:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:03:14.290 22:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024
00:03:14.290 22:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0
00:03:14.290 22:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output
00:03:14.290 22:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:03:14.290 22:19:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:03:16.843 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:03:16.843 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver
00:03:16.843 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:03:16.843 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:03:16.843 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:03:16.843 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:03:16.843 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:03:16.843 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:03:16.843 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:03:16.843 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:03:16.843 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:03:16.843 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:03:16.843 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:03:16.843 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:03:16.843 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:03:16.843 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:03:16.843 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
00:03:16.843 22:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages
00:03:16.843 22:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node
00:03:16.843 22:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:03:16.843 22:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:03:16.843 22:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp
00:03:16.843 22:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv
00:03:16.843 22:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon
00:03:16.843 22:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
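The last line above gates the anonymous-hugepage sampling: the expanded string 'always [madvise] never' is the standard content of /sys/kernel/mm/transparent_hugepage/enabled, with the active THP mode bracketed, so the check only samples AnonHugePages when THP is not pinned to never. Roughly (the surrounding logic is sketched from the trace, not copied from hugepages.sh; get_field is the illustrative helper from the earlier sketch):

    thp=$(</sys/kernel/mm/transparent_hugepage/enabled)   # e.g. 'always [madvise] never'
    if [[ $thp != *"[never]"* ]]; then
        anon=$(get_field AnonHugePages)   # THP possible, so count THP-backed anon pages
    else
        anon=0                            # THP disabled, nothing to sample
    fi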
00:03:16.843 22:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:16.843 22:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:16.843 22:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:03:16.843 22:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:03:16.843 22:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:16.843 22:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:16.843 22:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:16.843 22:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:16.843 22:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:16.843 22:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:16.843 22:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:16.843 22:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:16.843 22:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 175499908 kB' 'MemAvailable: 178372940 kB' 'Buffers: 3896 kB' 'Cached: 10195584 kB' 'SwapCached: 0 kB' 'Active: 7225312 kB' 'Inactive: 3507524 kB' 'Active(anon): 6833304 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3507524 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 536944 kB' 'Mapped: 216632 kB' 'Shmem: 6299948 kB' 'KReclaimable: 235904 kB' 'Slab: 822640 kB' 'SReclaimable: 235904 kB' 'SUnreclaim: 586736 kB' 'KernelStack: 20400 kB' 'PageTables: 8756 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 8360488 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315500 kB' 'VmallocChunk: 0 kB' 'Percpu: 79104 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3042260 kB' 'DirectMap2M: 16560128 kB' 'DirectMap1G: 182452224 kB'
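The snapshot above is internally consistent with the allocation requested earlier: assuming the size argument to get_test_nr_hugepages is in kB (which the snapshot's 'Hugetlb: 2097152 kB' corroborates), 2097152 kB divided by the 2048 kB page size gives exactly the 1024 pages reported by HugePages_Total. The arithmetic, as a worked check:

    size_kb=2097152        # argument passed to get_test_nr_hugepages
    hugepagesize_kb=2048   # 'Hugepagesize: 2048 kB' from the snapshot
    nr_hugepages=$((size_kb / hugepagesize_kb))
    echo "$nr_hugepages"                         # 1024, as traced at hugepages.sh@57
    echo "$((nr_hugepages * hugepagesize_kb))"   # 2097152 -> 'Hugetlb: 2097152 kB'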
00:03:16.844 22:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-32 -- # (xtrace condensed: the read loop compares every snapshot field from MemTotal through HardwareCorrupted against AnonHugePages and skips each with 'continue')
00:03:16.844 22:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:16.845 22:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:03:16.845 22:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:16.845 22:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0
00:03:16.845 22:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:16.845 22:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:16.845 22:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:03:16.845 22:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:03:16.845 22:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:16.845 22:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:16.845 22:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:16.845 22:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:16.845 22:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:16.845 22:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:16.845 22:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:16.845 22:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
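This second get_meminfo call repeats the same setup as the first: it was invoked with no node argument, so node= stays empty, the probe for /sys/devices/system/node/node/meminfo fails, and the function falls back to /proc/meminfo; the snapshot is slurped once with mapfile and any 'Node <n> ' prefix is stripped so per-node files parse identically to the system-wide one. A sketch of that path selection (extglob is required for the +([0-9]) pattern; the trailing printf is just for inspection):

    shopt -s extglob                      # needed for the +([0-9]) pattern below
    node=                                 # empty here, so the sysfs probe fails
    mem_f=/proc/meminfo
    if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem <"$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")      # per-node lines carry a 'Node <n> ' prefix
    printf '%s\n' "${mem[@]:0:3}"         # first few snapshot entries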
00:03:16.845 22:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 175500040 kB' 'MemAvailable: 178373072 kB' 'Buffers: 3896 kB' 'Cached: 10195588 kB' 'SwapCached: 0 kB' 'Active: 7225812 kB' 'Inactive: 3507524 kB' 'Active(anon): 6833804 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3507524 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 537448 kB' 'Mapped: 216604 kB' 'Shmem: 6299952 kB' 'KReclaimable: 235904 kB' 'Slab: 822676 kB' 'SReclaimable: 235904 kB' 'SUnreclaim: 586772 kB' 'KernelStack: 20448 kB' 'PageTables: 8900 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 8360504 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315468 kB' 'VmallocChunk: 0 kB' 'Percpu: 79104 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3042260 kB' 'DirectMap2M: 16560128 kB' 'DirectMap1G: 182452224 kB'
00:03:16.845 22:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-32 -- # (xtrace condensed: the read loop compares every snapshot field from MemTotal through HugePages_Rsvd against HugePages_Surp and skips each with 'continue')
00:03:16.846 22:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:16.846 22:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:03:16.846 22:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:16.846 22:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0
00:03:16.846 22:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:16.846 22:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:16.846 22:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:03:16.846 22:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:03:16.846 22:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:16.846 22:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:16.846 22:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:16.846 22:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:16.846 22:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:16.846 22:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:16.846 22:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:16.846 22:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
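By this point the same full-snapshot scan has run three times in a row, once per requested field (AnonHugePages, HugePages_Surp, and now HugePages_Rsvd). That is cheap enough for a test harness and keeps get_meminfo a single-purpose helper; purely for contrast, and explicitly not what the SPDK script does, a one-pass variant would collect every field into an associative array up front:

    # Illustrative alternative, not SPDK's code: one read, many lookups.
    declare -A meminfo
    while IFS=': ' read -r var val _; do
        meminfo[$var]=$val
    done </proc/meminfo
    echo "anon=${meminfo[AnonHugePages]} surp=${meminfo[HugePages_Surp]} resv=${meminfo[HugePages_Rsvd]}"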
00:03:16.846 22:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:16.846 22:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:16.846 22:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:03:16.846 22:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:03:16.846 22:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:16.846 22:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:16.846 22:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:16.846 22:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:16.846 22:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:16.846 22:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:16.846 22:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:16.846 22:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:16.846 22:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 175500304 kB' 'MemAvailable: 178373336 kB' 'Buffers: 3896 kB' 'Cached: 10195604 kB' 'SwapCached: 0 kB' 'Active: 7225812 kB' 'Inactive: 3507524 kB' 'Active(anon): 6833804 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3507524 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 537452 kB' 'Mapped: 216604 kB' 'Shmem: 6299968 kB' 'KReclaimable: 235904 kB' 'Slab: 822676 kB' 'SReclaimable: 235904 kB' 'SUnreclaim: 586772 kB' 'KernelStack: 20448 kB' 'PageTables: 8900 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 8360528 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315468 kB' 'VmallocChunk: 0 kB' 'Percpu: 79104 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3042260 kB' 'DirectMap2M: 16560128 kB' 'DirectMap1G: 182452224 kB'
[... repetitive xtrace trimmed: every field from MemTotal through HugePages_Free fails the HugePages_Rsvd match at setup/common.sh@32 and the loop continues ...]
00:03:16.848 22:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:16.848 22:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:03:16.848 22:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:16.848 22:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0
00:03:16.848 22:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:03:16.848 nr_hugepages=1024
00:03:16.848 22:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:16.848 resv_hugepages=0
00:03:16.848 22:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:16.848 surplus_hugepages=0
00:03:16.848 22:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:16.848 anon_hugepages=0
00:03:16.848 22:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:16.848 22:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
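The arithmetic tests at hugepages.sh@107-109 are the consistency check this test case exists for: the kernel-reported pool must account exactly for the pages the test configured plus any surplus and reserved pages. Condensed (variable names follow the trace; get_meminfo is the sketch above, not the SPDK source):

surp=$(get_meminfo HugePages_Surp)     # 0 in this run
resv=$(get_meminfo HugePages_Rsvd)     # 0 in this run
total=$(get_meminfo HugePages_Total)   # 1024 in this run
nr_hugepages=1024                      # pool size the test configured
# 1024 == 1024 + 0 + 0: the pool was neither shrunk nor over-allocated
(( total == nr_hugepages + surp + resv ))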
00:03:16.848 22:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:16.848 22:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:16.848 22:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:03:16.848 22:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:03:16.848 22:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:16.848 22:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:16.848 22:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:16.848 22:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:16.848 22:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:16.848 22:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:16.848 22:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:16.848 22:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:16.848 22:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 175500408 kB' 'MemAvailable: 178373440 kB' 'Buffers: 3896 kB' 'Cached: 10195644 kB' 'SwapCached: 0 kB' 'Active: 7225524 kB' 'Inactive: 3507524 kB' 'Active(anon): 6833516 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3507524 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 537096 kB' 'Mapped: 216604 kB' 'Shmem: 6300008 kB' 'KReclaimable: 235904 kB' 'Slab: 822676 kB' 'SReclaimable: 235904 kB' 'SUnreclaim: 586772 kB' 'KernelStack: 20448 kB' 'PageTables: 8900 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 8360552 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315468 kB' 'VmallocChunk: 0 kB' 'Percpu: 79104 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3042260 kB' 'DirectMap2M: 16560128 kB' 'DirectMap1G: 182452224 kB'
[... repetitive xtrace trimmed: every field from MemTotal through Unaccepted fails the HugePages_Total match at setup/common.sh@32 and the loop continues ...]
00:03:17.124 22:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:17.124 22:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024
00:03:17.124 22:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:17.124 22:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:17.124 22:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:03:17.124 22:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node
00:03:17.124 22:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:17.124 22:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:03:17.124 22:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:17.124 22:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:03:17.124 22:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:17.124 22:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:03:17.124 22:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:17.124 22:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
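Next, get_meminfo is invoked with an explicit node argument, so the read switches from /proc/meminfo to /sys/devices/system/node/node0/meminfo, whose lines carry a "Node 0" prefix that common.sh@29 strips before parsing. A hypothetical shell one-liner (not from the SPDK tree) doing the same single-field, single-node lookup:

# Per-node meminfo lines read "Node 0 HugePages_Surp: 0";
# match on the prefix and key, print the value column.
awk '$1 == "Node" && $2 == "0" && $3 == "HugePages_Surp:" { print $4 }' \
    /sys/devices/system/node/node0/meminfo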
'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:17.124 22:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.124 22:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:17.124 22:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.124 22:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.124 22:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.124 22:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:17.124 22:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.124 22:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.124 22:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.124 22:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:17.124 22:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.124 22:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.124 22:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.124 22:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:17.124 22:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.124 22:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.124 22:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.124 22:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:17.124 22:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.124 22:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.124 22:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.124 22:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:17.124 22:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.124 22:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.124 22:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.124 22:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:17.124 22:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.125 22:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.125 22:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.125 22:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:17.125 22:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.125 22:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.125 
22:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.125 22:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:17.125 22:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.125 22:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.125 22:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.125 22:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:17.125 22:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.125 22:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.125 22:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.125 22:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:17.125 22:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.125 22:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.125 22:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.125 22:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:17.125 22:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.125 22:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.125 22:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.125 22:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:17.125 22:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.125 22:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.125 22:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.125 22:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:17.125 22:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.125 22:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.125 22:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.125 22:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:17.125 22:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.125 22:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.125 22:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.125 22:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:17.125 22:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.125 22:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.125 22:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.125 22:19:40 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue
00:03:17.125 22:19:40 setup.sh.hugepages.no_shrink_alloc -- [xtrace condensed: one "continue" per non-matching /proc/meminfo key from KernelStack through HugePages_Free]
00:03:17.125 22:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:17.125 22:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:03:17.125 22:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:17.125 22:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:17.125 22:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:17.125 22:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:17.125 22:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:17.125 22:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:03:17.125 node0=1024 expecting 1024
00:03:17.125 22:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:03:17.125 22:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no
00:03:17.126 22:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512
00:03:17.126 22:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output
00:03:17.126 22:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:03:17.126 22:19:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:03:19.671 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:03:19.671 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver
00:03:19.671 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:03:19.671 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:03:19.671 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:03:19.671 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:03:19.671 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:03:19.671 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:03:19.671 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:03:19.671 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:03:19.671 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:03:19.671 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:03:19.671 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:03:19.671 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:03:19.671 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:03:19.671 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:03:19.671 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
00:03:19.671 INFO: Requested 512 hugepages but 1024 already allocated on node0
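The INFO line above is setup.sh's hugepage allocation step declining to shrink an existing reservation. A minimal sketch of that decision, assuming the conventional per-node sysfs knob (the exact logic lives in SPDK's scripts/setup.sh and may differ):

# Sketch only: with CLEAR_HUGE=no a node that already holds more hugepages
# than requested is left alone; nr_hugepages is only written to top up.
NRHUGE=${NRHUGE:-512}
nr_file=/sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages   # assumed path
allocated=$(< "$nr_file")
if ((allocated >= NRHUGE)); then
        echo "INFO: Requested $NRHUGE hugepages but $allocated already allocated on node0"
else
        echo "$NRHUGE" > "$nr_file"   # top up to the requested count (needs root)
fi

That is why the verify_nr_hugepages pass that follows still expects 1024 pages on node0 even though only 512 were requested.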
00:03:19.671 22:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages
00:03:19.671 22:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node
00:03:19.671 22:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:03:19.671 22:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:03:19.671 22:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp
00:03:19.671 22:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv
00:03:19.671 22:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon
00:03:19.671 22:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:19.672 22:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:19.672 22:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:19.672 22:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:03:19.672 22:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:03:19.672 22:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:19.672 22:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:19.672 22:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:19.672 22:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:19.672 22:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:19.672 22:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:19.672 22:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:19.672 22:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:19.672 22:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 175523412 kB' 'MemAvailable: 178396444 kB' 'Buffers: 3896 kB' 'Cached: 10195704 kB' 'SwapCached: 0 kB' 'Active: 7227284 kB' 'Inactive: 3507524 kB' 'Active(anon): 6835276 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3507524 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 538032 kB' 'Mapped: 216688 kB' 'Shmem: 6300068 kB' 'KReclaimable: 235904 kB' 'Slab: 823740 kB' 'SReclaimable: 235904 kB' 'SUnreclaim: 587836 kB' 'KernelStack: 20448 kB' 'PageTables: 8892 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 8361196 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315484 kB' 'VmallocChunk: 0 kB' 'Percpu: 79104 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3042260 kB' 'DirectMap2M: 16560128 kB' 'DirectMap1G: 182452224 kB'
00:03:19.672 22:19:43 setup.sh.hugepages.no_shrink_alloc -- [xtrace condensed: one "continue" per non-matching /proc/meminfo key from MemTotal through HardwareCorrupted]
00:03:19.673 22:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:19.673 22:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:03:19.673 22:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:19.673 22:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0
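get_meminfo has now been traced twice with the same shape: every /proc/meminfo key that fails the match against $get logs exactly one "continue", which is what produced the long runs condensed above, and the backslashes in the trace (\A\n\o\n\H\u\g\e...) are only bash xtrace escaping the quoted right-hand side of ==, not part of the script. A minimal re-creation of the function, reconstructed from the trace rather than copied from SPDK's setup/common.sh:

shopt -s extglob   # required by the +([0-9]) pattern below
get_meminfo() {    # reconstructed sketch, not the verbatim SPDK function
        local get=$1 node=${2:-} var val _
        local mem_f=/proc/meminfo mem
        # A node argument switches to the per-node file, whose lines carry a
        # "Node N " prefix that the expansion below strips off.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
                mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")
        while IFS=': ' read -r var val _; do
                [[ $var == "$get" ]] || continue   # the skipped keys in the trace
                echo "$val" && return 0
        done < <(printf '%s\n' "${mem[@]}")
        return 1
}

Against the dump above, get_meminfo AnonHugePages prints 0, matching the anon=0 assignment just traced.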
00:03:19.673 22:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:19.673 22:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:19.673 22:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:03:19.673 22:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:03:19.673 22:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:19.673 22:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:19.673 22:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:19.673 22:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:19.673 22:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:19.673 22:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:19.673 22:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:19.673 22:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:19.673 22:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 175522656 kB' 'MemAvailable: 178395688 kB' 'Buffers: 3896 kB' 'Cached: 10195708 kB' 'SwapCached: 0 kB' 'Active: 7226976 kB' 'Inactive: 3507524 kB' 'Active(anon): 6834968 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3507524 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 537792 kB' 'Mapped: 216688 kB' 'Shmem: 6300072 kB' 'KReclaimable: 235904 kB' 'Slab: 823740 kB' 'SReclaimable: 235904 kB' 'SUnreclaim: 587836 kB' 'KernelStack: 20448 kB' 'PageTables: 8896 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 8361212 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315452 kB' 'VmallocChunk: 0 kB' 'Percpu: 79104 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3042260 kB' 'DirectMap2M: 16560128 kB' 'DirectMap1G: 182452224 kB'
00:03:19.674 22:19:43 setup.sh.hugepages.no_shrink_alloc -- [xtrace condensed: one "continue" per non-matching /proc/meminfo key from MemTotal through HugePages_Rsvd]
00:03:19.675 22:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:19.675 22:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:03:19.675 22:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:19.675 22:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0
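With anon=0 and surp=0 established, the only lookup left is HugePages_Rsvd, traced next. In sketch form, the flow of verify_nr_hugepages as it appears at hugepages.sh@89-100 (a reconstruction under the assumption that the THP check at @96 gates the AnonHugePages read; not the verbatim script):

verify_nr_hugepages() {   # sketch of the traced flow
        local node sorted_t sorted_s surp resv anon=0   # mirrors @89-94
        # @96: "always [madvise] never" does not match *[never]*, so THP is
        # enabled and anonymous hugepages are sampled too.
        if [[ $(< /sys/kernel/mm/transparent_hugepage/enabled) != *'[never]'* ]]; then
                anon=$(get_meminfo AnonHugePages)   # @97: anon=0 above
        fi
        surp=$(get_meminfo HugePages_Surp)          # @99: surp=0 above
        resv=$(get_meminfo HugePages_Rsvd)          # @100: traced below
        echo "anon=$anon surp=$surp resv=$resv"
}

The meminfo dumps are also internally consistent on the hugepage side: HugePages_Total 1024 x Hugepagesize 2048 kB = 2097152 kB, exactly the Hugetlb figure reported.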
00:03:19.675 22:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:19.675 22:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:19.675 22:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:03:19.675 22:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:03:19.675 22:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:19.675 22:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:19.675 22:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:19.675 22:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:19.675 22:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:19.675 22:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:19.675 22:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:19.675 22:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:19.675 22:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 175522728 kB' 'MemAvailable: 178395760 kB' 'Buffers: 3896 kB' 'Cached: 10195708 kB' 'SwapCached: 0 kB' 'Active: 7226480 kB' 'Inactive: 3507524 kB' 'Active(anon): 6834472 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3507524 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 537768 kB' 'Mapped: 216612 kB' 'Shmem: 6300072 kB' 'KReclaimable: 235904 kB' 'Slab: 823708 kB' 'SReclaimable: 235904 kB' 'SUnreclaim: 587804 kB' 'KernelStack: 20448 kB' 'PageTables: 8892 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 8361236 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315452 kB' 'VmallocChunk: 0 kB' 'Percpu: 79104 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3042260 kB' 'DirectMap2M: 16560128 kB' 'DirectMap1G: 182452224 kB'
00:03:19.676 22:19:43 setup.sh.hugepages.no_shrink_alloc -- [xtrace condensed: one "continue" per non-matching /proc/meminfo key from MemTotal through PageTables; trace continues] 00:03:19.676 22:19:43
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.676 22:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.676 22:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.676 22:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.676 22:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.676 22:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.676 22:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.676 22:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.676 22:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.676 22:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.676 22:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.676 22:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.676 22:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.676 22:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.676 22:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.676 22:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.676 22:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.676 22:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.676 22:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.676 22:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.676 22:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.676 22:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.676 22:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.676 22:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.677 22:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.677 22:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.677 22:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.677 22:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.677 22:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.677 22:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.677 22:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.677 22:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.677 22:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.677 22:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:03:19.677 22:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.677 22:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.677 22:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.677 22:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.677 22:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.677 22:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.677 22:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.677 22:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.677 22:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.677 22:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.677 22:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.677 22:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.677 22:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.677 22:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.677 22:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.677 22:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.677 22:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.677 22:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.677 22:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.677 22:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.677 22:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.677 22:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.677 22:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.677 22:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.677 22:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.677 22:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.677 22:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.677 22:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.677 22:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.677 22:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.677 22:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.677 22:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.677 22:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.677 22:19:43 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.677 22:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.677 22:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.677 22:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.677 22:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.677 22:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.677 22:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.677 22:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.677 22:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.677 22:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.677 22:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.677 22:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.677 22:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.677 22:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.677 22:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.677 22:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.677 22:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.677 22:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.677 22:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.677 22:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:19.677 22:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:19.677 22:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:19.677 22:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:19.677 22:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:19.677 nr_hugepages=1024 00:03:19.677 22:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:19.677 resv_hugepages=0 00:03:19.677 22:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:19.677 surplus_hugepages=0 00:03:19.677 22:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:19.677 anon_hugepages=0 00:03:19.677 22:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:19.677 22:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:19.677 22:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:19.677 22:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:19.677 22:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:19.677 22:19:43 
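For readers following the trace: the long runs of "[[ key == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]" / "continue" pairs above are bash xtrace output (the backslashes are just how xtrace prints a quoted pattern), produced by one small loop in setup/common.sh that scans /proc/meminfo for a single key. A minimal sketch of that pattern, assuming the helper's shape matches the trace (the function name here is ours, not SPDK's):

#!/usr/bin/env bash
# Sketch of the get_meminfo pattern traced above: split each /proc/meminfo
# line on ': ', skip ("continue") every key that is not the requested one,
# then print the value and stop -- which is why HugePages_Rsvd yields 0.
get_meminfo_sketch() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue   # the long run of skips in the log
        echo "$val"
        return 0
    done < /proc/meminfo
    return 1   # key not present
}

get_meminfo_sketch HugePages_Rsvd   # prints 0 on the node traced above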
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:19.677 22:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:19.677 22:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:19.677 22:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:19.677 22:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:19.677 22:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:19.677 22:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:19.677 22:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 175522728 kB' 'MemAvailable: 178395760 kB' 'Buffers: 3896 kB' 'Cached: 10195768 kB' 'SwapCached: 0 kB' 'Active: 7226532 kB' 'Inactive: 3507524 kB' 'Active(anon): 6834524 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3507524 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 537712 kB' 'Mapped: 216612 kB' 'Shmem: 6300132 kB' 'KReclaimable: 235904 kB' 'Slab: 823708 kB' 'SReclaimable: 235904 kB' 'SUnreclaim: 587804 kB' 'KernelStack: 20448 kB' 'PageTables: 8892 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 8361256 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315452 kB' 'VmallocChunk: 0 kB' 'Percpu: 79104 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3042260 kB' 'DirectMap2M: 16560128 kB' 'DirectMap1G: 182452224 kB' 00:03:19.677 22:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.677 22:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.677 22:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.677 22:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.677 22:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.677 22:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.677 22:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.677 22:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.677 22:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.677 22:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.677 22:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.677 22:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.677 22:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.677 22:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
[xtrace condensed: the identical read/continue scan now searches for HugePages_Total, skipping every key from Buffers through FilePmdMapped] 00:03:19.679 22:19:43 setup.sh.hugepages.no_shrink_alloc --
setup/common.sh@31 -- # read -r var val _ 00:03:19.679 22:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.679 22:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.679 22:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.679 22:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.679 22:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.679 22:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.679 22:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.679 22:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.679 22:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.679 22:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.679 22:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.679 22:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.679 22:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:19.679 22:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:03:19.679 22:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:19.679 22:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:19.679 22:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:19.679 22:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:03:19.679 22:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:19.679 22:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:19.679 22:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:19.679 22:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:19.679 22:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:19.679 22:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:19.679 22:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:19.679 22:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:19.679 22:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:19.679 22:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:19.679 22:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:03:19.679 22:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:19.679 22:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:19.679 22:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:19.679 22:19:43 
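The get_nodes loop above discovers two NUMA nodes (no_nodes=2), and the next get_meminfo call is made with node=0, which switches the data source from /proc/meminfo to the per-node sysfs file that the following trace lines check for. A hedged sketch of that lookup; the sed-based prefix strip stands in for the extglob expansion ("${mem[@]#Node +([0-9]) }") shown in the trace:

# Per-node variant: /sys/devices/system/node/nodeN/meminfo prefixes every
# line with "Node N ", which must be stripped before the same "key: value"
# parse can run. An empty $node degenerates the path to node/node/meminfo,
# so the -e test fails and /proc/meminfo is kept -- exactly what the earlier
# HugePages_Total lookup did.
get_node_meminfo_sketch() {
    local get=$1 node=$2 var val _ mem_f=/proc/meminfo
    [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
        mem_f=/sys/devices/system/node/node$node/meminfo
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done < <(sed 's/^Node [0-9]* //' "$mem_f")
    return 1
}

get_node_meminfo_sketch HugePages_Surp 0   # prints 0 for node0 in the trace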
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:19.679 22:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:19.679 22:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:19.679 22:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:19.679 22:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.679 22:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.679 22:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97662684 kB' 'MemFree: 86096200 kB' 'MemUsed: 11566484 kB' 'SwapCached: 0 kB' 'Active: 5032940 kB' 'Inactive: 3335448 kB' 'Active(anon): 4875400 kB' 'Inactive(anon): 0 kB' 'Active(file): 157540 kB' 'Inactive(file): 3335448 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8193228 kB' 'Mapped: 84480 kB' 'AnonPages: 178380 kB' 'Shmem: 4700240 kB' 'KernelStack: 10872 kB' 'PageTables: 4444 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 126888 kB' 'Slab: 392156 kB' 'SReclaimable: 126888 kB' 'SUnreclaim: 265268 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:19.679 22:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.679 22:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.679 22:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.679 22:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.679 22:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.679 22:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.679 22:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.679 22:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.679 22:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.679 22:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.679 22:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.679 22:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.679 22:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.679 22:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.679 22:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:19.679 22:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:19.679 22:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:19.679 22:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:19.679 22:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:19.679 22:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ [xtrace condensed: the node0 scan for HugePages_Surp skips every key from Inactive through ShmemHugePages]
00:03:19.680 22:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:19.681 22:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
00:03:19.681 22:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:19.681 22:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:19.681 22:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:19.681 22:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
00:03:19.681 22:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:19.681 22:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:19.681 22:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:19.681 22:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
00:03:19.681 22:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:19.681 22:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:19.681 22:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:19.681 22:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
00:03:19.681 22:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:19.681 22:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:19.681 22:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:19.681 22:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
00:03:19.681 22:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:19.681 22:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:19.681 22:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:19.681 22:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
00:03:19.681 22:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:19.681 22:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:19.681 22:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:19.681 22:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:03:19.681 22:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:19.681 22:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:19.681 22:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:19.681 22:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:19.681 22:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:19.681 22:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
node0=1024 expecting 1024
22:19:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:03:19.681
00:03:19.681 real 0m5.408s
00:03:19.681 user 0m2.178s
00:03:19.681 sys 0m3.297s
00:03:19.681 22:19:43 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable
00:03:19.681 22:19:43 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x
00:03:19.681 ************************************
00:03:19.681 END TEST no_shrink_alloc
00:03:19.681 ************************************
00:03:19.681 22:19:43 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0
00:03:19.681 22:19:43 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp
00:03:19.681 22:19:43 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp
00:03:19.681 22:19:43 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}"
00:03:19.681 22:19:43 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:03:19.681 22:19:43 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0
00:03:19.681 22:19:43 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:03:19.681 22:19:43 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0
00:03:19.681 22:19:43 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}"
00:03:19.681 22:19:43 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:03:19.681 22:19:43 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0
00:03:19.681 22:19:43 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:03:19.681 22:19:43 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0
00:03:19.681 22:19:43 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes
00:03:19.681 22:19:43 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes
00:03:19.681
00:03:19.681 real 0m20.327s
00:03:19.681 user 0m7.723s
00:03:19.681 sys 0m11.983s
00:03:19.681 22:19:43 setup.sh.hugepages -- common/autotest_common.sh@1124 -- # xtrace_disable
00:03:19.681 22:19:43 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:03:19.681 ************************************
00:03:19.681 END TEST hugepages
00:03:19.681 ************************************
00:03:19.681 22:19:43 setup.sh -- common/autotest_common.sh@1142 -- # return 0
00:03:19.681 22:19:43 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh
00:03:19.681 22:19:43 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:03:19.681 22:19:43 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable
00:03:19.681 22:19:43 setup.sh -- common/autotest_common.sh@10 -- # set +x
00:03:19.681 ************************************
00:03:19.681 START TEST driver
00:03:19.681 ************************************
00:03:19.681 22:19:43 setup.sh.driver -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh
00:03:19.940 * Looking for test storage...
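The tail of the no_shrink_alloc run above is the meminfo scan pattern: the script reads the file field by field with IFS=': ', skips every field that is not the one it was asked for (HugePages_Surp here), then compares the node0 tally against the 1024 pages it expected before clear_hp hands everything back. A minimal sketch of both halves of that pattern, assuming only standard procfs/sysfs paths; the function names are illustrative, not SPDK's own:

get_meminfo_field() {
    # Print the value of one /proc/meminfo field, e.g. HugePages_Surp.
    # (Per-node figures live in /sys/devices/system/node/node*/meminfo.)
    local want=$1 var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$want" ]] || continue   # not the requested field, keep scanning
        echo "$val"
        return 0
    done < /proc/meminfo
    return 1                                # field not present
}

clear_hugepages() {
    # Hand back every reserved page of every size on every node (needs root);
    # these are the same sysfs writes the clear_hp loop above issues.
    local node hp
    for node in /sys/devices/system/node/node*; do
        for hp in "$node"/hugepages/hugepages-*; do
            echo 0 > "$hp/nr_hugepages"
        done
    done
}

get_meminfo_field HugePages_Surp    # expect 0 once nothing is over-allocated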
00:03:19.940 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:19.940 22:19:43 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:03:19.940 22:19:43 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:19.940 22:19:43 setup.sh.driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:24.133 22:19:47 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:03:24.133 22:19:47 setup.sh.driver -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:24.133 22:19:47 setup.sh.driver -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:24.133 22:19:47 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:03:24.133 ************************************ 00:03:24.133 START TEST guess_driver 00:03:24.133 ************************************ 00:03:24.133 22:19:47 setup.sh.driver.guess_driver -- common/autotest_common.sh@1123 -- # guess_driver 00:03:24.133 22:19:47 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:03:24.133 22:19:47 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:03:24.133 22:19:47 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:03:24.133 22:19:47 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:03:24.133 22:19:47 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:03:24.133 22:19:47 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:03:24.133 22:19:47 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:03:24.133 22:19:47 setup.sh.driver.guess_driver -- setup/driver.sh@25 -- # unsafe_vfio=N 00:03:24.133 22:19:47 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:03:24.133 22:19:47 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 174 > 0 )) 00:03:24.133 22:19:47 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # is_driver vfio_pci 00:03:24.133 22:19:47 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod vfio_pci 00:03:24.133 22:19:47 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep vfio_pci 00:03:24.133 22:19:47 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:03:24.133 22:19:47 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:03:24.133 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:03:24.133 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:03:24.133 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:03:24.133 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:03:24.133 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:03:24.133 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:03:24.133 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:03:24.133 22:19:47 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # return 0 00:03:24.133 22:19:47 setup.sh.driver.guess_driver -- setup/driver.sh@37 -- # echo vfio-pci 00:03:24.133 22:19:47 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=vfio-pci 00:03:24.133 22:19:47 setup.sh.driver.guess_driver 
-- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:03:24.133 22:19:47 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' 00:03:24.133 Looking for driver=vfio-pci 00:03:24.133 22:19:47 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:24.133 22:19:47 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 00:03:24.133 22:19:47 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:03:24.133 22:19:47 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:26.663 22:19:50 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:26.663 22:19:50 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:26.663 22:19:50 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:26.663 22:19:50 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:26.663 22:19:50 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:26.663 22:19:50 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:26.663 22:19:50 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:26.663 22:19:50 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:26.663 22:19:50 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:26.663 22:19:50 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:26.663 22:19:50 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:26.663 22:19:50 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:26.663 22:19:50 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:26.663 22:19:50 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:26.663 22:19:50 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:26.663 22:19:50 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:26.663 22:19:50 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:26.663 22:19:50 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:26.663 22:19:50 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:26.663 22:19:50 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:26.663 22:19:50 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:26.663 22:19:50 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:26.663 22:19:50 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:26.663 22:19:50 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:26.663 22:19:50 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:26.663 22:19:50 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:26.663 22:19:50 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:26.663 22:19:50 
setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:26.663 22:19:50 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:26.663 22:19:50 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:26.663 22:19:50 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:26.663 22:19:50 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:26.663 22:19:50 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:26.663 22:19:50 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:26.663 22:19:50 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:26.663 22:19:50 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:26.663 22:19:50 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:26.663 22:19:50 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:26.663 22:19:50 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:26.663 22:19:50 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:26.663 22:19:50 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:26.663 22:19:50 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:26.663 22:19:50 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:26.663 22:19:50 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:26.663 22:19:50 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:26.663 22:19:50 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:26.663 22:19:50 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:26.663 22:19:50 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:27.230 22:19:50 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:27.230 22:19:50 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:27.230 22:19:50 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:27.230 22:19:51 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:03:27.230 22:19:51 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:03:27.230 22:19:51 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:27.230 22:19:51 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:31.425 00:03:31.425 real 0m7.408s 00:03:31.425 user 0m2.112s 00:03:31.425 sys 0m3.778s 00:03:31.425 22:19:54 setup.sh.driver.guess_driver -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:31.425 22:19:54 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:03:31.425 ************************************ 00:03:31.425 END TEST guess_driver 00:03:31.425 ************************************ 00:03:31.425 22:19:54 setup.sh.driver -- common/autotest_common.sh@1142 -- # return 0 00:03:31.425 00:03:31.425 real 0m11.321s 00:03:31.425 user 0m3.220s 00:03:31.425 sys 0m5.759s 00:03:31.425 22:19:54 
setup.sh.driver -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:31.425 22:19:54 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:03:31.425 ************************************ 00:03:31.425 END TEST driver 00:03:31.425 ************************************ 00:03:31.425 22:19:54 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:03:31.425 22:19:54 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:03:31.425 22:19:54 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:31.425 22:19:54 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:31.425 22:19:54 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:31.425 ************************************ 00:03:31.425 START TEST devices 00:03:31.425 ************************************ 00:03:31.425 22:19:54 setup.sh.devices -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:03:31.425 * Looking for test storage... 00:03:31.425 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:31.425 22:19:55 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:03:31.425 22:19:55 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:03:31.425 22:19:55 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:31.425 22:19:55 setup.sh.devices -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:34.718 22:19:58 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:03:34.718 22:19:58 setup.sh.devices -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:03:34.718 22:19:58 setup.sh.devices -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:03:34.718 22:19:58 setup.sh.devices -- common/autotest_common.sh@1670 -- # local nvme bdf 00:03:34.718 22:19:58 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:34.718 22:19:58 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:03:34.718 22:19:58 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:03:34.718 22:19:58 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:34.718 22:19:58 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:34.718 22:19:58 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:03:34.718 22:19:58 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:03:34.718 22:19:58 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:03:34.718 22:19:58 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:03:34.718 22:19:58 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:03:34.718 22:19:58 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:03:34.718 22:19:58 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:03:34.718 22:19:58 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:03:34.718 22:19:58 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:5e:00.0 00:03:34.718 22:19:58 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\5\e\:\0\0\.\0* ]] 00:03:34.718 22:19:58 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:03:34.718 22:19:58 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:03:34.718 
22:19:58 setup.sh.devices -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:03:34.718 No valid GPT data, bailing 00:03:34.718 22:19:58 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:34.718 22:19:58 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:03:34.718 22:19:58 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:03:34.718 22:19:58 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:03:34.718 22:19:58 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:03:34.718 22:19:58 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:03:34.718 22:19:58 setup.sh.devices -- setup/common.sh@80 -- # echo 1000204886016 00:03:34.718 22:19:58 setup.sh.devices -- setup/devices.sh@204 -- # (( 1000204886016 >= min_disk_size )) 00:03:34.718 22:19:58 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:03:34.718 22:19:58 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:5e:00.0 00:03:34.718 22:19:58 setup.sh.devices -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:03:34.718 22:19:58 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:03:34.718 22:19:58 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:03:34.718 22:19:58 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:34.718 22:19:58 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:34.718 22:19:58 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:03:34.718 ************************************ 00:03:34.718 START TEST nvme_mount 00:03:34.718 ************************************ 00:03:34.718 22:19:58 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1123 -- # nvme_mount 00:03:34.718 22:19:58 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:03:34.718 22:19:58 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:03:34.718 22:19:58 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:34.718 22:19:58 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:34.718 22:19:58 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:03:34.718 22:19:58 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:03:34.718 22:19:58 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:03:34.718 22:19:58 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:03:34.718 22:19:58 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:03:34.718 22:19:58 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:03:34.718 22:19:58 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:03:34.718 22:19:58 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:03:34.718 22:19:58 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:34.718 22:19:58 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:34.718 22:19:58 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:03:34.718 22:19:58 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- 
# (( part <= part_no )) 00:03:34.718 22:19:58 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:03:34.718 22:19:58 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:03:34.718 22:19:58 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:03:35.287 Creating new GPT entries in memory. 00:03:35.287 GPT data structures destroyed! You may now partition the disk using fdisk or 00:03:35.287 other utilities. 00:03:35.287 22:19:59 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:03:35.287 22:19:59 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:35.287 22:19:59 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:03:35.287 22:19:59 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:03:35.287 22:19:59 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:03:36.667 Creating new GPT entries in memory. 00:03:36.667 The operation has completed successfully. 00:03:36.667 22:20:00 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:03:36.667 22:20:00 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:36.667 22:20:00 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 3998192 00:03:36.667 22:20:00 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:36.667 22:20:00 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size= 00:03:36.667 22:20:00 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:36.667 22:20:00 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:03:36.667 22:20:00 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:03:36.667 22:20:00 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:36.667 22:20:00 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:5e:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:36.667 22:20:00 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:5e:00.0 00:03:36.667 22:20:00 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:03:36.667 22:20:00 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:36.667 22:20:00 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:36.667 22:20:00 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:03:36.667 22:20:00 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:36.667 22:20:00 
setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:03:36.667 22:20:00 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:03:36.667 22:20:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:36.667 22:20:00 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5e:00.0 00:03:36.667 22:20:00 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:03:36.667 22:20:00 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:36.667 22:20:00 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:39.206 22:20:02 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:39.206 22:20:02 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:03:39.206 22:20:02 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:03:39.206 22:20:02 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:39.206 22:20:02 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:39.206 22:20:02 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:39.206 22:20:02 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:39.206 22:20:02 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:39.206 22:20:02 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:39.206 22:20:02 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:39.206 22:20:02 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:39.206 22:20:02 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:39.206 22:20:02 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:39.206 22:20:02 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:39.206 22:20:02 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:39.206 22:20:02 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:39.206 22:20:02 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:39.206 22:20:02 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:39.206 22:20:02 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:39.206 22:20:02 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:39.206 22:20:02 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:39.206 22:20:02 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:39.206 22:20:02 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:39.206 22:20:02 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:39.206 22:20:02 setup.sh.devices.nvme_mount -- 
setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:39.206 22:20:02 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:39.206 22:20:02 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:39.206 22:20:02 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:39.206 22:20:02 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:39.206 22:20:02 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:39.206 22:20:02 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:39.206 22:20:02 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:39.206 22:20:02 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:39.206 22:20:02 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:39.206 22:20:02 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:39.206 22:20:02 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:39.206 22:20:03 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:39.206 22:20:03 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:03:39.206 22:20:03 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:39.206 22:20:03 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:39.206 22:20:03 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:39.206 22:20:03 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:03:39.206 22:20:03 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:39.206 22:20:03 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:39.206 22:20:03 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:39.206 22:20:03 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:03:39.206 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:03:39.206 22:20:03 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:03:39.206 22:20:03 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:03:39.525 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:03:39.525 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54 00:03:39.525 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:03:39.525 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:03:39.525 22:20:03 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:03:39.525 22:20:03 setup.sh.devices.nvme_mount -- 
setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:03:39.525 22:20:03 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:39.525 22:20:03 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:03:39.525 22:20:03 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:03:39.525 22:20:03 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:39.525 22:20:03 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:5e:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:39.525 22:20:03 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:5e:00.0 00:03:39.525 22:20:03 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:03:39.525 22:20:03 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:39.525 22:20:03 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:39.525 22:20:03 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:03:39.525 22:20:03 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:39.525 22:20:03 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:03:39.525 22:20:03 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:03:39.525 22:20:03 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:39.525 22:20:03 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5e:00.0 00:03:39.525 22:20:03 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:03:39.525 22:20:03 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:39.525 22:20:03 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:42.059 22:20:05 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:42.059 22:20:05 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:03:42.059 22:20:05 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:03:42.059 22:20:05 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:42.059 22:20:05 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:42.059 22:20:05 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:42.059 22:20:05 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:42.059 22:20:05 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:42.059 22:20:05 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 
== \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:42.059 22:20:05 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:42.059 22:20:05 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:42.059 22:20:05 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:42.059 22:20:05 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:42.059 22:20:05 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:42.059 22:20:05 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:42.059 22:20:05 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:42.059 22:20:05 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:42.059 22:20:05 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:42.059 22:20:05 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:42.059 22:20:05 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:42.059 22:20:05 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:42.059 22:20:05 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:42.060 22:20:05 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:42.060 22:20:05 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:42.060 22:20:05 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:42.060 22:20:05 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:42.060 22:20:05 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:42.060 22:20:05 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:42.060 22:20:05 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:42.060 22:20:05 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:42.060 22:20:05 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:42.060 22:20:05 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:42.060 22:20:05 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:42.060 22:20:05 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:42.060 22:20:05 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:42.060 22:20:05 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:42.319 22:20:06 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:42.319 22:20:06 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:03:42.319 22:20:06 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:42.319 22:20:06 setup.sh.devices.nvme_mount -- 
setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:42.319 22:20:06 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:42.319 22:20:06 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:42.319 22:20:06 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:5e:00.0 data@nvme0n1 '' '' 00:03:42.319 22:20:06 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:5e:00.0 00:03:42.319 22:20:06 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:03:42.319 22:20:06 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:03:42.319 22:20:06 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:03:42.319 22:20:06 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:03:42.319 22:20:06 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:03:42.319 22:20:06 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:03:42.319 22:20:06 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:42.319 22:20:06 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5e:00.0 00:03:42.319 22:20:06 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:03:42.319 22:20:06 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:42.319 22:20:06 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:44.851 22:20:08 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:44.851 22:20:08 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:03:44.851 22:20:08 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:03:44.851 22:20:08 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:44.851 22:20:08 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:44.851 22:20:08 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:44.851 22:20:08 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:44.851 22:20:08 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:44.851 22:20:08 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:44.851 22:20:08 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:44.851 22:20:08 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:44.851 22:20:08 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:44.851 22:20:08 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:44.851 22:20:08 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:44.851 22:20:08 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 
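The PCI lines being scanned here come from re-running scripts/setup.sh config with PCI_ALLOWED pinned to the disk under test: verify reads each "<bdf> ... <status>" line with read -r pci _ _ status and accepts only when the status for that BDF names the expected active mount or holder ("Active devices: ..., so not binding PCI dev"). A compact sketch of that check, assuming the output shape shown in the log; verify_not_bound is an illustrative name:

verify_not_bound() {
    local want_bdf=$1 want_mount=$2 pci _ status found=0
    while read -r pci _ _ status; do
        [[ $pci == "$want_bdf" ]] || continue             # other PCI functions are ignored
        # busy disks report e.g. "Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev"
        [[ $status == *"Active devices: "*"$want_mount"* ]] && found=1
    done < <(PCI_ALLOWED=$want_bdf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config)
    (( found == 1 ))                                      # fail if the expected mount was not seen
}

verify_not_bound 0000:5e:00.0 nvme0n1:nvme0n1p1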
00:03:44.851 22:20:08 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:44.851 22:20:08 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:44.851 22:20:08 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:44.851 22:20:08 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:44.851 22:20:08 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:44.851 22:20:08 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:44.851 22:20:08 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:44.852 22:20:08 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:44.852 22:20:08 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:44.852 22:20:08 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:44.852 22:20:08 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:44.852 22:20:08 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:44.852 22:20:08 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:44.852 22:20:08 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:44.852 22:20:08 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:44.852 22:20:08 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:44.852 22:20:08 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:44.852 22:20:08 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:44.852 22:20:08 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:44.852 22:20:08 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:44.852 22:20:08 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:44.852 22:20:08 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:44.852 22:20:08 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:03:44.852 22:20:08 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:03:44.852 22:20:08 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:03:44.852 22:20:08 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:44.852 22:20:08 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:44.852 22:20:08 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:03:44.852 22:20:08 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:03:44.852 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:03:44.852 00:03:44.852 real 0m10.618s 00:03:44.852 user 0m3.075s 00:03:44.852 sys 0m5.365s 00:03:44.852 22:20:08 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:44.852 22:20:08 setup.sh.devices.nvme_mount -- 
common/autotest_common.sh@10 -- # set +x 00:03:44.852 ************************************ 00:03:44.852 END TEST nvme_mount 00:03:44.852 ************************************ 00:03:45.110 22:20:08 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 0 00:03:45.110 22:20:08 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:03:45.110 22:20:08 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:45.110 22:20:08 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:45.110 22:20:08 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:03:45.110 ************************************ 00:03:45.110 START TEST dm_mount 00:03:45.110 ************************************ 00:03:45.110 22:20:08 setup.sh.devices.dm_mount -- common/autotest_common.sh@1123 -- # dm_mount 00:03:45.110 22:20:08 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:03:45.110 22:20:08 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:03:45.110 22:20:08 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:03:45.110 22:20:08 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:03:45.110 22:20:08 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:03:45.110 22:20:08 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:03:45.110 22:20:08 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:03:45.110 22:20:08 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:03:45.110 22:20:08 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:03:45.110 22:20:08 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:03:45.110 22:20:08 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:03:45.110 22:20:08 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:45.110 22:20:08 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:45.110 22:20:08 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:03:45.110 22:20:08 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:45.110 22:20:08 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:45.110 22:20:08 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:03:45.110 22:20:08 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:45.110 22:20:08 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:03:45.110 22:20:08 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:03:45.110 22:20:08 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:03:46.044 Creating new GPT entries in memory. 00:03:46.044 GPT data structures destroyed! You may now partition the disk using fdisk or 00:03:46.044 other utilities. 00:03:46.044 22:20:09 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:03:46.044 22:20:09 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:46.044 22:20:09 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 
2048 : part_end + 1 )) 00:03:46.044 22:20:09 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:03:46.044 22:20:09 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:03:46.979 Creating new GPT entries in memory. 00:03:46.979 The operation has completed successfully. 00:03:46.979 22:20:10 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:03:46.979 22:20:10 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:46.979 22:20:10 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:03:46.979 22:20:10 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:03:46.979 22:20:10 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:03:48.360 The operation has completed successfully. 00:03:48.360 22:20:11 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:03:48.360 22:20:11 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:48.360 22:20:11 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 4002226 00:03:48.360 22:20:11 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:03:48.360 22:20:11 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:48.360 22:20:11 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:03:48.360 22:20:11 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:03:48.360 22:20:11 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:03:48.360 22:20:11 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:03:48.360 22:20:11 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:03:48.360 22:20:11 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:03:48.360 22:20:11 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:03:48.360 22:20:11 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-2 00:03:48.360 22:20:11 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-2 00:03:48.360 22:20:11 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-2 ]] 00:03:48.360 22:20:11 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-2 ]] 00:03:48.360 22:20:11 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:48.360 22:20:11 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount size= 00:03:48.360 22:20:11 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:48.360 22:20:11 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:03:48.360 22:20:11 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:03:48.360 22:20:12 setup.sh.devices.dm_mount -- 
setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:48.360 22:20:12 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:5e:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:03:48.360 22:20:12 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:5e:00.0 00:03:48.360 22:20:12 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:03:48.360 22:20:12 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:48.360 22:20:12 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:03:48.360 22:20:12 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:03:48.360 22:20:12 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:03:48.360 22:20:12 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # : 00:03:48.360 22:20:12 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:03:48.360 22:20:12 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:48.360 22:20:12 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5e:00.0 00:03:48.360 22:20:12 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:03:48.360 22:20:12 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:48.360 22:20:12 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:50.896 22:20:14 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:50.897 22:20:14 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-2,holder@nvme0n1p2:dm-2,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:03:50.897 22:20:14 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:03:50.897 22:20:14 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:50.897 22:20:14 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:50.897 22:20:14 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:50.897 22:20:14 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:50.897 22:20:14 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:50.897 22:20:14 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:50.897 22:20:14 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:50.897 22:20:14 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:50.897 22:20:14 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:50.897 22:20:14 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:50.897 22:20:14 setup.sh.devices.dm_mount -- 
setup/devices.sh@60 -- # read -r pci _ _ status 00:03:50.897 22:20:14 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:50.897 22:20:14 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:50.897 22:20:14 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:50.897 22:20:14 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:50.897 22:20:14 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:50.897 22:20:14 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:50.897 22:20:14 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:50.897 22:20:14 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:50.897 22:20:14 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:50.897 22:20:14 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:50.897 22:20:14 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:50.897 22:20:14 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:50.897 22:20:14 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:50.897 22:20:14 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:50.897 22:20:14 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:50.897 22:20:14 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:50.897 22:20:14 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:50.897 22:20:14 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:50.897 22:20:14 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:50.897 22:20:14 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:50.897 22:20:14 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:50.897 22:20:14 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:50.897 22:20:14 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:50.897 22:20:14 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount ]] 00:03:50.897 22:20:14 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:50.897 22:20:14 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:03:50.897 22:20:14 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:03:50.897 22:20:14 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:50.897 22:20:14 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:5e:00.0 holder@nvme0n1p1:dm-2,holder@nvme0n1p2:dm-2 '' '' 00:03:50.897 22:20:14 
setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:5e:00.0 00:03:50.897 22:20:14 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-2,holder@nvme0n1p2:dm-2 00:03:50.897 22:20:14 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:03:50.897 22:20:14 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:03:50.897 22:20:14 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:03:50.897 22:20:14 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:03:50.897 22:20:14 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:03:50.897 22:20:14 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:50.897 22:20:14 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5e:00.0 00:03:50.897 22:20:14 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:03:50.897 22:20:14 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:50.897 22:20:14 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:52.802 22:20:16 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:52.802 22:20:16 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-2,holder@nvme0n1p2:dm-2, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\2\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\2* ]] 00:03:52.802 22:20:16 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:03:52.802 22:20:16 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:52.802 22:20:16 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:52.802 22:20:16 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:52.802 22:20:16 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:52.802 22:20:16 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:52.802 22:20:16 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:52.802 22:20:16 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:52.802 22:20:16 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:52.802 22:20:16 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:52.802 22:20:16 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:52.802 22:20:16 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:52.802 22:20:16 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:52.802 22:20:16 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:52.802 22:20:16 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:52.802 22:20:16 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:52.803 22:20:16 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:52.803 22:20:16 setup.sh.devices.dm_mount -- 
setup/devices.sh@60 -- # read -r pci _ _ status 00:03:52.803 22:20:16 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:52.803 22:20:16 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:52.803 22:20:16 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:52.803 22:20:16 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:52.803 22:20:16 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:52.803 22:20:16 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:52.803 22:20:16 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:52.803 22:20:16 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:52.803 22:20:16 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:52.803 22:20:16 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:52.803 22:20:16 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:52.803 22:20:16 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:52.803 22:20:16 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:52.803 22:20:16 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:52.803 22:20:16 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:52.803 22:20:16 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:53.062 22:20:16 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:53.062 22:20:16 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:03:53.062 22:20:16 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:03:53.062 22:20:16 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:03:53.062 22:20:16 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:53.062 22:20:16 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:03:53.062 22:20:16 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:03:53.062 22:20:16 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:53.062 22:20:16 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:03:53.062 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:03:53.062 22:20:16 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:03:53.062 22:20:16 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:03:53.062 00:03:53.062 real 0m8.075s 00:03:53.062 user 0m1.758s 00:03:53.062 sys 0m3.227s 00:03:53.062 22:20:16 setup.sh.devices.dm_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:53.062 22:20:16 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:03:53.062 ************************************ 00:03:53.062 END TEST dm_mount 00:03:53.062 ************************************ 00:03:53.062 22:20:16 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 
0
00:03:53.062 22:20:16 setup.sh.devices -- setup/devices.sh@1 -- # cleanup
00:03:53.062 22:20:16 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme
00:03:53.062 22:20:16 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
00:03:53.062 22:20:16 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]]
00:03:53.062 22:20:16 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1
00:03:53.062 22:20:16 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]]
00:03:53.062 22:20:16 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1
00:03:53.322 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54
00:03:53.322 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54
00:03:53.322 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa
00:03:53.322 /dev/nvme0n1: calling ioctl to re-read partition table: Success
00:03:53.322 22:20:17 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm
00:03:53.322 22:20:17 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount
00:03:53.322 22:20:17 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]]
00:03:53.322 22:20:17 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]]
00:03:53.322 22:20:17 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]]
00:03:53.322 22:20:17 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]]
00:03:53.322 22:20:17 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1
00:03:53.322
00:03:53.322 real 0m22.321s
00:03:53.322 user 0m6.077s
00:03:53.322 sys 0m10.851s
00:03:53.322 22:20:17 setup.sh.devices -- common/autotest_common.sh@1124 -- # xtrace_disable
00:03:53.322 22:20:17 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x
00:03:53.322 ************************************
00:03:53.322 END TEST devices
00:03:53.322 ************************************
00:03:53.580 22:20:17 setup.sh -- common/autotest_common.sh@1142 -- # return 0
00:03:53.580
00:03:53.580 real 1m13.353s
00:03:53.580 user 0m23.402s
00:03:53.580 sys 0m40.159s
00:03:53.580 22:20:17 setup.sh -- common/autotest_common.sh@1124 -- # xtrace_disable
00:03:53.580 22:20:17 setup.sh -- common/autotest_common.sh@10 -- # set +x
00:03:53.580 ************************************
00:03:53.580 END TEST setup.sh
00:03:53.580 ************************************
00:03:53.580 22:20:17 -- common/autotest_common.sh@1142 -- # return 0
00:03:53.580 22:20:17 -- spdk/autotest.sh@128 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status
00:03:56.115 Hugepages
00:03:56.115 node hugesize free / total
00:03:56.115 node0 1048576kB 0 / 0
00:03:56.115 node0 2048kB 2048 / 2048
00:03:56.115 node1 1048576kB 0 / 0
00:03:56.115 node1 2048kB 0 / 0
00:03:56.115
00:03:56.115 Type BDF Vendor Device NUMA Driver Device Block devices
00:03:56.115 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - -
00:03:56.115 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - -
00:03:56.115 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - -
00:03:56.115 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - -
00:03:56.115 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - -
00:03:56.115 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - -
00:03:56.115 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - -
00:03:56.115 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - -
00:03:56.115 NVMe 0000:5e:00.0 8086 0a54 0 nvme nvme0 nvme0n1
00:03:56.115 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - -
00:03:56.115 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - -
00:03:56.115 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - -
00:03:56.115 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - -
00:03:56.115 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - -
00:03:56.115 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - -
00:03:56.115 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - -
00:03:56.115 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - -
00:03:56.115 22:20:19 -- spdk/autotest.sh@130 -- # uname -s
00:03:56.115 22:20:19 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]]
00:03:56.115 22:20:19 -- spdk/autotest.sh@132 -- # nvme_namespace_revert
00:03:56.115 22:20:19 -- common/autotest_common.sh@1531 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:03:58.658 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci
00:03:58.658 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci
00:03:58.658 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci
00:03:58.658 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci
00:03:58.658 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci
00:03:58.658 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci
00:03:58.658 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci
00:03:58.658 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci
00:03:58.658 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci
00:03:58.658 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci
00:03:58.658 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci
00:03:58.917 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci
00:03:58.918 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci
00:03:58.918 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci
00:03:58.918 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci
00:03:58.918 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci
00:03:59.486 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci
00:03:59.747 22:20:23 -- common/autotest_common.sh@1532 -- # sleep 1
00:04:00.736 22:20:24 -- common/autotest_common.sh@1533 -- # bdfs=()
00:04:00.736 22:20:24 -- common/autotest_common.sh@1533 -- # local bdfs
00:04:00.736 22:20:24 -- common/autotest_common.sh@1534 -- # bdfs=($(get_nvme_bdfs))
00:04:00.736 22:20:24 -- common/autotest_common.sh@1534 -- # get_nvme_bdfs
00:04:00.736 22:20:24 -- common/autotest_common.sh@1513 -- # bdfs=()
00:04:00.736 22:20:24 -- common/autotest_common.sh@1513 -- # local bdfs
00:04:00.736 22:20:24 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
00:04:00.736 22:20:24 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh
00:04:00.736 22:20:24 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr'
00:04:00.736 22:20:24 -- common/autotest_common.sh@1515 -- # (( 1 == 0 ))
00:04:00.736 22:20:24 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:5e:00.0
00:04:00.736 22:20:24 -- common/autotest_common.sh@1536 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:04:03.274 Waiting for block devices as requested
00:04:03.274 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme
00:04:03.274 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma
00:04:03.274 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma
00:04:03.274 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma
00:04:03.274 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma
00:04:03.274 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma
00:04:03.274 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma
00:04:03.274 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma
00:04:03.534 0000:00:04.0 (8086 2021):
vfio-pci -> ioatdma 00:04:03.534 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:04:03.534 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:04:03.534 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:04:03.810 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:04:03.810 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:04:03.810 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:04:04.071 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:04:04.071 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:04:04.071 22:20:27 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:04:04.071 22:20:27 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:5e:00.0 00:04:04.071 22:20:27 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 00:04:04.071 22:20:27 -- common/autotest_common.sh@1502 -- # grep 0000:5e:00.0/nvme/nvme 00:04:04.071 22:20:27 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 00:04:04.071 22:20:27 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 ]] 00:04:04.071 22:20:27 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 00:04:04.071 22:20:27 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme0 00:04:04.071 22:20:27 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme0 00:04:04.071 22:20:27 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme0 ]] 00:04:04.071 22:20:27 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme0 00:04:04.071 22:20:27 -- common/autotest_common.sh@1545 -- # grep oacs 00:04:04.071 22:20:27 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:04:04.071 22:20:27 -- common/autotest_common.sh@1545 -- # oacs=' 0xe' 00:04:04.071 22:20:27 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:04:04.071 22:20:27 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:04:04.071 22:20:27 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme0 00:04:04.071 22:20:27 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:04:04.071 22:20:27 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:04:04.071 22:20:27 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:04:04.071 22:20:27 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:04:04.071 22:20:27 -- common/autotest_common.sh@1557 -- # continue 00:04:04.071 22:20:27 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:04:04.071 22:20:27 -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:04.071 22:20:27 -- common/autotest_common.sh@10 -- # set +x 00:04:04.071 22:20:28 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:04:04.071 22:20:28 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:04.071 22:20:28 -- common/autotest_common.sh@10 -- # set +x 00:04:04.071 22:20:28 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:06.607 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:06.607 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:06.607 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:06.607 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:06.866 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:06.866 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:06.866 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:06.866 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:06.866 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:06.866 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 
00:04:06.866 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:06.866 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:06.866 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:06.866 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:06.866 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:06.866 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:07.804 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:04:07.804 22:20:31 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:04:07.804 22:20:31 -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:07.804 22:20:31 -- common/autotest_common.sh@10 -- # set +x 00:04:07.804 22:20:31 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:04:07.804 22:20:31 -- common/autotest_common.sh@1591 -- # mapfile -t bdfs 00:04:07.804 22:20:31 -- common/autotest_common.sh@1591 -- # get_nvme_bdfs_by_id 0x0a54 00:04:07.804 22:20:31 -- common/autotest_common.sh@1577 -- # bdfs=() 00:04:07.804 22:20:31 -- common/autotest_common.sh@1577 -- # local bdfs 00:04:07.804 22:20:31 -- common/autotest_common.sh@1579 -- # get_nvme_bdfs 00:04:07.804 22:20:31 -- common/autotest_common.sh@1513 -- # bdfs=() 00:04:07.804 22:20:31 -- common/autotest_common.sh@1513 -- # local bdfs 00:04:07.804 22:20:31 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:07.804 22:20:31 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:07.804 22:20:31 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:04:07.804 22:20:31 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:04:07.804 22:20:31 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:5e:00.0 00:04:07.804 22:20:31 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:04:07.804 22:20:31 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:5e:00.0/device 00:04:07.804 22:20:31 -- common/autotest_common.sh@1580 -- # device=0x0a54 00:04:07.804 22:20:31 -- common/autotest_common.sh@1581 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:04:07.804 22:20:31 -- common/autotest_common.sh@1582 -- # bdfs+=($bdf) 00:04:07.804 22:20:31 -- common/autotest_common.sh@1586 -- # printf '%s\n' 0000:5e:00.0 00:04:07.804 22:20:31 -- common/autotest_common.sh@1592 -- # [[ -z 0000:5e:00.0 ]] 00:04:07.804 22:20:31 -- common/autotest_common.sh@1596 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:07.804 22:20:31 -- common/autotest_common.sh@1597 -- # spdk_tgt_pid=4010846 00:04:07.804 22:20:31 -- common/autotest_common.sh@1598 -- # waitforlisten 4010846 00:04:07.804 22:20:31 -- common/autotest_common.sh@829 -- # '[' -z 4010846 ']' 00:04:07.804 22:20:31 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:07.804 22:20:31 -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:07.804 22:20:31 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:07.804 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:07.804 22:20:31 -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:07.804 22:20:31 -- common/autotest_common.sh@10 -- # set +x 00:04:07.804 [2024-07-15 22:20:31.753935] Starting SPDK v24.09-pre git sha1 f8598a71f / DPDK 24.03.0 initialization... 
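The get_nvme_bdfs trace above assembles the controller list by piping scripts/gen_nvme.sh through jq. A minimal standalone sketch of that enumeration, using the repo path from this run (the error message on an empty list is an addition for illustration, not part of the traced helper):

    #!/usr/bin/env bash
    # Sketch of the enumeration traced above, not the test's own code.
    rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    # gen_nvme.sh emits a JSON bdev config; jq extracts each controller's PCI address.
    mapfile -t bdfs < <("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')
    (( ${#bdfs[@]} > 0 )) || { echo 'no NVMe controllers found' >&2; exit 1; }
    printf '%s\n' "${bdfs[@]}"    # prints 0000:5e:00.0 on this node

On this machine the list has exactly one entry, which is why the trace shows (( 1 == 0 )) evaluating false before printf emits 0000:5e:00.0.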
00:04:07.804 [2024-07-15 22:20:31.753980] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4010846 ]
00:04:08.064 EAL: No free 2048 kB hugepages reported on node 1
00:04:08.064 [2024-07-15 22:20:31.804676] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:04:08.064 [2024-07-15 22:20:31.884723] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:04:08.632 22:20:32 -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:04:08.632 22:20:32 -- common/autotest_common.sh@862 -- # return 0
00:04:08.632 22:20:32 -- common/autotest_common.sh@1600 -- # bdf_id=0
00:04:08.632 22:20:32 -- common/autotest_common.sh@1601 -- # for bdf in "${bdfs[@]}"
00:04:08.632 22:20:32 -- common/autotest_common.sh@1602 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:5e:00.0
00:04:11.917 nvme0n1
00:04:11.917 22:20:35 -- common/autotest_common.sh@1604 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test
00:04:11.917 [2024-07-15 22:20:35.720290] vbdev_opal_rpc.c: 125:rpc_bdev_nvme_opal_revert: *ERROR*: nvme0 not support opal
00:04:11.917 request:
00:04:11.917 {
00:04:11.917 "nvme_ctrlr_name": "nvme0",
00:04:11.917 "password": "test",
00:04:11.917 "method": "bdev_nvme_opal_revert",
00:04:11.917 "req_id": 1
00:04:11.917 }
00:04:11.917 Got JSON-RPC error response
00:04:11.917 response:
00:04:11.917 {
00:04:11.917 "code": -32602,
00:04:11.917 "message": "Invalid parameters"
00:04:11.917 }
00:04:11.917 22:20:35 -- common/autotest_common.sh@1604 -- # true
00:04:11.917 22:20:35 -- common/autotest_common.sh@1605 -- # (( ++bdf_id ))
00:04:11.917 22:20:35 -- common/autotest_common.sh@1608 -- # killprocess 4010846
00:04:11.917 22:20:35 -- common/autotest_common.sh@948 -- # '[' -z 4010846 ']'
00:04:11.917 22:20:35 -- common/autotest_common.sh@952 -- # kill -0 4010846
00:04:11.917 22:20:35 -- common/autotest_common.sh@953 -- # uname
00:04:11.917 22:20:35 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:04:11.917 22:20:35 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4010846
00:04:11.917 22:20:35 -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:04:11.917 22:20:35 -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:04:11.917 22:20:35 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4010846'
00:04:11.917 killing process with pid 4010846
00:04:11.917 22:20:35 -- common/autotest_common.sh@967 -- # kill 4010846
00:04:11.917 22:20:35 -- common/autotest_common.sh@972 -- # wait 4010846
00:04:13.822 22:20:37 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']'
00:04:13.822 22:20:37 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']'
00:04:13.822 22:20:37 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]]
00:04:13.822 22:20:37 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]]
00:04:13.822 22:20:37 -- spdk/autotest.sh@162 -- # timing_enter lib
00:04:13.822 22:20:37 -- common/autotest_common.sh@722 -- # xtrace_disable
00:04:13.822 22:20:37 -- common/autotest_common.sh@10 -- # set +x
00:04:13.822 22:20:37 -- spdk/autotest.sh@164 -- # [[ 0 -eq 1 ]]
00:04:13.822 22:20:37 -- spdk/autotest.sh@168 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh
00:04:13.822 22:20:37 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1
']' 00:04:13.822 22:20:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:13.822 22:20:37 -- common/autotest_common.sh@10 -- # set +x 00:04:13.822 ************************************ 00:04:13.822 START TEST env 00:04:13.822 ************************************ 00:04:13.822 22:20:37 env -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:04:13.822 * Looking for test storage... 00:04:13.822 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:04:13.822 22:20:37 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:04:13.822 22:20:37 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:13.822 22:20:37 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:13.822 22:20:37 env -- common/autotest_common.sh@10 -- # set +x 00:04:13.822 ************************************ 00:04:13.822 START TEST env_memory 00:04:13.822 ************************************ 00:04:13.822 22:20:37 env.env_memory -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:04:13.822 00:04:13.822 00:04:13.822 CUnit - A unit testing framework for C - Version 2.1-3 00:04:13.822 http://cunit.sourceforge.net/ 00:04:13.822 00:04:13.822 00:04:13.822 Suite: memory 00:04:13.822 Test: alloc and free memory map ...[2024-07-15 22:20:37.530564] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:13.822 passed 00:04:13.822 Test: mem map translation ...[2024-07-15 22:20:37.549726] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:13.822 [2024-07-15 22:20:37.549742] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:13.822 [2024-07-15 22:20:37.549778] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:13.822 [2024-07-15 22:20:37.549784] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:13.822 passed 00:04:13.822 Test: mem map registration ...[2024-07-15 22:20:37.588494] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:04:13.822 [2024-07-15 22:20:37.588513] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:04:13.822 passed 00:04:13.822 Test: mem map adjacent registrations ...passed 00:04:13.822 00:04:13.822 Run Summary: Type Total Ran Passed Failed Inactive 00:04:13.822 suites 1 1 n/a 0 0 00:04:13.822 tests 4 4 4 0 0 00:04:13.822 asserts 152 152 152 0 n/a 00:04:13.822 00:04:13.822 Elapsed time = 0.139 seconds 00:04:13.822 00:04:13.822 real 0m0.151s 00:04:13.822 user 0m0.141s 00:04:13.822 sys 0m0.010s 00:04:13.822 22:20:37 env.env_memory -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:13.822 22:20:37 env.env_memory -- common/autotest_common.sh@10 -- # 
set +x 00:04:13.822 ************************************ 00:04:13.822 END TEST env_memory 00:04:13.822 ************************************ 00:04:13.822 22:20:37 env -- common/autotest_common.sh@1142 -- # return 0 00:04:13.822 22:20:37 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:13.822 22:20:37 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:13.822 22:20:37 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:13.822 22:20:37 env -- common/autotest_common.sh@10 -- # set +x 00:04:13.822 ************************************ 00:04:13.822 START TEST env_vtophys 00:04:13.822 ************************************ 00:04:13.822 22:20:37 env.env_vtophys -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:13.822 EAL: lib.eal log level changed from notice to debug 00:04:13.822 EAL: Detected lcore 0 as core 0 on socket 0 00:04:13.822 EAL: Detected lcore 1 as core 1 on socket 0 00:04:13.822 EAL: Detected lcore 2 as core 2 on socket 0 00:04:13.822 EAL: Detected lcore 3 as core 3 on socket 0 00:04:13.822 EAL: Detected lcore 4 as core 4 on socket 0 00:04:13.822 EAL: Detected lcore 5 as core 5 on socket 0 00:04:13.822 EAL: Detected lcore 6 as core 6 on socket 0 00:04:13.822 EAL: Detected lcore 7 as core 8 on socket 0 00:04:13.822 EAL: Detected lcore 8 as core 9 on socket 0 00:04:13.822 EAL: Detected lcore 9 as core 10 on socket 0 00:04:13.822 EAL: Detected lcore 10 as core 11 on socket 0 00:04:13.822 EAL: Detected lcore 11 as core 12 on socket 0 00:04:13.822 EAL: Detected lcore 12 as core 13 on socket 0 00:04:13.822 EAL: Detected lcore 13 as core 16 on socket 0 00:04:13.822 EAL: Detected lcore 14 as core 17 on socket 0 00:04:13.822 EAL: Detected lcore 15 as core 18 on socket 0 00:04:13.822 EAL: Detected lcore 16 as core 19 on socket 0 00:04:13.823 EAL: Detected lcore 17 as core 20 on socket 0 00:04:13.823 EAL: Detected lcore 18 as core 21 on socket 0 00:04:13.823 EAL: Detected lcore 19 as core 25 on socket 0 00:04:13.823 EAL: Detected lcore 20 as core 26 on socket 0 00:04:13.823 EAL: Detected lcore 21 as core 27 on socket 0 00:04:13.823 EAL: Detected lcore 22 as core 28 on socket 0 00:04:13.823 EAL: Detected lcore 23 as core 29 on socket 0 00:04:13.823 EAL: Detected lcore 24 as core 0 on socket 1 00:04:13.823 EAL: Detected lcore 25 as core 1 on socket 1 00:04:13.823 EAL: Detected lcore 26 as core 2 on socket 1 00:04:13.823 EAL: Detected lcore 27 as core 3 on socket 1 00:04:13.823 EAL: Detected lcore 28 as core 4 on socket 1 00:04:13.823 EAL: Detected lcore 29 as core 5 on socket 1 00:04:13.823 EAL: Detected lcore 30 as core 6 on socket 1 00:04:13.823 EAL: Detected lcore 31 as core 9 on socket 1 00:04:13.823 EAL: Detected lcore 32 as core 10 on socket 1 00:04:13.823 EAL: Detected lcore 33 as core 11 on socket 1 00:04:13.823 EAL: Detected lcore 34 as core 12 on socket 1 00:04:13.823 EAL: Detected lcore 35 as core 13 on socket 1 00:04:13.823 EAL: Detected lcore 36 as core 16 on socket 1 00:04:13.823 EAL: Detected lcore 37 as core 17 on socket 1 00:04:13.823 EAL: Detected lcore 38 as core 18 on socket 1 00:04:13.823 EAL: Detected lcore 39 as core 19 on socket 1 00:04:13.823 EAL: Detected lcore 40 as core 20 on socket 1 00:04:13.823 EAL: Detected lcore 41 as core 21 on socket 1 00:04:13.823 EAL: Detected lcore 42 as core 24 on socket 1 00:04:13.823 EAL: Detected lcore 43 as core 25 on socket 1 00:04:13.823 EAL: Detected lcore 44 as core 
26 on socket 1 00:04:13.823 EAL: Detected lcore 45 as core 27 on socket 1 00:04:13.823 EAL: Detected lcore 46 as core 28 on socket 1 00:04:13.823 EAL: Detected lcore 47 as core 29 on socket 1 00:04:13.823 EAL: Detected lcore 48 as core 0 on socket 0 00:04:13.823 EAL: Detected lcore 49 as core 1 on socket 0 00:04:13.823 EAL: Detected lcore 50 as core 2 on socket 0 00:04:13.823 EAL: Detected lcore 51 as core 3 on socket 0 00:04:13.823 EAL: Detected lcore 52 as core 4 on socket 0 00:04:13.823 EAL: Detected lcore 53 as core 5 on socket 0 00:04:13.823 EAL: Detected lcore 54 as core 6 on socket 0 00:04:13.823 EAL: Detected lcore 55 as core 8 on socket 0 00:04:13.823 EAL: Detected lcore 56 as core 9 on socket 0 00:04:13.823 EAL: Detected lcore 57 as core 10 on socket 0 00:04:13.823 EAL: Detected lcore 58 as core 11 on socket 0 00:04:13.823 EAL: Detected lcore 59 as core 12 on socket 0 00:04:13.823 EAL: Detected lcore 60 as core 13 on socket 0 00:04:13.823 EAL: Detected lcore 61 as core 16 on socket 0 00:04:13.823 EAL: Detected lcore 62 as core 17 on socket 0 00:04:13.823 EAL: Detected lcore 63 as core 18 on socket 0 00:04:13.823 EAL: Detected lcore 64 as core 19 on socket 0 00:04:13.823 EAL: Detected lcore 65 as core 20 on socket 0 00:04:13.823 EAL: Detected lcore 66 as core 21 on socket 0 00:04:13.823 EAL: Detected lcore 67 as core 25 on socket 0 00:04:13.823 EAL: Detected lcore 68 as core 26 on socket 0 00:04:13.823 EAL: Detected lcore 69 as core 27 on socket 0 00:04:13.823 EAL: Detected lcore 70 as core 28 on socket 0 00:04:13.823 EAL: Detected lcore 71 as core 29 on socket 0 00:04:13.823 EAL: Detected lcore 72 as core 0 on socket 1 00:04:13.823 EAL: Detected lcore 73 as core 1 on socket 1 00:04:13.823 EAL: Detected lcore 74 as core 2 on socket 1 00:04:13.823 EAL: Detected lcore 75 as core 3 on socket 1 00:04:13.823 EAL: Detected lcore 76 as core 4 on socket 1 00:04:13.823 EAL: Detected lcore 77 as core 5 on socket 1 00:04:13.823 EAL: Detected lcore 78 as core 6 on socket 1 00:04:13.823 EAL: Detected lcore 79 as core 9 on socket 1 00:04:13.823 EAL: Detected lcore 80 as core 10 on socket 1 00:04:13.823 EAL: Detected lcore 81 as core 11 on socket 1 00:04:13.823 EAL: Detected lcore 82 as core 12 on socket 1 00:04:13.823 EAL: Detected lcore 83 as core 13 on socket 1 00:04:13.823 EAL: Detected lcore 84 as core 16 on socket 1 00:04:13.823 EAL: Detected lcore 85 as core 17 on socket 1 00:04:13.823 EAL: Detected lcore 86 as core 18 on socket 1 00:04:13.823 EAL: Detected lcore 87 as core 19 on socket 1 00:04:13.823 EAL: Detected lcore 88 as core 20 on socket 1 00:04:13.823 EAL: Detected lcore 89 as core 21 on socket 1 00:04:13.823 EAL: Detected lcore 90 as core 24 on socket 1 00:04:13.823 EAL: Detected lcore 91 as core 25 on socket 1 00:04:13.823 EAL: Detected lcore 92 as core 26 on socket 1 00:04:13.823 EAL: Detected lcore 93 as core 27 on socket 1 00:04:13.823 EAL: Detected lcore 94 as core 28 on socket 1 00:04:13.823 EAL: Detected lcore 95 as core 29 on socket 1 00:04:13.823 EAL: Maximum logical cores by configuration: 128 00:04:13.823 EAL: Detected CPU lcores: 96 00:04:13.823 EAL: Detected NUMA nodes: 2 00:04:13.823 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:04:13.823 EAL: Detected shared linkage of DPDK 00:04:13.823 EAL: No shared files mode enabled, IPC will be disabled 00:04:13.823 EAL: Bus pci wants IOVA as 'DC' 00:04:13.823 EAL: Buses did not request a specific IOVA mode. 00:04:13.823 EAL: IOMMU is available, selecting IOVA as VA mode. 
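The lcore map above comes from the CPU topology the kernel exports; each "Detected lcore N as core M on socket S" line pairs a logical CPU with its physical core and package. A rough sysfs equivalent, outside DPDK (a sketch, not EAL's actual probe code):

    # Rebuild the same lcore/core/socket table from sysfs.
    for cpu in /sys/devices/system/cpu/cpu[0-9]*; do
      lcore=${cpu##*cpu}
      core=$(<"$cpu/topology/core_id")
      socket=$(<"$cpu/topology/physical_package_id")
      echo "Detected lcore $lcore as core $core on socket $socket"
    done

The two-socket layout behind these 96 lcores is also what drives the per-node hugepage and memseg handling in the lines that follow.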
00:04:13.823 EAL: Selected IOVA mode 'VA' 00:04:13.823 EAL: No free 2048 kB hugepages reported on node 1 00:04:13.823 EAL: Probing VFIO support... 00:04:13.823 EAL: IOMMU type 1 (Type 1) is supported 00:04:13.823 EAL: IOMMU type 7 (sPAPR) is not supported 00:04:13.823 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:04:13.823 EAL: VFIO support initialized 00:04:13.823 EAL: Ask a virtual area of 0x2e000 bytes 00:04:13.823 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:13.823 EAL: Setting up physically contiguous memory... 00:04:13.823 EAL: Setting maximum number of open files to 524288 00:04:13.823 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:13.823 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:04:13.823 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:13.823 EAL: Ask a virtual area of 0x61000 bytes 00:04:13.823 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:13.823 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:13.823 EAL: Ask a virtual area of 0x400000000 bytes 00:04:13.823 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:13.823 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:13.823 EAL: Ask a virtual area of 0x61000 bytes 00:04:13.823 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:13.823 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:13.823 EAL: Ask a virtual area of 0x400000000 bytes 00:04:13.823 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:13.823 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:13.823 EAL: Ask a virtual area of 0x61000 bytes 00:04:13.823 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:13.823 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:13.823 EAL: Ask a virtual area of 0x400000000 bytes 00:04:13.823 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:13.823 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:13.823 EAL: Ask a virtual area of 0x61000 bytes 00:04:13.823 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:13.823 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:13.823 EAL: Ask a virtual area of 0x400000000 bytes 00:04:13.823 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:13.823 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:13.823 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:04:13.823 EAL: Ask a virtual area of 0x61000 bytes 00:04:13.823 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:04:13.823 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:13.823 EAL: Ask a virtual area of 0x400000000 bytes 00:04:13.823 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:04:13.823 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:04:13.823 EAL: Ask a virtual area of 0x61000 bytes 00:04:13.823 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:04:13.823 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:13.823 EAL: Ask a virtual area of 0x400000000 bytes 00:04:13.823 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:04:13.823 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:04:13.823 EAL: Ask a virtual area of 0x61000 bytes 00:04:13.823 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:04:13.823 EAL: Memseg list 
allocated at socket 1, page size 0x800kB 00:04:13.823 EAL: Ask a virtual area of 0x400000000 bytes 00:04:13.823 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:04:13.823 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:04:13.823 EAL: Ask a virtual area of 0x61000 bytes 00:04:13.823 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:04:13.823 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:13.823 EAL: Ask a virtual area of 0x400000000 bytes 00:04:13.823 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000) 00:04:13.823 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:04:13.823 EAL: Hugepages will be freed exactly as allocated. 00:04:13.823 EAL: No shared files mode enabled, IPC is disabled 00:04:13.823 EAL: No shared files mode enabled, IPC is disabled 00:04:13.823 EAL: TSC frequency is ~2300000 KHz 00:04:13.823 EAL: Main lcore 0 is ready (tid=7f8ad2f44a00;cpuset=[0]) 00:04:13.823 EAL: Trying to obtain current memory policy. 00:04:13.823 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:13.823 EAL: Restoring previous memory policy: 0 00:04:13.823 EAL: request: mp_malloc_sync 00:04:13.823 EAL: No shared files mode enabled, IPC is disabled 00:04:13.823 EAL: Heap on socket 0 was expanded by 2MB 00:04:13.823 EAL: No shared files mode enabled, IPC is disabled 00:04:13.824 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:13.824 EAL: Mem event callback 'spdk:(nil)' registered 00:04:13.824 00:04:13.824 00:04:13.824 CUnit - A unit testing framework for C - Version 2.1-3 00:04:13.824 http://cunit.sourceforge.net/ 00:04:13.824 00:04:13.824 00:04:13.824 Suite: components_suite 00:04:13.824 Test: vtophys_malloc_test ...passed 00:04:13.824 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:13.824 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:13.824 EAL: Restoring previous memory policy: 4 00:04:13.824 EAL: Calling mem event callback 'spdk:(nil)' 00:04:13.824 EAL: request: mp_malloc_sync 00:04:13.824 EAL: No shared files mode enabled, IPC is disabled 00:04:13.824 EAL: Heap on socket 0 was expanded by 4MB 00:04:13.824 EAL: Calling mem event callback 'spdk:(nil)' 00:04:13.824 EAL: request: mp_malloc_sync 00:04:13.824 EAL: No shared files mode enabled, IPC is disabled 00:04:13.824 EAL: Heap on socket 0 was shrunk by 4MB 00:04:13.824 EAL: Trying to obtain current memory policy. 00:04:13.824 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:13.824 EAL: Restoring previous memory policy: 4 00:04:13.824 EAL: Calling mem event callback 'spdk:(nil)' 00:04:13.824 EAL: request: mp_malloc_sync 00:04:13.824 EAL: No shared files mode enabled, IPC is disabled 00:04:13.824 EAL: Heap on socket 0 was expanded by 6MB 00:04:13.824 EAL: Calling mem event callback 'spdk:(nil)' 00:04:13.824 EAL: request: mp_malloc_sync 00:04:13.824 EAL: No shared files mode enabled, IPC is disabled 00:04:13.824 EAL: Heap on socket 0 was shrunk by 6MB 00:04:13.824 EAL: Trying to obtain current memory policy. 
00:04:13.824 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:13.824 EAL: Restoring previous memory policy: 4 00:04:13.824 EAL: Calling mem event callback 'spdk:(nil)' 00:04:13.824 EAL: request: mp_malloc_sync 00:04:13.824 EAL: No shared files mode enabled, IPC is disabled 00:04:13.824 EAL: Heap on socket 0 was expanded by 10MB 00:04:13.824 EAL: Calling mem event callback 'spdk:(nil)' 00:04:13.824 EAL: request: mp_malloc_sync 00:04:13.824 EAL: No shared files mode enabled, IPC is disabled 00:04:13.824 EAL: Heap on socket 0 was shrunk by 10MB 00:04:13.824 EAL: Trying to obtain current memory policy. 00:04:13.824 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:13.824 EAL: Restoring previous memory policy: 4 00:04:13.824 EAL: Calling mem event callback 'spdk:(nil)' 00:04:13.824 EAL: request: mp_malloc_sync 00:04:13.824 EAL: No shared files mode enabled, IPC is disabled 00:04:13.824 EAL: Heap on socket 0 was expanded by 18MB 00:04:13.824 EAL: Calling mem event callback 'spdk:(nil)' 00:04:13.824 EAL: request: mp_malloc_sync 00:04:13.824 EAL: No shared files mode enabled, IPC is disabled 00:04:13.824 EAL: Heap on socket 0 was shrunk by 18MB 00:04:13.824 EAL: Trying to obtain current memory policy. 00:04:13.824 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:13.824 EAL: Restoring previous memory policy: 4 00:04:13.824 EAL: Calling mem event callback 'spdk:(nil)' 00:04:13.824 EAL: request: mp_malloc_sync 00:04:13.824 EAL: No shared files mode enabled, IPC is disabled 00:04:13.824 EAL: Heap on socket 0 was expanded by 34MB 00:04:14.084 EAL: Calling mem event callback 'spdk:(nil)' 00:04:14.084 EAL: request: mp_malloc_sync 00:04:14.084 EAL: No shared files mode enabled, IPC is disabled 00:04:14.084 EAL: Heap on socket 0 was shrunk by 34MB 00:04:14.084 EAL: Trying to obtain current memory policy. 00:04:14.084 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:14.084 EAL: Restoring previous memory policy: 4 00:04:14.084 EAL: Calling mem event callback 'spdk:(nil)' 00:04:14.084 EAL: request: mp_malloc_sync 00:04:14.084 EAL: No shared files mode enabled, IPC is disabled 00:04:14.084 EAL: Heap on socket 0 was expanded by 66MB 00:04:14.084 EAL: Calling mem event callback 'spdk:(nil)' 00:04:14.084 EAL: request: mp_malloc_sync 00:04:14.084 EAL: No shared files mode enabled, IPC is disabled 00:04:14.084 EAL: Heap on socket 0 was shrunk by 66MB 00:04:14.084 EAL: Trying to obtain current memory policy. 00:04:14.084 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:14.084 EAL: Restoring previous memory policy: 4 00:04:14.084 EAL: Calling mem event callback 'spdk:(nil)' 00:04:14.084 EAL: request: mp_malloc_sync 00:04:14.084 EAL: No shared files mode enabled, IPC is disabled 00:04:14.084 EAL: Heap on socket 0 was expanded by 130MB 00:04:14.084 EAL: Calling mem event callback 'spdk:(nil)' 00:04:14.084 EAL: request: mp_malloc_sync 00:04:14.084 EAL: No shared files mode enabled, IPC is disabled 00:04:14.084 EAL: Heap on socket 0 was shrunk by 130MB 00:04:14.084 EAL: Trying to obtain current memory policy. 
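Each "expanded by"/"shrunk by" pair in this suite is EAL mapping and then releasing 2 MB hugepages as the test mallocs progressively larger buffers, with the registered 'spdk:(nil)' mem event callback notified on every change. The movement is visible from outside the test in hugepage accounting (a sketch; counts are machine-specific):

    # Watch free 2 MB hugepages change while the vtophys test runs.
    grep -E 'HugePages_(Total|Free)' /proc/meminfo
    cat /sys/kernel/mm/hugepages/hugepages-2048kB/free_hugepages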
00:04:14.084 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:14.084 EAL: Restoring previous memory policy: 4 00:04:14.084 EAL: Calling mem event callback 'spdk:(nil)' 00:04:14.084 EAL: request: mp_malloc_sync 00:04:14.084 EAL: No shared files mode enabled, IPC is disabled 00:04:14.084 EAL: Heap on socket 0 was expanded by 258MB 00:04:14.084 EAL: Calling mem event callback 'spdk:(nil)' 00:04:14.084 EAL: request: mp_malloc_sync 00:04:14.084 EAL: No shared files mode enabled, IPC is disabled 00:04:14.084 EAL: Heap on socket 0 was shrunk by 258MB 00:04:14.084 EAL: Trying to obtain current memory policy. 00:04:14.084 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:14.343 EAL: Restoring previous memory policy: 4 00:04:14.343 EAL: Calling mem event callback 'spdk:(nil)' 00:04:14.343 EAL: request: mp_malloc_sync 00:04:14.343 EAL: No shared files mode enabled, IPC is disabled 00:04:14.343 EAL: Heap on socket 0 was expanded by 514MB 00:04:14.343 EAL: Calling mem event callback 'spdk:(nil)' 00:04:14.343 EAL: request: mp_malloc_sync 00:04:14.343 EAL: No shared files mode enabled, IPC is disabled 00:04:14.343 EAL: Heap on socket 0 was shrunk by 514MB 00:04:14.343 EAL: Trying to obtain current memory policy. 00:04:14.343 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:14.601 EAL: Restoring previous memory policy: 4 00:04:14.601 EAL: Calling mem event callback 'spdk:(nil)' 00:04:14.601 EAL: request: mp_malloc_sync 00:04:14.601 EAL: No shared files mode enabled, IPC is disabled 00:04:14.601 EAL: Heap on socket 0 was expanded by 1026MB 00:04:14.859 EAL: Calling mem event callback 'spdk:(nil)' 00:04:14.859 EAL: request: mp_malloc_sync 00:04:14.859 EAL: No shared files mode enabled, IPC is disabled 00:04:14.859 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:14.859 passed 00:04:14.859 00:04:14.860 Run Summary: Type Total Ran Passed Failed Inactive 00:04:14.860 suites 1 1 n/a 0 0 00:04:14.860 tests 2 2 2 0 0 00:04:14.860 asserts 497 497 497 0 n/a 00:04:14.860 00:04:14.860 Elapsed time = 0.962 seconds 00:04:14.860 EAL: Calling mem event callback 'spdk:(nil)' 00:04:14.860 EAL: request: mp_malloc_sync 00:04:14.860 EAL: No shared files mode enabled, IPC is disabled 00:04:14.860 EAL: Heap on socket 0 was shrunk by 2MB 00:04:14.860 EAL: No shared files mode enabled, IPC is disabled 00:04:14.860 EAL: No shared files mode enabled, IPC is disabled 00:04:14.860 EAL: No shared files mode enabled, IPC is disabled 00:04:14.860 00:04:14.860 real 0m1.067s 00:04:14.860 user 0m0.625s 00:04:14.860 sys 0m0.419s 00:04:14.860 22:20:38 env.env_vtophys -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:14.860 22:20:38 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:04:14.860 ************************************ 00:04:14.860 END TEST env_vtophys 00:04:14.860 ************************************ 00:04:14.860 22:20:38 env -- common/autotest_common.sh@1142 -- # return 0 00:04:14.860 22:20:38 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:04:14.860 22:20:38 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:14.860 22:20:38 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:14.860 22:20:38 env -- common/autotest_common.sh@10 -- # set +x 00:04:14.860 ************************************ 00:04:14.860 START TEST env_pci 00:04:14.860 ************************************ 00:04:15.119 22:20:38 env.env_pci -- common/autotest_common.sh@1123 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:04:15.119 00:04:15.119 00:04:15.119 CUnit - A unit testing framework for C - Version 2.1-3 00:04:15.119 http://cunit.sourceforge.net/ 00:04:15.119 00:04:15.119 00:04:15.119 Suite: pci 00:04:15.119 Test: pci_hook ...[2024-07-15 22:20:38.847034] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 4012222 has claimed it 00:04:15.119 EAL: Cannot find device (10000:00:01.0) 00:04:15.119 EAL: Failed to attach device on primary process 00:04:15.119 passed 00:04:15.119 00:04:15.119 Run Summary: Type Total Ran Passed Failed Inactive 00:04:15.119 suites 1 1 n/a 0 0 00:04:15.119 tests 1 1 1 0 0 00:04:15.119 asserts 25 25 25 0 n/a 00:04:15.119 00:04:15.119 Elapsed time = 0.027 seconds 00:04:15.119 00:04:15.119 real 0m0.047s 00:04:15.119 user 0m0.009s 00:04:15.119 sys 0m0.037s 00:04:15.119 22:20:38 env.env_pci -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:15.119 22:20:38 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:04:15.119 ************************************ 00:04:15.119 END TEST env_pci 00:04:15.119 ************************************ 00:04:15.119 22:20:38 env -- common/autotest_common.sh@1142 -- # return 0 00:04:15.119 22:20:38 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:15.119 22:20:38 env -- env/env.sh@15 -- # uname 00:04:15.119 22:20:38 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:15.119 22:20:38 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:15.119 22:20:38 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:15.119 22:20:38 env -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:04:15.119 22:20:38 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:15.119 22:20:38 env -- common/autotest_common.sh@10 -- # set +x 00:04:15.119 ************************************ 00:04:15.119 START TEST env_dpdk_post_init 00:04:15.119 ************************************ 00:04:15.119 22:20:38 env.env_dpdk_post_init -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:15.119 EAL: Detected CPU lcores: 96 00:04:15.119 EAL: Detected NUMA nodes: 2 00:04:15.119 EAL: Detected shared linkage of DPDK 00:04:15.119 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:15.119 EAL: Selected IOVA mode 'VA' 00:04:15.119 EAL: No free 2048 kB hugepages reported on node 1 00:04:15.119 EAL: VFIO support initialized 00:04:15.119 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:15.119 EAL: Using IOMMU type 1 (Type 1) 00:04:15.119 EAL: Ignore mapping IO port bar(1) 00:04:15.119 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.0 (socket 0) 00:04:15.119 EAL: Ignore mapping IO port bar(1) 00:04:15.119 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.1 (socket 0) 00:04:15.119 EAL: Ignore mapping IO port bar(1) 00:04:15.119 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.2 (socket 0) 00:04:15.119 EAL: Ignore mapping IO port bar(1) 00:04:15.119 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.3 (socket 0) 00:04:15.379 EAL: Ignore mapping IO port bar(1) 00:04:15.379 EAL: Probe PCI driver: spdk_ioat (8086:2021) 
device: 0000:00:04.4 (socket 0) 00:04:15.379 EAL: Ignore mapping IO port bar(1) 00:04:15.379 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.5 (socket 0) 00:04:15.379 EAL: Ignore mapping IO port bar(1) 00:04:15.379 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.6 (socket 0) 00:04:15.379 EAL: Ignore mapping IO port bar(1) 00:04:15.379 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.7 (socket 0) 00:04:15.947 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:5e:00.0 (socket 0) 00:04:15.947 EAL: Ignore mapping IO port bar(1) 00:04:15.947 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.0 (socket 1) 00:04:15.947 EAL: Ignore mapping IO port bar(1) 00:04:15.947 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.1 (socket 1) 00:04:15.947 EAL: Ignore mapping IO port bar(1) 00:04:15.947 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.2 (socket 1) 00:04:15.947 EAL: Ignore mapping IO port bar(1) 00:04:15.947 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.3 (socket 1) 00:04:16.206 EAL: Ignore mapping IO port bar(1) 00:04:16.206 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.4 (socket 1) 00:04:16.206 EAL: Ignore mapping IO port bar(1) 00:04:16.206 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.5 (socket 1) 00:04:16.206 EAL: Ignore mapping IO port bar(1) 00:04:16.206 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.6 (socket 1) 00:04:16.206 EAL: Ignore mapping IO port bar(1) 00:04:16.206 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.7 (socket 1) 00:04:19.496 EAL: Releasing PCI mapped resource for 0000:5e:00.0 00:04:19.496 EAL: Calling pci_unmap_resource for 0000:5e:00.0 at 0x202001020000 00:04:19.496 Starting DPDK initialization... 00:04:19.496 Starting SPDK post initialization... 00:04:19.496 SPDK NVMe probe 00:04:19.496 Attaching to 0000:5e:00.0 00:04:19.496 Attached to 0000:5e:00.0 00:04:19.496 Cleaning up... 
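The probe pass above attaches spdk_nvme only to the one allowed NVMe BDF and spdk_ioat to the I/OAT channels, mirroring the earlier ioatdma -> vfio-pci rebinds. Which kernel driver currently owns a device can be confirmed through sysfs (a sketch using the BDF from this run):

    # Report the driver bound to the NVMe controller under test.
    bdf=0000:5e:00.0
    basename "$(readlink -f "/sys/bus/pci/devices/$bdf/driver")"    # vfio-pci here, nvme after setup.sh reset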
00:04:19.496 00:04:19.496 real 0m4.298s 00:04:19.496 user 0m3.264s 00:04:19.496 sys 0m0.104s 00:04:19.496 22:20:43 env.env_dpdk_post_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:19.496 22:20:43 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:04:19.496 ************************************ 00:04:19.496 END TEST env_dpdk_post_init 00:04:19.496 ************************************ 00:04:19.496 22:20:43 env -- common/autotest_common.sh@1142 -- # return 0 00:04:19.496 22:20:43 env -- env/env.sh@26 -- # uname 00:04:19.496 22:20:43 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:19.496 22:20:43 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:19.496 22:20:43 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:19.496 22:20:43 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:19.496 22:20:43 env -- common/autotest_common.sh@10 -- # set +x 00:04:19.496 ************************************ 00:04:19.496 START TEST env_mem_callbacks 00:04:19.496 ************************************ 00:04:19.496 22:20:43 env.env_mem_callbacks -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:19.496 EAL: Detected CPU lcores: 96 00:04:19.496 EAL: Detected NUMA nodes: 2 00:04:19.496 EAL: Detected shared linkage of DPDK 00:04:19.496 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:19.496 EAL: Selected IOVA mode 'VA' 00:04:19.496 EAL: No free 2048 kB hugepages reported on node 1 00:04:19.496 EAL: VFIO support initialized 00:04:19.496 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:19.496 00:04:19.497 00:04:19.497 CUnit - A unit testing framework for C - Version 2.1-3 00:04:19.497 http://cunit.sourceforge.net/ 00:04:19.497 00:04:19.497 00:04:19.497 Suite: memory 00:04:19.497 Test: test ... 
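The register/unregister lines that follow record memory-event notifications as the suite exercises SPDK's memory map: a registration fires when a malloc forces EAL to map new memory (in page-aligned chunks, hence sizes that differ from the malloc itself), and an unregistration fires when a free unmaps it. The suite can be re-run on its own with the binary env.sh invokes (a sketch; root is typically required for hugepage access):

    # Re-run just this CUnit suite and count its PASSED markers.
    rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    sudo "$rootdir/test/env/mem_callbacks/mem_callbacks" | grep -c PASSED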
00:04:19.497 register 0x200000200000 2097152 00:04:19.497 malloc 3145728 00:04:19.497 register 0x200000400000 4194304 00:04:19.497 buf 0x200000500000 len 3145728 PASSED 00:04:19.497 malloc 64 00:04:19.497 buf 0x2000004fff40 len 64 PASSED 00:04:19.497 malloc 4194304 00:04:19.497 register 0x200000800000 6291456 00:04:19.497 buf 0x200000a00000 len 4194304 PASSED 00:04:19.497 free 0x200000500000 3145728 00:04:19.497 free 0x2000004fff40 64 00:04:19.497 unregister 0x200000400000 4194304 PASSED 00:04:19.497 free 0x200000a00000 4194304 00:04:19.497 unregister 0x200000800000 6291456 PASSED 00:04:19.497 malloc 8388608 00:04:19.497 register 0x200000400000 10485760 00:04:19.497 buf 0x200000600000 len 8388608 PASSED 00:04:19.497 free 0x200000600000 8388608 00:04:19.497 unregister 0x200000400000 10485760 PASSED 00:04:19.497 passed 00:04:19.497 00:04:19.497 Run Summary: Type Total Ran Passed Failed Inactive 00:04:19.497 suites 1 1 n/a 0 0 00:04:19.497 tests 1 1 1 0 0 00:04:19.497 asserts 15 15 15 0 n/a 00:04:19.497 00:04:19.497 Elapsed time = 0.004 seconds 00:04:19.497 00:04:19.497 real 0m0.052s 00:04:19.497 user 0m0.019s 00:04:19.497 sys 0m0.033s 00:04:19.497 22:20:43 env.env_mem_callbacks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:19.497 22:20:43 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:04:19.497 ************************************ 00:04:19.497 END TEST env_mem_callbacks 00:04:19.497 ************************************ 00:04:19.497 22:20:43 env -- common/autotest_common.sh@1142 -- # return 0 00:04:19.497 00:04:19.497 real 0m5.985s 00:04:19.497 user 0m4.196s 00:04:19.497 sys 0m0.859s 00:04:19.497 22:20:43 env -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:19.497 22:20:43 env -- common/autotest_common.sh@10 -- # set +x 00:04:19.497 ************************************ 00:04:19.497 END TEST env 00:04:19.497 ************************************ 00:04:19.497 22:20:43 -- common/autotest_common.sh@1142 -- # return 0 00:04:19.497 22:20:43 -- spdk/autotest.sh@169 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:04:19.497 22:20:43 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:19.497 22:20:43 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:19.497 22:20:43 -- common/autotest_common.sh@10 -- # set +x 00:04:19.497 ************************************ 00:04:19.497 START TEST rpc 00:04:19.497 ************************************ 00:04:19.497 22:20:43 rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:04:19.807 * Looking for test storage... 00:04:19.807 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:19.807 22:20:43 rpc -- rpc/rpc.sh@65 -- # spdk_pid=4013134 00:04:19.807 22:20:43 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:19.807 22:20:43 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:04:19.807 22:20:43 rpc -- rpc/rpc.sh@67 -- # waitforlisten 4013134 00:04:19.807 22:20:43 rpc -- common/autotest_common.sh@829 -- # '[' -z 4013134 ']' 00:04:19.807 22:20:43 rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:19.807 22:20:43 rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:19.807 22:20:43 rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:04:19.807 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:19.807 22:20:43 rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:19.807 22:20:43 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:19.807 [2024-07-15 22:20:43.594637] Starting SPDK v24.09-pre git sha1 f8598a71f / DPDK 24.03.0 initialization... 00:04:19.807 [2024-07-15 22:20:43.594679] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4013134 ] 00:04:19.807 EAL: No free 2048 kB hugepages reported on node 1 00:04:19.807 [2024-07-15 22:20:43.647507] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:19.807 [2024-07-15 22:20:43.726814] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:19.807 [2024-07-15 22:20:43.726848] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 4013134' to capture a snapshot of events at runtime. 00:04:19.807 [2024-07-15 22:20:43.726856] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:19.807 [2024-07-15 22:20:43.726862] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:19.807 [2024-07-15 22:20:43.726867] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid4013134 for offline analysis/debug. 00:04:19.807 [2024-07-15 22:20:43.726892] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:20.744 22:20:44 rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:20.744 22:20:44 rpc -- common/autotest_common.sh@862 -- # return 0 00:04:20.744 22:20:44 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:20.744 22:20:44 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:20.744 22:20:44 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:20.744 22:20:44 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:20.744 22:20:44 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:20.744 22:20:44 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:20.744 22:20:44 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:20.744 ************************************ 00:04:20.744 START TEST rpc_integrity 00:04:20.744 ************************************ 00:04:20.744 22:20:44 rpc.rpc_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:04:20.744 22:20:44 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:20.744 22:20:44 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:20.744 22:20:44 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:20.744 22:20:44 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:20.744 22:20:44 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # 
bdevs='[]' 00:04:20.744 22:20:44 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:20.744 22:20:44 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:20.744 22:20:44 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:20.744 22:20:44 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:20.744 22:20:44 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:20.744 22:20:44 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:20.744 22:20:44 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:20.744 22:20:44 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:20.744 22:20:44 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:20.744 22:20:44 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:20.744 22:20:44 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:20.744 22:20:44 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:20.744 { 00:04:20.744 "name": "Malloc0", 00:04:20.744 "aliases": [ 00:04:20.744 "6c118bad-05fe-46ff-861a-ef90e687d017" 00:04:20.744 ], 00:04:20.744 "product_name": "Malloc disk", 00:04:20.744 "block_size": 512, 00:04:20.744 "num_blocks": 16384, 00:04:20.744 "uuid": "6c118bad-05fe-46ff-861a-ef90e687d017", 00:04:20.744 "assigned_rate_limits": { 00:04:20.744 "rw_ios_per_sec": 0, 00:04:20.744 "rw_mbytes_per_sec": 0, 00:04:20.744 "r_mbytes_per_sec": 0, 00:04:20.744 "w_mbytes_per_sec": 0 00:04:20.744 }, 00:04:20.744 "claimed": false, 00:04:20.744 "zoned": false, 00:04:20.744 "supported_io_types": { 00:04:20.744 "read": true, 00:04:20.744 "write": true, 00:04:20.744 "unmap": true, 00:04:20.744 "flush": true, 00:04:20.744 "reset": true, 00:04:20.744 "nvme_admin": false, 00:04:20.744 "nvme_io": false, 00:04:20.744 "nvme_io_md": false, 00:04:20.744 "write_zeroes": true, 00:04:20.744 "zcopy": true, 00:04:20.744 "get_zone_info": false, 00:04:20.744 "zone_management": false, 00:04:20.744 "zone_append": false, 00:04:20.744 "compare": false, 00:04:20.744 "compare_and_write": false, 00:04:20.744 "abort": true, 00:04:20.744 "seek_hole": false, 00:04:20.744 "seek_data": false, 00:04:20.744 "copy": true, 00:04:20.744 "nvme_iov_md": false 00:04:20.744 }, 00:04:20.744 "memory_domains": [ 00:04:20.744 { 00:04:20.744 "dma_device_id": "system", 00:04:20.744 "dma_device_type": 1 00:04:20.744 }, 00:04:20.744 { 00:04:20.744 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:20.744 "dma_device_type": 2 00:04:20.744 } 00:04:20.744 ], 00:04:20.744 "driver_specific": {} 00:04:20.744 } 00:04:20.744 ]' 00:04:20.744 22:20:44 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:20.744 22:20:44 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:20.744 22:20:44 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:20.744 22:20:44 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:20.744 22:20:44 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:20.744 [2024-07-15 22:20:44.531851] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:20.744 [2024-07-15 22:20:44.531880] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:20.744 [2024-07-15 22:20:44.531891] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1b322d0 00:04:20.744 [2024-07-15 22:20:44.531897] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:20.744 
[2024-07-15 22:20:44.532939] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:20.744 [2024-07-15 22:20:44.532959] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:20.744 Passthru0 00:04:20.744 22:20:44 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:20.744 22:20:44 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:20.744 22:20:44 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:20.744 22:20:44 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:20.744 22:20:44 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:20.744 22:20:44 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:20.744 { 00:04:20.744 "name": "Malloc0", 00:04:20.744 "aliases": [ 00:04:20.744 "6c118bad-05fe-46ff-861a-ef90e687d017" 00:04:20.744 ], 00:04:20.744 "product_name": "Malloc disk", 00:04:20.744 "block_size": 512, 00:04:20.744 "num_blocks": 16384, 00:04:20.744 "uuid": "6c118bad-05fe-46ff-861a-ef90e687d017", 00:04:20.744 "assigned_rate_limits": { 00:04:20.744 "rw_ios_per_sec": 0, 00:04:20.744 "rw_mbytes_per_sec": 0, 00:04:20.744 "r_mbytes_per_sec": 0, 00:04:20.744 "w_mbytes_per_sec": 0 00:04:20.744 }, 00:04:20.744 "claimed": true, 00:04:20.744 "claim_type": "exclusive_write", 00:04:20.744 "zoned": false, 00:04:20.744 "supported_io_types": { 00:04:20.744 "read": true, 00:04:20.744 "write": true, 00:04:20.744 "unmap": true, 00:04:20.744 "flush": true, 00:04:20.744 "reset": true, 00:04:20.744 "nvme_admin": false, 00:04:20.744 "nvme_io": false, 00:04:20.744 "nvme_io_md": false, 00:04:20.744 "write_zeroes": true, 00:04:20.744 "zcopy": true, 00:04:20.744 "get_zone_info": false, 00:04:20.744 "zone_management": false, 00:04:20.744 "zone_append": false, 00:04:20.744 "compare": false, 00:04:20.744 "compare_and_write": false, 00:04:20.744 "abort": true, 00:04:20.744 "seek_hole": false, 00:04:20.744 "seek_data": false, 00:04:20.744 "copy": true, 00:04:20.744 "nvme_iov_md": false 00:04:20.744 }, 00:04:20.744 "memory_domains": [ 00:04:20.744 { 00:04:20.744 "dma_device_id": "system", 00:04:20.744 "dma_device_type": 1 00:04:20.744 }, 00:04:20.744 { 00:04:20.744 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:20.744 "dma_device_type": 2 00:04:20.744 } 00:04:20.744 ], 00:04:20.744 "driver_specific": {} 00:04:20.744 }, 00:04:20.744 { 00:04:20.744 "name": "Passthru0", 00:04:20.744 "aliases": [ 00:04:20.744 "a6850bdc-e56e-5adf-bf73-5ae3fdc43a90" 00:04:20.744 ], 00:04:20.744 "product_name": "passthru", 00:04:20.744 "block_size": 512, 00:04:20.744 "num_blocks": 16384, 00:04:20.744 "uuid": "a6850bdc-e56e-5adf-bf73-5ae3fdc43a90", 00:04:20.744 "assigned_rate_limits": { 00:04:20.744 "rw_ios_per_sec": 0, 00:04:20.744 "rw_mbytes_per_sec": 0, 00:04:20.744 "r_mbytes_per_sec": 0, 00:04:20.744 "w_mbytes_per_sec": 0 00:04:20.744 }, 00:04:20.744 "claimed": false, 00:04:20.744 "zoned": false, 00:04:20.744 "supported_io_types": { 00:04:20.744 "read": true, 00:04:20.744 "write": true, 00:04:20.744 "unmap": true, 00:04:20.744 "flush": true, 00:04:20.744 "reset": true, 00:04:20.744 "nvme_admin": false, 00:04:20.744 "nvme_io": false, 00:04:20.744 "nvme_io_md": false, 00:04:20.744 "write_zeroes": true, 00:04:20.744 "zcopy": true, 00:04:20.744 "get_zone_info": false, 00:04:20.744 "zone_management": false, 00:04:20.744 "zone_append": false, 00:04:20.744 "compare": false, 00:04:20.744 "compare_and_write": false, 00:04:20.744 "abort": true, 00:04:20.744 "seek_hole": false, 
00:04:20.744 "seek_data": false, 00:04:20.744 "copy": true, 00:04:20.744 "nvme_iov_md": false 00:04:20.744 }, 00:04:20.744 "memory_domains": [ 00:04:20.744 { 00:04:20.744 "dma_device_id": "system", 00:04:20.744 "dma_device_type": 1 00:04:20.744 }, 00:04:20.744 { 00:04:20.744 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:20.744 "dma_device_type": 2 00:04:20.744 } 00:04:20.744 ], 00:04:20.744 "driver_specific": { 00:04:20.744 "passthru": { 00:04:20.744 "name": "Passthru0", 00:04:20.744 "base_bdev_name": "Malloc0" 00:04:20.744 } 00:04:20.744 } 00:04:20.744 } 00:04:20.744 ]' 00:04:20.745 22:20:44 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:20.745 22:20:44 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:20.745 22:20:44 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:20.745 22:20:44 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:20.745 22:20:44 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:20.745 22:20:44 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:20.745 22:20:44 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:20.745 22:20:44 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:20.745 22:20:44 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:20.745 22:20:44 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:20.745 22:20:44 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:20.745 22:20:44 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:20.745 22:20:44 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:20.745 22:20:44 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:20.745 22:20:44 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:20.745 22:20:44 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:20.745 22:20:44 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:20.745 00:04:20.745 real 0m0.254s 00:04:20.745 user 0m0.171s 00:04:20.745 sys 0m0.030s 00:04:20.745 22:20:44 rpc.rpc_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:20.745 22:20:44 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:20.745 ************************************ 00:04:20.745 END TEST rpc_integrity 00:04:20.745 ************************************ 00:04:20.745 22:20:44 rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:20.745 22:20:44 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:20.745 22:20:44 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:20.745 22:20:44 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:20.745 22:20:44 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:21.004 ************************************ 00:04:21.004 START TEST rpc_plugins 00:04:21.004 ************************************ 00:04:21.004 22:20:44 rpc.rpc_plugins -- common/autotest_common.sh@1123 -- # rpc_plugins 00:04:21.004 22:20:44 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:21.004 22:20:44 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:21.004 22:20:44 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:21.004 22:20:44 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:21.004 22:20:44 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:21.004 22:20:44 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # 
rpc_cmd bdev_get_bdevs 00:04:21.004 22:20:44 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:21.004 22:20:44 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:21.004 22:20:44 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:21.004 22:20:44 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:21.004 { 00:04:21.004 "name": "Malloc1", 00:04:21.004 "aliases": [ 00:04:21.004 "e760f2ed-0aa2-4fad-851b-df063bea155f" 00:04:21.004 ], 00:04:21.004 "product_name": "Malloc disk", 00:04:21.004 "block_size": 4096, 00:04:21.004 "num_blocks": 256, 00:04:21.004 "uuid": "e760f2ed-0aa2-4fad-851b-df063bea155f", 00:04:21.004 "assigned_rate_limits": { 00:04:21.004 "rw_ios_per_sec": 0, 00:04:21.004 "rw_mbytes_per_sec": 0, 00:04:21.004 "r_mbytes_per_sec": 0, 00:04:21.004 "w_mbytes_per_sec": 0 00:04:21.004 }, 00:04:21.004 "claimed": false, 00:04:21.004 "zoned": false, 00:04:21.004 "supported_io_types": { 00:04:21.004 "read": true, 00:04:21.004 "write": true, 00:04:21.004 "unmap": true, 00:04:21.004 "flush": true, 00:04:21.004 "reset": true, 00:04:21.004 "nvme_admin": false, 00:04:21.004 "nvme_io": false, 00:04:21.004 "nvme_io_md": false, 00:04:21.004 "write_zeroes": true, 00:04:21.004 "zcopy": true, 00:04:21.004 "get_zone_info": false, 00:04:21.004 "zone_management": false, 00:04:21.004 "zone_append": false, 00:04:21.004 "compare": false, 00:04:21.004 "compare_and_write": false, 00:04:21.004 "abort": true, 00:04:21.004 "seek_hole": false, 00:04:21.004 "seek_data": false, 00:04:21.004 "copy": true, 00:04:21.004 "nvme_iov_md": false 00:04:21.004 }, 00:04:21.004 "memory_domains": [ 00:04:21.004 { 00:04:21.004 "dma_device_id": "system", 00:04:21.004 "dma_device_type": 1 00:04:21.004 }, 00:04:21.004 { 00:04:21.004 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:21.004 "dma_device_type": 2 00:04:21.004 } 00:04:21.004 ], 00:04:21.004 "driver_specific": {} 00:04:21.004 } 00:04:21.004 ]' 00:04:21.004 22:20:44 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:04:21.004 22:20:44 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:21.004 22:20:44 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:21.004 22:20:44 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:21.004 22:20:44 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:21.004 22:20:44 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:21.004 22:20:44 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:21.004 22:20:44 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:21.004 22:20:44 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:21.004 22:20:44 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:21.004 22:20:44 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:21.004 22:20:44 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:04:21.004 22:20:44 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:21.004 00:04:21.004 real 0m0.136s 00:04:21.004 user 0m0.085s 00:04:21.004 sys 0m0.019s 00:04:21.004 22:20:44 rpc.rpc_plugins -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:21.004 22:20:44 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:21.004 ************************************ 00:04:21.004 END TEST rpc_plugins 00:04:21.004 ************************************ 00:04:21.004 22:20:44 rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:21.004 22:20:44 rpc -- rpc/rpc.sh@75 -- # 
run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:21.004 22:20:44 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:21.004 22:20:44 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:21.004 22:20:44 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:21.004 ************************************ 00:04:21.004 START TEST rpc_trace_cmd_test 00:04:21.004 ************************************ 00:04:21.004 22:20:44 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1123 -- # rpc_trace_cmd_test 00:04:21.004 22:20:44 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:04:21.004 22:20:44 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:21.004 22:20:44 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:21.004 22:20:44 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:21.004 22:20:44 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:21.004 22:20:44 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:04:21.005 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid4013134", 00:04:21.005 "tpoint_group_mask": "0x8", 00:04:21.005 "iscsi_conn": { 00:04:21.005 "mask": "0x2", 00:04:21.005 "tpoint_mask": "0x0" 00:04:21.005 }, 00:04:21.005 "scsi": { 00:04:21.005 "mask": "0x4", 00:04:21.005 "tpoint_mask": "0x0" 00:04:21.005 }, 00:04:21.005 "bdev": { 00:04:21.005 "mask": "0x8", 00:04:21.005 "tpoint_mask": "0xffffffffffffffff" 00:04:21.005 }, 00:04:21.005 "nvmf_rdma": { 00:04:21.005 "mask": "0x10", 00:04:21.005 "tpoint_mask": "0x0" 00:04:21.005 }, 00:04:21.005 "nvmf_tcp": { 00:04:21.005 "mask": "0x20", 00:04:21.005 "tpoint_mask": "0x0" 00:04:21.005 }, 00:04:21.005 "ftl": { 00:04:21.005 "mask": "0x40", 00:04:21.005 "tpoint_mask": "0x0" 00:04:21.005 }, 00:04:21.005 "blobfs": { 00:04:21.005 "mask": "0x80", 00:04:21.005 "tpoint_mask": "0x0" 00:04:21.005 }, 00:04:21.005 "dsa": { 00:04:21.005 "mask": "0x200", 00:04:21.005 "tpoint_mask": "0x0" 00:04:21.005 }, 00:04:21.005 "thread": { 00:04:21.005 "mask": "0x400", 00:04:21.005 "tpoint_mask": "0x0" 00:04:21.005 }, 00:04:21.005 "nvme_pcie": { 00:04:21.005 "mask": "0x800", 00:04:21.005 "tpoint_mask": "0x0" 00:04:21.005 }, 00:04:21.005 "iaa": { 00:04:21.005 "mask": "0x1000", 00:04:21.005 "tpoint_mask": "0x0" 00:04:21.005 }, 00:04:21.005 "nvme_tcp": { 00:04:21.005 "mask": "0x2000", 00:04:21.005 "tpoint_mask": "0x0" 00:04:21.005 }, 00:04:21.005 "bdev_nvme": { 00:04:21.005 "mask": "0x4000", 00:04:21.005 "tpoint_mask": "0x0" 00:04:21.005 }, 00:04:21.005 "sock": { 00:04:21.005 "mask": "0x8000", 00:04:21.005 "tpoint_mask": "0x0" 00:04:21.005 } 00:04:21.005 }' 00:04:21.005 22:20:44 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:04:21.279 22:20:44 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:04:21.279 22:20:44 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:21.279 22:20:45 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:21.279 22:20:45 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:21.279 22:20:45 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:21.279 22:20:45 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:21.279 22:20:45 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:21.279 22:20:45 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:21.279 22:20:45 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 
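The rpc_trace_cmd_test block above asserts that a target launched with '-e bdev' reports a tpoint_shm_path and a fully-enabled bdev tracepoint mask. A minimal by-hand sketch of the same checks, assuming rpc.py dispatches the trace_get_info method by name just as rpc_cmd does in this log (paths relative to the SPDK build tree are illustrative):

  # query trace state from the running spdk_tgt that was started with '-e bdev'
  ./scripts/rpc.py trace_get_info | jq -r '.bdev.tpoint_mask'   # the test expects a value != 0x0
  ./scripts/rpc.py trace_get_info | jq -r '.tpoint_shm_path'    # e.g. /dev/shm/spdk_tgt_trace.pid<PID>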
00:04:21.279 00:04:21.279 real 0m0.223s 00:04:21.279 user 0m0.187s 00:04:21.279 sys 0m0.027s 00:04:21.279 22:20:45 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:21.279 22:20:45 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:21.279 ************************************ 00:04:21.279 END TEST rpc_trace_cmd_test 00:04:21.279 ************************************ 00:04:21.279 22:20:45 rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:21.279 22:20:45 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:21.279 22:20:45 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:21.279 22:20:45 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:21.279 22:20:45 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:21.279 22:20:45 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:21.279 22:20:45 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:21.279 ************************************ 00:04:21.280 START TEST rpc_daemon_integrity 00:04:21.280 ************************************ 00:04:21.280 22:20:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:04:21.280 22:20:45 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:21.280 22:20:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:21.280 22:20:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:21.280 22:20:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:21.280 22:20:45 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:21.280 22:20:45 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:21.539 22:20:45 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:21.539 22:20:45 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:21.539 22:20:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:21.539 22:20:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:21.539 22:20:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:21.539 22:20:45 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:04:21.539 22:20:45 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:21.539 22:20:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:21.539 22:20:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:21.539 22:20:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:21.539 22:20:45 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:21.539 { 00:04:21.539 "name": "Malloc2", 00:04:21.539 "aliases": [ 00:04:21.539 "3a4c58f3-3209-42b9-be7f-2a25e06e7aa5" 00:04:21.539 ], 00:04:21.539 "product_name": "Malloc disk", 00:04:21.539 "block_size": 512, 00:04:21.539 "num_blocks": 16384, 00:04:21.539 "uuid": "3a4c58f3-3209-42b9-be7f-2a25e06e7aa5", 00:04:21.539 "assigned_rate_limits": { 00:04:21.539 "rw_ios_per_sec": 0, 00:04:21.539 "rw_mbytes_per_sec": 0, 00:04:21.539 "r_mbytes_per_sec": 0, 00:04:21.539 "w_mbytes_per_sec": 0 00:04:21.539 }, 00:04:21.539 "claimed": false, 00:04:21.539 "zoned": false, 00:04:21.539 "supported_io_types": { 00:04:21.539 "read": true, 00:04:21.539 "write": true, 00:04:21.539 "unmap": true, 00:04:21.539 "flush": true, 00:04:21.539 "reset": true, 00:04:21.539 "nvme_admin": false, 00:04:21.539 "nvme_io": false, 
00:04:21.539 "nvme_io_md": false, 00:04:21.539 "write_zeroes": true, 00:04:21.539 "zcopy": true, 00:04:21.539 "get_zone_info": false, 00:04:21.539 "zone_management": false, 00:04:21.539 "zone_append": false, 00:04:21.539 "compare": false, 00:04:21.539 "compare_and_write": false, 00:04:21.539 "abort": true, 00:04:21.539 "seek_hole": false, 00:04:21.539 "seek_data": false, 00:04:21.539 "copy": true, 00:04:21.539 "nvme_iov_md": false 00:04:21.539 }, 00:04:21.539 "memory_domains": [ 00:04:21.539 { 00:04:21.539 "dma_device_id": "system", 00:04:21.539 "dma_device_type": 1 00:04:21.539 }, 00:04:21.539 { 00:04:21.539 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:21.539 "dma_device_type": 2 00:04:21.539 } 00:04:21.539 ], 00:04:21.539 "driver_specific": {} 00:04:21.539 } 00:04:21.539 ]' 00:04:21.539 22:20:45 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:21.539 22:20:45 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:21.539 22:20:45 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:21.539 22:20:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:21.539 22:20:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:21.539 [2024-07-15 22:20:45.342065] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:21.539 [2024-07-15 22:20:45.342092] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:21.539 [2024-07-15 22:20:45.342103] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1cc9ac0 00:04:21.539 [2024-07-15 22:20:45.342109] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:21.539 [2024-07-15 22:20:45.343077] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:21.539 [2024-07-15 22:20:45.343099] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:21.539 Passthru0 00:04:21.539 22:20:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:21.539 22:20:45 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:21.539 22:20:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:21.539 22:20:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:21.539 22:20:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:21.539 22:20:45 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:21.539 { 00:04:21.539 "name": "Malloc2", 00:04:21.539 "aliases": [ 00:04:21.539 "3a4c58f3-3209-42b9-be7f-2a25e06e7aa5" 00:04:21.539 ], 00:04:21.539 "product_name": "Malloc disk", 00:04:21.539 "block_size": 512, 00:04:21.539 "num_blocks": 16384, 00:04:21.539 "uuid": "3a4c58f3-3209-42b9-be7f-2a25e06e7aa5", 00:04:21.539 "assigned_rate_limits": { 00:04:21.539 "rw_ios_per_sec": 0, 00:04:21.539 "rw_mbytes_per_sec": 0, 00:04:21.539 "r_mbytes_per_sec": 0, 00:04:21.539 "w_mbytes_per_sec": 0 00:04:21.539 }, 00:04:21.539 "claimed": true, 00:04:21.539 "claim_type": "exclusive_write", 00:04:21.539 "zoned": false, 00:04:21.539 "supported_io_types": { 00:04:21.539 "read": true, 00:04:21.539 "write": true, 00:04:21.539 "unmap": true, 00:04:21.539 "flush": true, 00:04:21.539 "reset": true, 00:04:21.539 "nvme_admin": false, 00:04:21.539 "nvme_io": false, 00:04:21.539 "nvme_io_md": false, 00:04:21.539 "write_zeroes": true, 00:04:21.539 "zcopy": true, 00:04:21.539 "get_zone_info": 
false, 00:04:21.539 "zone_management": false, 00:04:21.539 "zone_append": false, 00:04:21.539 "compare": false, 00:04:21.539 "compare_and_write": false, 00:04:21.539 "abort": true, 00:04:21.539 "seek_hole": false, 00:04:21.539 "seek_data": false, 00:04:21.539 "copy": true, 00:04:21.539 "nvme_iov_md": false 00:04:21.539 }, 00:04:21.539 "memory_domains": [ 00:04:21.539 { 00:04:21.539 "dma_device_id": "system", 00:04:21.539 "dma_device_type": 1 00:04:21.539 }, 00:04:21.539 { 00:04:21.539 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:21.539 "dma_device_type": 2 00:04:21.539 } 00:04:21.539 ], 00:04:21.539 "driver_specific": {} 00:04:21.539 }, 00:04:21.539 { 00:04:21.539 "name": "Passthru0", 00:04:21.539 "aliases": [ 00:04:21.539 "0185e7e6-ebb6-5fdd-9247-83a6932c168d" 00:04:21.539 ], 00:04:21.539 "product_name": "passthru", 00:04:21.539 "block_size": 512, 00:04:21.539 "num_blocks": 16384, 00:04:21.539 "uuid": "0185e7e6-ebb6-5fdd-9247-83a6932c168d", 00:04:21.539 "assigned_rate_limits": { 00:04:21.539 "rw_ios_per_sec": 0, 00:04:21.539 "rw_mbytes_per_sec": 0, 00:04:21.539 "r_mbytes_per_sec": 0, 00:04:21.539 "w_mbytes_per_sec": 0 00:04:21.539 }, 00:04:21.539 "claimed": false, 00:04:21.539 "zoned": false, 00:04:21.539 "supported_io_types": { 00:04:21.539 "read": true, 00:04:21.539 "write": true, 00:04:21.539 "unmap": true, 00:04:21.539 "flush": true, 00:04:21.539 "reset": true, 00:04:21.539 "nvme_admin": false, 00:04:21.539 "nvme_io": false, 00:04:21.539 "nvme_io_md": false, 00:04:21.539 "write_zeroes": true, 00:04:21.539 "zcopy": true, 00:04:21.539 "get_zone_info": false, 00:04:21.539 "zone_management": false, 00:04:21.539 "zone_append": false, 00:04:21.539 "compare": false, 00:04:21.539 "compare_and_write": false, 00:04:21.539 "abort": true, 00:04:21.539 "seek_hole": false, 00:04:21.539 "seek_data": false, 00:04:21.539 "copy": true, 00:04:21.539 "nvme_iov_md": false 00:04:21.539 }, 00:04:21.539 "memory_domains": [ 00:04:21.539 { 00:04:21.539 "dma_device_id": "system", 00:04:21.539 "dma_device_type": 1 00:04:21.539 }, 00:04:21.539 { 00:04:21.539 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:21.539 "dma_device_type": 2 00:04:21.539 } 00:04:21.539 ], 00:04:21.539 "driver_specific": { 00:04:21.539 "passthru": { 00:04:21.539 "name": "Passthru0", 00:04:21.539 "base_bdev_name": "Malloc2" 00:04:21.539 } 00:04:21.539 } 00:04:21.539 } 00:04:21.539 ]' 00:04:21.539 22:20:45 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:21.539 22:20:45 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:21.540 22:20:45 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:21.540 22:20:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:21.540 22:20:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:21.540 22:20:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:21.540 22:20:45 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:21.540 22:20:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:21.540 22:20:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:21.540 22:20:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:21.540 22:20:45 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:21.540 22:20:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:21.540 22:20:45 
rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:21.540 22:20:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:21.540 22:20:45 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:21.540 22:20:45 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:21.540 22:20:45 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:21.540 00:04:21.540 real 0m0.253s 00:04:21.540 user 0m0.170s 00:04:21.540 sys 0m0.030s 00:04:21.540 22:20:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:21.540 22:20:45 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:21.540 ************************************ 00:04:21.540 END TEST rpc_daemon_integrity 00:04:21.540 ************************************ 00:04:21.540 22:20:45 rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:21.540 22:20:45 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:21.540 22:20:45 rpc -- rpc/rpc.sh@84 -- # killprocess 4013134 00:04:21.540 22:20:45 rpc -- common/autotest_common.sh@948 -- # '[' -z 4013134 ']' 00:04:21.540 22:20:45 rpc -- common/autotest_common.sh@952 -- # kill -0 4013134 00:04:21.540 22:20:45 rpc -- common/autotest_common.sh@953 -- # uname 00:04:21.540 22:20:45 rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:21.540 22:20:45 rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4013134 00:04:21.799 22:20:45 rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:21.799 22:20:45 rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:21.799 22:20:45 rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4013134' 00:04:21.799 killing process with pid 4013134 00:04:21.799 22:20:45 rpc -- common/autotest_common.sh@967 -- # kill 4013134 00:04:21.799 22:20:45 rpc -- common/autotest_common.sh@972 -- # wait 4013134 00:04:22.059 00:04:22.059 real 0m2.397s 00:04:22.059 user 0m3.100s 00:04:22.059 sys 0m0.647s 00:04:22.059 22:20:45 rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:22.059 22:20:45 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:22.059 ************************************ 00:04:22.059 END TEST rpc 00:04:22.059 ************************************ 00:04:22.059 22:20:45 -- common/autotest_common.sh@1142 -- # return 0 00:04:22.059 22:20:45 -- spdk/autotest.sh@170 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:22.059 22:20:45 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:22.059 22:20:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:22.059 22:20:45 -- common/autotest_common.sh@10 -- # set +x 00:04:22.059 ************************************ 00:04:22.059 START TEST skip_rpc 00:04:22.059 ************************************ 00:04:22.059 22:20:45 skip_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:22.059 * Looking for test storage... 
00:04:22.059 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:22.059 22:20:45 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:22.059 22:20:45 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:22.059 22:20:45 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:22.059 22:20:45 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:22.059 22:20:45 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:22.059 22:20:45 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:22.059 ************************************ 00:04:22.059 START TEST skip_rpc 00:04:22.059 ************************************ 00:04:22.059 22:20:46 skip_rpc.skip_rpc -- common/autotest_common.sh@1123 -- # test_skip_rpc 00:04:22.059 22:20:46 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=4013768 00:04:22.059 22:20:46 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:22.059 22:20:46 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:22.059 22:20:46 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:04:22.318 [2024-07-15 22:20:46.072526] Starting SPDK v24.09-pre git sha1 f8598a71f / DPDK 24.03.0 initialization... 00:04:22.318 [2024-07-15 22:20:46.072563] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4013768 ] 00:04:22.318 EAL: No free 2048 kB hugepages reported on node 1 00:04:22.318 [2024-07-15 22:20:46.124947] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:22.318 [2024-07-15 22:20:46.197369] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:27.607 22:20:51 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:27.607 22:20:51 skip_rpc.skip_rpc -- common/autotest_common.sh@648 -- # local es=0 00:04:27.607 22:20:51 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:27.607 22:20:51 skip_rpc.skip_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:04:27.607 22:20:51 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:27.607 22:20:51 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:04:27.607 22:20:51 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:27.607 22:20:51 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # rpc_cmd spdk_get_version 00:04:27.607 22:20:51 skip_rpc.skip_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:27.607 22:20:51 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:27.607 22:20:51 skip_rpc.skip_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:04:27.607 22:20:51 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # es=1 00:04:27.607 22:20:51 skip_rpc.skip_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:04:27.607 22:20:51 skip_rpc.skip_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:04:27.607 22:20:51 skip_rpc.skip_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:04:27.607 22:20:51 skip_rpc.skip_rpc -- 
rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:27.607 22:20:51 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 4013768 00:04:27.607 22:20:51 skip_rpc.skip_rpc -- common/autotest_common.sh@948 -- # '[' -z 4013768 ']' 00:04:27.607 22:20:51 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # kill -0 4013768 00:04:27.607 22:20:51 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # uname 00:04:27.607 22:20:51 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:27.607 22:20:51 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4013768 00:04:27.607 22:20:51 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:27.607 22:20:51 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:27.607 22:20:51 skip_rpc.skip_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4013768' 00:04:27.607 killing process with pid 4013768 00:04:27.607 22:20:51 skip_rpc.skip_rpc -- common/autotest_common.sh@967 -- # kill 4013768 00:04:27.607 22:20:51 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # wait 4013768 00:04:27.607 00:04:27.607 real 0m5.370s 00:04:27.607 user 0m5.139s 00:04:27.607 sys 0m0.250s 00:04:27.607 22:20:51 skip_rpc.skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:27.607 22:20:51 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:27.607 ************************************ 00:04:27.607 END TEST skip_rpc 00:04:27.607 ************************************ 00:04:27.607 22:20:51 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:27.607 22:20:51 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:27.607 22:20:51 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:27.607 22:20:51 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:27.607 22:20:51 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:27.607 ************************************ 00:04:27.607 START TEST skip_rpc_with_json 00:04:27.607 ************************************ 00:04:27.607 22:20:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_json 00:04:27.607 22:20:51 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:27.607 22:20:51 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=4014720 00:04:27.607 22:20:51 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:27.607 22:20:51 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:27.607 22:20:51 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 4014720 00:04:27.607 22:20:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@829 -- # '[' -z 4014720 ']' 00:04:27.607 22:20:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:27.607 22:20:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:27.607 22:20:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:27.607 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
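The skip_rpc case that just passed reduces to: start spdk_tgt with --no-rpc-server, confirm that an RPC such as spdk_get_version is rejected, then kill the process cleanly. A rough standalone equivalent, assuming the same build paths as this job (the simple &&/|| failure check stands in for the test's NOT/es bookkeeping):

  sudo ./build/bin/spdk_tgt --no-rpc-server -m 0x1 &
  pid=$!
  ./scripts/rpc.py spdk_get_version && echo 'unexpected: RPC succeeded' || echo 'RPC refused, as expected'
  sudo kill $pid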
00:04:27.607 22:20:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:27.607 22:20:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:27.607 [2024-07-15 22:20:51.500607] Starting SPDK v24.09-pre git sha1 f8598a71f / DPDK 24.03.0 initialization... 00:04:27.607 [2024-07-15 22:20:51.500647] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4014720 ] 00:04:27.607 EAL: No free 2048 kB hugepages reported on node 1 00:04:27.607 [2024-07-15 22:20:51.552581] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:27.866 [2024-07-15 22:20:51.633392] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:28.432 22:20:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:28.432 22:20:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@862 -- # return 0 00:04:28.432 22:20:52 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:04:28.432 22:20:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:28.432 22:20:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:28.432 [2024-07-15 22:20:52.298267] nvmf_rpc.c:2569:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:04:28.432 request: 00:04:28.432 { 00:04:28.432 "trtype": "tcp", 00:04:28.432 "method": "nvmf_get_transports", 00:04:28.433 "req_id": 1 00:04:28.433 } 00:04:28.433 Got JSON-RPC error response 00:04:28.433 response: 00:04:28.433 { 00:04:28.433 "code": -19, 00:04:28.433 "message": "No such device" 00:04:28.433 } 00:04:28.433 22:20:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:04:28.433 22:20:52 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:04:28.433 22:20:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:28.433 22:20:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:28.433 [2024-07-15 22:20:52.310352] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:28.433 22:20:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:28.433 22:20:52 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:04:28.433 22:20:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:28.433 22:20:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:28.690 22:20:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:28.690 22:20:52 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:28.690 { 00:04:28.691 "subsystems": [ 00:04:28.691 { 00:04:28.691 "subsystem": "vfio_user_target", 00:04:28.691 "config": null 00:04:28.691 }, 00:04:28.691 { 00:04:28.691 "subsystem": "keyring", 00:04:28.691 "config": [] 00:04:28.691 }, 00:04:28.691 { 00:04:28.691 "subsystem": "iobuf", 00:04:28.691 "config": [ 00:04:28.691 { 00:04:28.691 "method": "iobuf_set_options", 00:04:28.691 "params": { 00:04:28.691 "small_pool_count": 8192, 00:04:28.691 "large_pool_count": 1024, 00:04:28.691 "small_bufsize": 8192, 00:04:28.691 "large_bufsize": 
135168 00:04:28.691 } 00:04:28.691 } 00:04:28.691 ] 00:04:28.691 }, 00:04:28.691 { 00:04:28.691 "subsystem": "sock", 00:04:28.691 "config": [ 00:04:28.691 { 00:04:28.691 "method": "sock_set_default_impl", 00:04:28.691 "params": { 00:04:28.691 "impl_name": "posix" 00:04:28.691 } 00:04:28.691 }, 00:04:28.691 { 00:04:28.691 "method": "sock_impl_set_options", 00:04:28.691 "params": { 00:04:28.691 "impl_name": "ssl", 00:04:28.691 "recv_buf_size": 4096, 00:04:28.691 "send_buf_size": 4096, 00:04:28.691 "enable_recv_pipe": true, 00:04:28.691 "enable_quickack": false, 00:04:28.691 "enable_placement_id": 0, 00:04:28.691 "enable_zerocopy_send_server": true, 00:04:28.691 "enable_zerocopy_send_client": false, 00:04:28.691 "zerocopy_threshold": 0, 00:04:28.691 "tls_version": 0, 00:04:28.691 "enable_ktls": false 00:04:28.691 } 00:04:28.691 }, 00:04:28.691 { 00:04:28.691 "method": "sock_impl_set_options", 00:04:28.691 "params": { 00:04:28.691 "impl_name": "posix", 00:04:28.691 "recv_buf_size": 2097152, 00:04:28.691 "send_buf_size": 2097152, 00:04:28.691 "enable_recv_pipe": true, 00:04:28.691 "enable_quickack": false, 00:04:28.691 "enable_placement_id": 0, 00:04:28.691 "enable_zerocopy_send_server": true, 00:04:28.691 "enable_zerocopy_send_client": false, 00:04:28.691 "zerocopy_threshold": 0, 00:04:28.691 "tls_version": 0, 00:04:28.691 "enable_ktls": false 00:04:28.691 } 00:04:28.691 } 00:04:28.691 ] 00:04:28.691 }, 00:04:28.691 { 00:04:28.691 "subsystem": "vmd", 00:04:28.691 "config": [] 00:04:28.691 }, 00:04:28.691 { 00:04:28.691 "subsystem": "accel", 00:04:28.691 "config": [ 00:04:28.691 { 00:04:28.691 "method": "accel_set_options", 00:04:28.691 "params": { 00:04:28.691 "small_cache_size": 128, 00:04:28.691 "large_cache_size": 16, 00:04:28.691 "task_count": 2048, 00:04:28.691 "sequence_count": 2048, 00:04:28.691 "buf_count": 2048 00:04:28.691 } 00:04:28.691 } 00:04:28.691 ] 00:04:28.691 }, 00:04:28.691 { 00:04:28.691 "subsystem": "bdev", 00:04:28.691 "config": [ 00:04:28.691 { 00:04:28.691 "method": "bdev_set_options", 00:04:28.691 "params": { 00:04:28.691 "bdev_io_pool_size": 65535, 00:04:28.691 "bdev_io_cache_size": 256, 00:04:28.691 "bdev_auto_examine": true, 00:04:28.691 "iobuf_small_cache_size": 128, 00:04:28.691 "iobuf_large_cache_size": 16 00:04:28.691 } 00:04:28.691 }, 00:04:28.691 { 00:04:28.691 "method": "bdev_raid_set_options", 00:04:28.691 "params": { 00:04:28.691 "process_window_size_kb": 1024 00:04:28.691 } 00:04:28.691 }, 00:04:28.691 { 00:04:28.691 "method": "bdev_iscsi_set_options", 00:04:28.691 "params": { 00:04:28.691 "timeout_sec": 30 00:04:28.691 } 00:04:28.691 }, 00:04:28.691 { 00:04:28.691 "method": "bdev_nvme_set_options", 00:04:28.691 "params": { 00:04:28.691 "action_on_timeout": "none", 00:04:28.691 "timeout_us": 0, 00:04:28.691 "timeout_admin_us": 0, 00:04:28.691 "keep_alive_timeout_ms": 10000, 00:04:28.691 "arbitration_burst": 0, 00:04:28.691 "low_priority_weight": 0, 00:04:28.691 "medium_priority_weight": 0, 00:04:28.691 "high_priority_weight": 0, 00:04:28.691 "nvme_adminq_poll_period_us": 10000, 00:04:28.691 "nvme_ioq_poll_period_us": 0, 00:04:28.691 "io_queue_requests": 0, 00:04:28.691 "delay_cmd_submit": true, 00:04:28.691 "transport_retry_count": 4, 00:04:28.691 "bdev_retry_count": 3, 00:04:28.691 "transport_ack_timeout": 0, 00:04:28.691 "ctrlr_loss_timeout_sec": 0, 00:04:28.691 "reconnect_delay_sec": 0, 00:04:28.691 "fast_io_fail_timeout_sec": 0, 00:04:28.691 "disable_auto_failback": false, 00:04:28.691 "generate_uuids": false, 00:04:28.691 "transport_tos": 0, 
00:04:28.691 "nvme_error_stat": false, 00:04:28.691 "rdma_srq_size": 0, 00:04:28.691 "io_path_stat": false, 00:04:28.691 "allow_accel_sequence": false, 00:04:28.691 "rdma_max_cq_size": 0, 00:04:28.691 "rdma_cm_event_timeout_ms": 0, 00:04:28.691 "dhchap_digests": [ 00:04:28.691 "sha256", 00:04:28.691 "sha384", 00:04:28.691 "sha512" 00:04:28.691 ], 00:04:28.691 "dhchap_dhgroups": [ 00:04:28.691 "null", 00:04:28.691 "ffdhe2048", 00:04:28.691 "ffdhe3072", 00:04:28.691 "ffdhe4096", 00:04:28.691 "ffdhe6144", 00:04:28.691 "ffdhe8192" 00:04:28.691 ] 00:04:28.691 } 00:04:28.691 }, 00:04:28.691 { 00:04:28.691 "method": "bdev_nvme_set_hotplug", 00:04:28.691 "params": { 00:04:28.691 "period_us": 100000, 00:04:28.691 "enable": false 00:04:28.691 } 00:04:28.691 }, 00:04:28.691 { 00:04:28.691 "method": "bdev_wait_for_examine" 00:04:28.691 } 00:04:28.691 ] 00:04:28.691 }, 00:04:28.691 { 00:04:28.691 "subsystem": "scsi", 00:04:28.691 "config": null 00:04:28.691 }, 00:04:28.691 { 00:04:28.691 "subsystem": "scheduler", 00:04:28.691 "config": [ 00:04:28.691 { 00:04:28.691 "method": "framework_set_scheduler", 00:04:28.691 "params": { 00:04:28.691 "name": "static" 00:04:28.691 } 00:04:28.691 } 00:04:28.691 ] 00:04:28.691 }, 00:04:28.691 { 00:04:28.691 "subsystem": "vhost_scsi", 00:04:28.691 "config": [] 00:04:28.691 }, 00:04:28.691 { 00:04:28.691 "subsystem": "vhost_blk", 00:04:28.691 "config": [] 00:04:28.691 }, 00:04:28.691 { 00:04:28.691 "subsystem": "ublk", 00:04:28.691 "config": [] 00:04:28.691 }, 00:04:28.691 { 00:04:28.691 "subsystem": "nbd", 00:04:28.691 "config": [] 00:04:28.691 }, 00:04:28.691 { 00:04:28.691 "subsystem": "nvmf", 00:04:28.691 "config": [ 00:04:28.691 { 00:04:28.691 "method": "nvmf_set_config", 00:04:28.691 "params": { 00:04:28.691 "discovery_filter": "match_any", 00:04:28.691 "admin_cmd_passthru": { 00:04:28.691 "identify_ctrlr": false 00:04:28.691 } 00:04:28.691 } 00:04:28.691 }, 00:04:28.691 { 00:04:28.691 "method": "nvmf_set_max_subsystems", 00:04:28.691 "params": { 00:04:28.691 "max_subsystems": 1024 00:04:28.691 } 00:04:28.691 }, 00:04:28.691 { 00:04:28.691 "method": "nvmf_set_crdt", 00:04:28.691 "params": { 00:04:28.691 "crdt1": 0, 00:04:28.691 "crdt2": 0, 00:04:28.691 "crdt3": 0 00:04:28.691 } 00:04:28.691 }, 00:04:28.691 { 00:04:28.691 "method": "nvmf_create_transport", 00:04:28.691 "params": { 00:04:28.691 "trtype": "TCP", 00:04:28.691 "max_queue_depth": 128, 00:04:28.691 "max_io_qpairs_per_ctrlr": 127, 00:04:28.691 "in_capsule_data_size": 4096, 00:04:28.691 "max_io_size": 131072, 00:04:28.691 "io_unit_size": 131072, 00:04:28.691 "max_aq_depth": 128, 00:04:28.691 "num_shared_buffers": 511, 00:04:28.691 "buf_cache_size": 4294967295, 00:04:28.691 "dif_insert_or_strip": false, 00:04:28.691 "zcopy": false, 00:04:28.691 "c2h_success": true, 00:04:28.691 "sock_priority": 0, 00:04:28.691 "abort_timeout_sec": 1, 00:04:28.691 "ack_timeout": 0, 00:04:28.691 "data_wr_pool_size": 0 00:04:28.691 } 00:04:28.691 } 00:04:28.691 ] 00:04:28.691 }, 00:04:28.691 { 00:04:28.691 "subsystem": "iscsi", 00:04:28.691 "config": [ 00:04:28.691 { 00:04:28.691 "method": "iscsi_set_options", 00:04:28.691 "params": { 00:04:28.691 "node_base": "iqn.2016-06.io.spdk", 00:04:28.691 "max_sessions": 128, 00:04:28.691 "max_connections_per_session": 2, 00:04:28.691 "max_queue_depth": 64, 00:04:28.691 "default_time2wait": 2, 00:04:28.691 "default_time2retain": 20, 00:04:28.691 "first_burst_length": 8192, 00:04:28.691 "immediate_data": true, 00:04:28.691 "allow_duplicated_isid": false, 00:04:28.691 
"error_recovery_level": 0, 00:04:28.691 "nop_timeout": 60, 00:04:28.691 "nop_in_interval": 30, 00:04:28.691 "disable_chap": false, 00:04:28.691 "require_chap": false, 00:04:28.691 "mutual_chap": false, 00:04:28.691 "chap_group": 0, 00:04:28.691 "max_large_datain_per_connection": 64, 00:04:28.691 "max_r2t_per_connection": 4, 00:04:28.691 "pdu_pool_size": 36864, 00:04:28.691 "immediate_data_pool_size": 16384, 00:04:28.691 "data_out_pool_size": 2048 00:04:28.691 } 00:04:28.691 } 00:04:28.691 ] 00:04:28.691 } 00:04:28.691 ] 00:04:28.691 } 00:04:28.691 22:20:52 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:04:28.691 22:20:52 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 4014720 00:04:28.691 22:20:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 4014720 ']' 00:04:28.691 22:20:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 4014720 00:04:28.691 22:20:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:04:28.691 22:20:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:28.691 22:20:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4014720 00:04:28.691 22:20:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:28.691 22:20:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:28.691 22:20:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4014720' 00:04:28.691 killing process with pid 4014720 00:04:28.691 22:20:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 4014720 00:04:28.691 22:20:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 4014720 00:04:28.949 22:20:52 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=4014957 00:04:28.949 22:20:52 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:04:28.949 22:20:52 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:34.261 22:20:57 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 4014957 00:04:34.261 22:20:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 4014957 ']' 00:04:34.261 22:20:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 4014957 00:04:34.261 22:20:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:04:34.261 22:20:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:34.261 22:20:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4014957 00:04:34.261 22:20:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:34.261 22:20:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:34.261 22:20:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4014957' 00:04:34.261 killing process with pid 4014957 00:04:34.261 22:20:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 4014957 00:04:34.261 22:20:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 4014957 
00:04:34.261 22:20:58 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:34.261 22:20:58 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:34.261 00:04:34.261 real 0m6.740s 00:04:34.261 user 0m6.582s 00:04:34.261 sys 0m0.583s 00:04:34.261 22:20:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:34.261 22:20:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:34.261 ************************************ 00:04:34.261 END TEST skip_rpc_with_json 00:04:34.261 ************************************ 00:04:34.261 22:20:58 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:34.261 22:20:58 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:04:34.261 22:20:58 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:34.261 22:20:58 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:34.261 22:20:58 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:34.521 ************************************ 00:04:34.521 START TEST skip_rpc_with_delay 00:04:34.521 ************************************ 00:04:34.521 22:20:58 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_delay 00:04:34.521 22:20:58 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:34.521 22:20:58 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@648 -- # local es=0 00:04:34.521 22:20:58 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:34.521 22:20:58 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:34.521 22:20:58 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:34.521 22:20:58 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:34.521 22:20:58 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:34.521 22:20:58 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:34.521 22:20:58 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:34.521 22:20:58 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:34.521 22:20:58 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:34.521 22:20:58 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:34.521 [2024-07-15 22:20:58.292064] app.c: 832:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:04:34.521 [2024-07-15 22:20:58.292127] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:04:34.521 22:20:58 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # es=1 00:04:34.521 22:20:58 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:04:34.521 22:20:58 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:04:34.521 22:20:58 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:04:34.521 00:04:34.521 real 0m0.051s 00:04:34.521 user 0m0.029s 00:04:34.521 sys 0m0.022s 00:04:34.521 22:20:58 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:34.521 22:20:58 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:04:34.521 ************************************ 00:04:34.521 END TEST skip_rpc_with_delay 00:04:34.521 ************************************ 00:04:34.521 22:20:58 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:34.521 22:20:58 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:04:34.521 22:20:58 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:04:34.521 22:20:58 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:04:34.521 22:20:58 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:34.521 22:20:58 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:34.521 22:20:58 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:34.521 ************************************ 00:04:34.521 START TEST exit_on_failed_rpc_init 00:04:34.521 ************************************ 00:04:34.521 22:20:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1123 -- # test_exit_on_failed_rpc_init 00:04:34.521 22:20:58 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=4015927 00:04:34.521 22:20:58 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 4015927 00:04:34.521 22:20:58 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:34.521 22:20:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@829 -- # '[' -z 4015927 ']' 00:04:34.521 22:20:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:34.521 22:20:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:34.521 22:20:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:34.521 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:34.521 22:20:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:34.521 22:20:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:34.521 [2024-07-15 22:20:58.424494] Starting SPDK v24.09-pre git sha1 f8598a71f / DPDK 24.03.0 initialization... 
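test_skip_rpc_with_delay passes precisely because spdk_tgt refuses '--wait-for-rpc' without an RPC server; the NOT wrapper traced above inverts that failure into a test success. A simplified sketch of NOT assembled from the es handling visible in the trace (the valid_exec_arg lookup via type -t/type -P is elided, and folding every non-zero status to 1 is a simplification of the real case statement):

    NOT() {
        local es=0
        # run the command, capturing a failure status instead of aborting under set -e
        "$@" || es=$?
        # statuses above 128 encode death by signal; strip the offset (es=234 -> 106 in the trace)
        (( es > 128 )) && es=$(( es - 128 ))
        # fold any recognized failure code down to a plain 1
        (( es != 0 )) && es=1
        # NOT succeeds exactly when the wrapped command failed
        (( !es == 0 ))
    }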
00:04:34.521 [2024-07-15 22:20:58.424535] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4015927 ] 00:04:34.521 EAL: No free 2048 kB hugepages reported on node 1 00:04:34.521 [2024-07-15 22:20:58.475623] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:34.781 [2024-07-15 22:20:58.555479] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:35.350 22:20:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:35.350 22:20:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@862 -- # return 0 00:04:35.350 22:20:59 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:35.350 22:20:59 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:35.350 22:20:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@648 -- # local es=0 00:04:35.350 22:20:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:35.350 22:20:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:35.350 22:20:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:35.350 22:20:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:35.350 22:20:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:35.350 22:20:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:35.350 22:20:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:35.350 22:20:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:35.350 22:20:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:35.350 22:20:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:35.350 [2024-07-15 22:20:59.266000] Starting SPDK v24.09-pre git sha1 f8598a71f / DPDK 24.03.0 initialization... 
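Between launching the first spdk_tgt and arming the trap, waitforlisten (traced above with rpc_addr=/var/tmp/spdk.sock and max_retries=100) blocks until the target is ready. The actual readiness probe is not visible in this log, so the socket test below is an assumption; the pid guard and retry budget come straight from the trace:

    waitforlisten() {
        local pid=$1
        local rpc_addr=${2:-/var/tmp/spdk.sock}
        local max_retries=100 i
        echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
        for (( i = 0; i < max_retries; i++ )); do
            # give up early if the target died during startup
            kill -0 "$pid" 2>/dev/null || return 1
            # assumption: the RPC socket appearing means the app is listening
            [ -S "$rpc_addr" ] && return 0
            sleep 0.1
        done
        return 1
    }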
00:04:35.350 [2024-07-15 22:20:59.266048] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4015961 ] 00:04:35.350 EAL: No free 2048 kB hugepages reported on node 1 00:04:35.350 [2024-07-15 22:20:59.317067] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:35.609 [2024-07-15 22:20:59.391702] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:35.609 [2024-07-15 22:20:59.391782] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:04:35.609 [2024-07-15 22:20:59.391792] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:04:35.609 [2024-07-15 22:20:59.391798] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:35.609 22:20:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # es=234 00:04:35.609 22:20:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:04:35.609 22:20:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@660 -- # es=106 00:04:35.609 22:20:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # case "$es" in 00:04:35.609 22:20:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@668 -- # es=1 00:04:35.609 22:20:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:04:35.609 22:20:59 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:04:35.609 22:20:59 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 4015927 00:04:35.609 22:20:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@948 -- # '[' -z 4015927 ']' 00:04:35.609 22:20:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # kill -0 4015927 00:04:35.609 22:20:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # uname 00:04:35.609 22:20:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:35.609 22:20:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4015927 00:04:35.609 22:20:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:35.609 22:20:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:35.609 22:20:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4015927' 00:04:35.609 killing process with pid 4015927 00:04:35.609 22:20:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@967 -- # kill 4015927 00:04:35.609 22:20:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # wait 4015927 00:04:35.869 00:04:35.869 real 0m1.443s 00:04:35.869 user 0m1.673s 00:04:35.869 sys 0m0.372s 00:04:35.869 22:20:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:35.869 22:20:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:35.869 ************************************ 00:04:35.869 END TEST exit_on_failed_rpc_init 00:04:35.869 ************************************ 00:04:36.128 22:20:59 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:36.128 22:20:59 skip_rpc -- 
rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:36.128 00:04:36.128 real 0m13.940s 00:04:36.128 user 0m13.566s 00:04:36.128 sys 0m1.442s 00:04:36.128 22:20:59 skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:36.128 22:20:59 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:36.128 ************************************ 00:04:36.128 END TEST skip_rpc 00:04:36.128 ************************************ 00:04:36.128 22:20:59 -- common/autotest_common.sh@1142 -- # return 0 00:04:36.128 22:20:59 -- spdk/autotest.sh@171 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:36.128 22:20:59 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:36.128 22:20:59 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:36.128 22:20:59 -- common/autotest_common.sh@10 -- # set +x 00:04:36.128 ************************************ 00:04:36.128 START TEST rpc_client 00:04:36.128 ************************************ 00:04:36.128 22:20:59 rpc_client -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:36.128 * Looking for test storage... 00:04:36.128 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:04:36.128 22:21:00 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:04:36.128 OK 00:04:36.128 22:21:00 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:36.128 00:04:36.128 real 0m0.110s 00:04:36.128 user 0m0.056s 00:04:36.128 sys 0m0.061s 00:04:36.128 22:21:00 rpc_client -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:36.129 22:21:00 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:04:36.129 ************************************ 00:04:36.129 END TEST rpc_client 00:04:36.129 ************************************ 00:04:36.129 22:21:00 -- common/autotest_common.sh@1142 -- # return 0 00:04:36.129 22:21:00 -- spdk/autotest.sh@172 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:04:36.129 22:21:00 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:36.129 22:21:00 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:36.129 22:21:00 -- common/autotest_common.sh@10 -- # set +x 00:04:36.129 ************************************ 00:04:36.129 START TEST json_config 00:04:36.129 ************************************ 00:04:36.129 22:21:00 json_config -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:04:36.388 22:21:00 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:36.388 22:21:00 json_config -- nvmf/common.sh@7 -- # uname -s 00:04:36.388 22:21:00 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:36.388 22:21:00 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:36.388 22:21:00 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:36.388 22:21:00 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:36.388 22:21:00 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:36.388 22:21:00 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:36.388 22:21:00 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:36.388 
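Every suite in this log is driven by run_test from autotest_common.sh, which accounts for the asterisk banners and the real/user/sys triplets printed around each test. A sketch reconstructed from those banners; the argument guard mirrors the traced '[' 2 -le 1 ']':

    run_test() {
        local test_name=$1; shift
        # need a name plus at least one command word
        [ "$#" -lt 1 ] && return 1
        echo "************************************"
        echo "START TEST $test_name"
        echo "************************************"
        time "$@"
        local rc=$?
        echo "************************************"
        echo "END TEST $test_name"
        echo "************************************"
        return $rc
    }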
22:21:00 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:36.388 22:21:00 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:36.388 22:21:00 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:36.388 22:21:00 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:04:36.388 22:21:00 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:04:36.388 22:21:00 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:36.388 22:21:00 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:36.388 22:21:00 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:36.388 22:21:00 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:36.388 22:21:00 json_config -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:36.388 22:21:00 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:36.388 22:21:00 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:36.388 22:21:00 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:36.388 22:21:00 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:36.388 22:21:00 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:36.388 22:21:00 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:36.388 22:21:00 json_config -- paths/export.sh@5 -- # export PATH 00:04:36.388 22:21:00 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:36.388 22:21:00 json_config -- nvmf/common.sh@47 -- # : 0 00:04:36.388 22:21:00 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:04:36.388 22:21:00 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:04:36.389 22:21:00 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:36.389 22:21:00 
json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:36.389 22:21:00 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:36.389 22:21:00 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:04:36.389 22:21:00 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:04:36.389 22:21:00 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:04:36.389 22:21:00 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:04:36.389 22:21:00 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:04:36.389 22:21:00 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:04:36.389 22:21:00 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:04:36.389 22:21:00 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:36.389 22:21:00 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:04:36.389 22:21:00 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:04:36.389 22:21:00 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:04:36.389 22:21:00 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:04:36.389 22:21:00 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:04:36.389 22:21:00 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:04:36.389 22:21:00 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:04:36.389 22:21:00 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:04:36.389 22:21:00 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:04:36.389 22:21:00 json_config -- json_config/json_config.sh@355 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:36.389 22:21:00 json_config -- json_config/json_config.sh@356 -- # echo 'INFO: JSON configuration test init' 00:04:36.389 INFO: JSON configuration test init 00:04:36.389 22:21:00 json_config -- json_config/json_config.sh@357 -- # json_config_test_init 00:04:36.389 22:21:00 json_config -- json_config/json_config.sh@262 -- # timing_enter json_config_test_init 00:04:36.389 22:21:00 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:36.389 22:21:00 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:36.389 22:21:00 json_config -- json_config/json_config.sh@263 -- # timing_enter json_config_setup_target 00:04:36.389 22:21:00 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:36.389 22:21:00 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:36.389 22:21:00 json_config -- json_config/json_config.sh@265 -- # json_config_test_start_app target --wait-for-rpc 00:04:36.389 22:21:00 json_config -- json_config/common.sh@9 -- # local app=target 00:04:36.389 22:21:00 json_config -- json_config/common.sh@10 -- # shift 00:04:36.389 22:21:00 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:36.389 22:21:00 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:36.389 22:21:00 
json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:36.389 22:21:00 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:36.389 22:21:00 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:36.389 22:21:00 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=4016304 00:04:36.389 22:21:00 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:36.389 Waiting for target to run... 00:04:36.389 22:21:00 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:04:36.389 22:21:00 json_config -- json_config/common.sh@25 -- # waitforlisten 4016304 /var/tmp/spdk_tgt.sock 00:04:36.389 22:21:00 json_config -- common/autotest_common.sh@829 -- # '[' -z 4016304 ']' 00:04:36.389 22:21:00 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:36.389 22:21:00 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:36.389 22:21:00 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:36.389 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:36.389 22:21:00 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:36.389 22:21:00 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:36.389 [2024-07-15 22:21:00.244817] Starting SPDK v24.09-pre git sha1 f8598a71f / DPDK 24.03.0 initialization... 00:04:36.389 [2024-07-15 22:21:00.244862] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4016304 ] 00:04:36.389 EAL: No free 2048 kB hugepages reported on node 1 00:04:36.957 [2024-07-15 22:21:00.673502] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:36.957 [2024-07-15 22:21:00.761763] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:37.216 22:21:01 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:37.216 22:21:01 json_config -- common/autotest_common.sh@862 -- # return 0 00:04:37.216 22:21:01 json_config -- json_config/common.sh@26 -- # echo '' 00:04:37.216 00:04:37.216 22:21:01 json_config -- json_config/json_config.sh@269 -- # create_accel_config 00:04:37.216 22:21:01 json_config -- json_config/json_config.sh@93 -- # timing_enter create_accel_config 00:04:37.216 22:21:01 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:37.216 22:21:01 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:37.216 22:21:01 json_config -- json_config/json_config.sh@95 -- # [[ 0 -eq 1 ]] 00:04:37.216 22:21:01 json_config -- json_config/json_config.sh@101 -- # timing_exit create_accel_config 00:04:37.216 22:21:01 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:37.216 22:21:01 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:37.216 22:21:01 json_config -- json_config/json_config.sh@273 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:04:37.216 22:21:01 json_config -- json_config/json_config.sh@274 -- # tgt_rpc load_config 00:04:37.216 22:21:01 json_config -- json_config/common.sh@57 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:04:40.509 22:21:04 json_config -- json_config/json_config.sh@276 -- # tgt_check_notification_types 00:04:40.509 22:21:04 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:04:40.509 22:21:04 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:40.509 22:21:04 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:40.509 22:21:04 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:04:40.509 22:21:04 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:04:40.509 22:21:04 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:04:40.509 22:21:04 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:04:40.509 22:21:04 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:04:40.509 22:21:04 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:04:40.509 22:21:04 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:04:40.509 22:21:04 json_config -- json_config/json_config.sh@48 -- # local get_types 00:04:40.509 22:21:04 json_config -- json_config/json_config.sh@49 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:04:40.509 22:21:04 json_config -- json_config/json_config.sh@54 -- # timing_exit tgt_check_notification_types 00:04:40.509 22:21:04 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:40.509 22:21:04 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:40.509 22:21:04 json_config -- json_config/json_config.sh@55 -- # return 0 00:04:40.509 22:21:04 json_config -- json_config/json_config.sh@278 -- # [[ 0 -eq 1 ]] 00:04:40.509 22:21:04 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:04:40.509 22:21:04 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:04:40.509 22:21:04 json_config -- json_config/json_config.sh@290 -- # [[ 1 -eq 1 ]] 00:04:40.509 22:21:04 json_config -- json_config/json_config.sh@291 -- # create_nvmf_subsystem_config 00:04:40.509 22:21:04 json_config -- json_config/json_config.sh@230 -- # timing_enter create_nvmf_subsystem_config 00:04:40.509 22:21:04 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:40.509 22:21:04 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:40.509 22:21:04 json_config -- json_config/json_config.sh@232 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:04:40.509 22:21:04 json_config -- json_config/json_config.sh@233 -- # [[ tcp == \r\d\m\a ]] 00:04:40.509 22:21:04 json_config -- json_config/json_config.sh@237 -- # [[ -z 127.0.0.1 ]] 00:04:40.509 22:21:04 json_config -- json_config/json_config.sh@242 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:40.509 22:21:04 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:40.769 MallocForNvmf0 00:04:40.769 22:21:04 json_config -- json_config/json_config.sh@243 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:40.769 22:21:04 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock 
bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:40.769 MallocForNvmf1 00:04:40.769 22:21:04 json_config -- json_config/json_config.sh@245 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:04:40.769 22:21:04 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:04:41.067 [2024-07-15 22:21:04.856514] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:41.067 22:21:04 json_config -- json_config/json_config.sh@246 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:41.067 22:21:04 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:41.326 22:21:05 json_config -- json_config/json_config.sh@247 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:41.326 22:21:05 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:41.326 22:21:05 json_config -- json_config/json_config.sh@248 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:41.326 22:21:05 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:41.585 22:21:05 json_config -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:41.585 22:21:05 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:41.585 [2024-07-15 22:21:05.542667] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:41.845 22:21:05 json_config -- json_config/json_config.sh@251 -- # timing_exit create_nvmf_subsystem_config 00:04:41.845 22:21:05 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:41.845 22:21:05 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:41.845 22:21:05 json_config -- json_config/json_config.sh@293 -- # timing_exit json_config_setup_target 00:04:41.845 22:21:05 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:41.845 22:21:05 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:41.845 22:21:05 json_config -- json_config/json_config.sh@295 -- # [[ 0 -eq 1 ]] 00:04:41.845 22:21:05 json_config -- json_config/json_config.sh@300 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:41.845 22:21:05 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:41.845 MallocBdevForConfigChangeCheck 00:04:41.845 22:21:05 json_config -- json_config/json_config.sh@302 -- # timing_exit json_config_test_init 00:04:41.845 22:21:05 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:41.845 22:21:05 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:42.104 22:21:05 json_config -- 
json_config/json_config.sh@359 -- # tgt_rpc save_config 00:04:42.104 22:21:05 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:42.363 22:21:06 json_config -- json_config/json_config.sh@361 -- # echo 'INFO: shutting down applications...' 00:04:42.363 INFO: shutting down applications... 00:04:42.363 22:21:06 json_config -- json_config/json_config.sh@362 -- # [[ 0 -eq 1 ]] 00:04:42.363 22:21:06 json_config -- json_config/json_config.sh@368 -- # json_config_clear target 00:04:42.363 22:21:06 json_config -- json_config/json_config.sh@332 -- # [[ -n 22 ]] 00:04:42.363 22:21:06 json_config -- json_config/json_config.sh@333 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:04:43.744 Calling clear_iscsi_subsystem 00:04:43.744 Calling clear_nvmf_subsystem 00:04:43.744 Calling clear_nbd_subsystem 00:04:43.744 Calling clear_ublk_subsystem 00:04:43.744 Calling clear_vhost_blk_subsystem 00:04:43.744 Calling clear_vhost_scsi_subsystem 00:04:43.744 Calling clear_bdev_subsystem 00:04:43.744 22:21:07 json_config -- json_config/json_config.sh@337 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:04:43.744 22:21:07 json_config -- json_config/json_config.sh@343 -- # count=100 00:04:43.744 22:21:07 json_config -- json_config/json_config.sh@344 -- # '[' 100 -gt 0 ']' 00:04:43.744 22:21:07 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:43.744 22:21:07 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:04:43.744 22:21:07 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:04:44.311 22:21:08 json_config -- json_config/json_config.sh@345 -- # break 00:04:44.311 22:21:08 json_config -- json_config/json_config.sh@350 -- # '[' 100 -eq 0 ']' 00:04:44.311 22:21:08 json_config -- json_config/json_config.sh@369 -- # json_config_test_shutdown_app target 00:04:44.311 22:21:08 json_config -- json_config/common.sh@31 -- # local app=target 00:04:44.311 22:21:08 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:44.311 22:21:08 json_config -- json_config/common.sh@35 -- # [[ -n 4016304 ]] 00:04:44.311 22:21:08 json_config -- json_config/common.sh@38 -- # kill -SIGINT 4016304 00:04:44.311 22:21:08 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:44.311 22:21:08 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:44.311 22:21:08 json_config -- json_config/common.sh@41 -- # kill -0 4016304 00:04:44.311 22:21:08 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:04:44.570 22:21:08 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:04:44.570 22:21:08 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:44.570 22:21:08 json_config -- json_config/common.sh@41 -- # kill -0 4016304 00:04:44.570 22:21:08 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:44.570 22:21:08 json_config -- json_config/common.sh@43 -- # break 00:04:44.570 22:21:08 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:44.570 22:21:08 json_config -- json_config/common.sh@53 -- # echo 'SPDK target 
shutdown done' 00:04:44.570 SPDK target shutdown done 00:04:44.570 22:21:08 json_config -- json_config/json_config.sh@371 -- # echo 'INFO: relaunching applications...' 00:04:44.570 INFO: relaunching applications... 00:04:44.570 22:21:08 json_config -- json_config/json_config.sh@372 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:44.570 22:21:08 json_config -- json_config/common.sh@9 -- # local app=target 00:04:44.570 22:21:08 json_config -- json_config/common.sh@10 -- # shift 00:04:44.570 22:21:08 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:44.570 22:21:08 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:44.570 22:21:08 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:44.570 22:21:08 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:44.570 22:21:08 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:44.570 22:21:08 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=4018076 00:04:44.570 22:21:08 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:44.570 Waiting for target to run... 00:04:44.570 22:21:08 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:44.570 22:21:08 json_config -- json_config/common.sh@25 -- # waitforlisten 4018076 /var/tmp/spdk_tgt.sock 00:04:44.570 22:21:08 json_config -- common/autotest_common.sh@829 -- # '[' -z 4018076 ']' 00:04:44.570 22:21:08 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:44.570 22:21:08 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:44.570 22:21:08 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:44.570 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:44.570 22:21:08 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:44.570 22:21:08 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:44.829 [2024-07-15 22:21:08.582776] Starting SPDK v24.09-pre git sha1 f8598a71f / DPDK 24.03.0 initialization... 
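The graceful stop traced just before this relaunch comes from json_config_test_shutdown_app in json_config/common.sh: SIGINT the target, then poll with kill -0 for up to 30 half-second intervals. Approximately:

    json_config_test_shutdown_app() {
        local app=$1
        [[ -n "${app_pid[$app]}" ]] || return 1
        # ask the target to exit cleanly
        kill -SIGINT "${app_pid[$app]}"
        local i
        for (( i = 0; i < 30; i++ )); do
            # kill -0 fails once the process is gone
            kill -0 "${app_pid[$app]}" 2>/dev/null || break
            sleep 0.5
        done
        app_pid[$app]=
        echo "SPDK target shutdown done"
    }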
00:04:44.829 [2024-07-15 22:21:08.582831] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4018076 ] 00:04:44.829 EAL: No free 2048 kB hugepages reported on node 1 00:04:45.109 [2024-07-15 22:21:09.010048] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:45.368 [2024-07-15 22:21:09.094277] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:48.659 [2024-07-15 22:21:12.110666] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:48.659 [2024-07-15 22:21:12.142968] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:48.918 22:21:12 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:48.918 22:21:12 json_config -- common/autotest_common.sh@862 -- # return 0 00:04:48.918 22:21:12 json_config -- json_config/common.sh@26 -- # echo '' 00:04:48.918 00:04:48.918 22:21:12 json_config -- json_config/json_config.sh@373 -- # [[ 0 -eq 1 ]] 00:04:48.918 22:21:12 json_config -- json_config/json_config.sh@377 -- # echo 'INFO: Checking if target configuration is the same...' 00:04:48.919 INFO: Checking if target configuration is the same... 00:04:48.919 22:21:12 json_config -- json_config/json_config.sh@378 -- # tgt_rpc save_config 00:04:48.919 22:21:12 json_config -- json_config/json_config.sh@378 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:48.919 22:21:12 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:48.919 + '[' 2 -ne 2 ']' 00:04:48.919 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:04:48.919 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:04:48.919 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:48.919 +++ basename /dev/fd/62 00:04:48.919 ++ mktemp /tmp/62.XXX 00:04:48.919 + tmp_file_1=/tmp/62.bRM 00:04:48.919 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:48.919 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:48.919 + tmp_file_2=/tmp/spdk_tgt_config.json.ySL 00:04:48.919 + ret=0 00:04:48.919 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:49.178 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:49.178 + diff -u /tmp/62.bRM /tmp/spdk_tgt_config.json.ySL 00:04:49.178 + echo 'INFO: JSON config files are the same' 00:04:49.178 INFO: JSON config files are the same 00:04:49.178 + rm /tmp/62.bRM /tmp/spdk_tgt_config.json.ySL 00:04:49.178 + exit 0 00:04:49.178 22:21:13 json_config -- json_config/json_config.sh@379 -- # [[ 0 -eq 1 ]] 00:04:49.178 22:21:13 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:04:49.178 INFO: changing configuration and checking if this can be detected... 
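json_diff.sh (the '+'-prefixed trace above) declares the files identical only after key-sorting both configs through config_filter.py, so JSON field ordering cannot register as drift; note the first argument in this run is /dev/fd/62, the live save_config output fed in through process substitution. A condensed sketch, with the redirections around config_filter.py assumed since xtrace does not show them:

    # json_diff.sh <config_a> <config_b> -- condensed sketch
    [ "$#" -ne 2 ] && exit 1
    rootdir=$(readlink -f "$(dirname "$0")/../..")
    tmp_file_1=$(mktemp "/tmp/$(basename "$1").XXX")
    tmp_file_2=$(mktemp "/tmp/$(basename "$2").XXX")
    # normalize key order so the diff sees content, not layout
    "$rootdir/test/json_config/config_filter.py" -method sort < "$1" > "$tmp_file_1"
    "$rootdir/test/json_config/config_filter.py" -method sort < "$2" > "$tmp_file_2"
    if diff -u "$tmp_file_1" "$tmp_file_2"; then
        echo "INFO: JSON config files are the same"
        rm "$tmp_file_1" "$tmp_file_2"
        exit 0
    fi
    # on mismatch the real script also dumps both files between Start/End markers
    rm "$tmp_file_1" "$tmp_file_2"
    exit 1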
00:04:49.178 22:21:13 json_config -- json_config/json_config.sh@386 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:49.178 22:21:13 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:49.438 22:21:13 json_config -- json_config/json_config.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:49.438 22:21:13 json_config -- json_config/json_config.sh@387 -- # tgt_rpc save_config 00:04:49.438 22:21:13 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:49.438 + '[' 2 -ne 2 ']' 00:04:49.438 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:04:49.438 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:04:49.438 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:49.438 +++ basename /dev/fd/62 00:04:49.438 ++ mktemp /tmp/62.XXX 00:04:49.438 + tmp_file_1=/tmp/62.enY 00:04:49.438 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:49.438 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:49.438 + tmp_file_2=/tmp/spdk_tgt_config.json.gXO 00:04:49.438 + ret=0 00:04:49.438 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:49.697 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:49.697 + diff -u /tmp/62.enY /tmp/spdk_tgt_config.json.gXO 00:04:49.697 + ret=1 00:04:49.697 + echo '=== Start of file: /tmp/62.enY ===' 00:04:49.697 + cat /tmp/62.enY 00:04:49.697 + echo '=== End of file: /tmp/62.enY ===' 00:04:49.697 + echo '' 00:04:49.697 + echo '=== Start of file: /tmp/spdk_tgt_config.json.gXO ===' 00:04:49.697 + cat /tmp/spdk_tgt_config.json.gXO 00:04:49.697 + echo '=== End of file: /tmp/spdk_tgt_config.json.gXO ===' 00:04:49.697 + echo '' 00:04:49.697 + rm /tmp/62.enY /tmp/spdk_tgt_config.json.gXO 00:04:49.697 + exit 1 00:04:49.697 22:21:13 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: configuration change detected.' 00:04:49.697 INFO: configuration change detected. 
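All of the RPCs in this suite, including the bdev_malloc_delete MallocBdevForConfigChangeCheck issued just above to force a detectable config change, go through the tgt_rpc wrapper from json_config/common.sh, which simply pins rpc.py to the target's socket:

    tgt_rpc() {
        # every target RPC goes to the dedicated spdk_tgt socket
        "$rootdir/scripts/rpc.py" -s /var/tmp/spdk_tgt.sock "$@"
    }

    # calls visible earlier in this run:
    tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0
    tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0
    tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck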
00:04:49.697 22:21:13 json_config -- json_config/json_config.sh@394 -- # json_config_test_fini 00:04:49.697 22:21:13 json_config -- json_config/json_config.sh@306 -- # timing_enter json_config_test_fini 00:04:49.697 22:21:13 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:49.697 22:21:13 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:49.697 22:21:13 json_config -- json_config/json_config.sh@307 -- # local ret=0 00:04:49.697 22:21:13 json_config -- json_config/json_config.sh@309 -- # [[ -n '' ]] 00:04:49.697 22:21:13 json_config -- json_config/json_config.sh@317 -- # [[ -n 4018076 ]] 00:04:49.697 22:21:13 json_config -- json_config/json_config.sh@320 -- # cleanup_bdev_subsystem_config 00:04:49.697 22:21:13 json_config -- json_config/json_config.sh@184 -- # timing_enter cleanup_bdev_subsystem_config 00:04:49.697 22:21:13 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:49.697 22:21:13 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:49.697 22:21:13 json_config -- json_config/json_config.sh@186 -- # [[ 0 -eq 1 ]] 00:04:49.697 22:21:13 json_config -- json_config/json_config.sh@193 -- # uname -s 00:04:49.697 22:21:13 json_config -- json_config/json_config.sh@193 -- # [[ Linux = Linux ]] 00:04:49.697 22:21:13 json_config -- json_config/json_config.sh@194 -- # rm -f /sample_aio 00:04:49.697 22:21:13 json_config -- json_config/json_config.sh@197 -- # [[ 0 -eq 1 ]] 00:04:49.697 22:21:13 json_config -- json_config/json_config.sh@201 -- # timing_exit cleanup_bdev_subsystem_config 00:04:49.697 22:21:13 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:49.697 22:21:13 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:49.697 22:21:13 json_config -- json_config/json_config.sh@323 -- # killprocess 4018076 00:04:49.697 22:21:13 json_config -- common/autotest_common.sh@948 -- # '[' -z 4018076 ']' 00:04:49.697 22:21:13 json_config -- common/autotest_common.sh@952 -- # kill -0 4018076 00:04:49.697 22:21:13 json_config -- common/autotest_common.sh@953 -- # uname 00:04:49.955 22:21:13 json_config -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:49.955 22:21:13 json_config -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4018076 00:04:49.955 22:21:13 json_config -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:49.955 22:21:13 json_config -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:49.955 22:21:13 json_config -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4018076' 00:04:49.955 killing process with pid 4018076 00:04:49.955 22:21:13 json_config -- common/autotest_common.sh@967 -- # kill 4018076 00:04:49.955 22:21:13 json_config -- common/autotest_common.sh@972 -- # wait 4018076 00:04:51.331 22:21:15 json_config -- json_config/json_config.sh@326 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:51.331 22:21:15 json_config -- json_config/json_config.sh@327 -- # timing_exit json_config_test_fini 00:04:51.331 22:21:15 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:51.331 22:21:15 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:51.332 22:21:15 json_config -- json_config/json_config.sh@328 -- # return 0 00:04:51.332 22:21:15 json_config -- json_config/json_config.sh@396 -- # echo 'INFO: Success' 00:04:51.332 INFO: Success 00:04:51.332 00:04:51.332 real 0m15.122s 
00:04:51.332 user 0m15.688s 00:04:51.332 sys 0m2.032s 00:04:51.332 22:21:15 json_config -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:51.332 22:21:15 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:51.332 ************************************ 00:04:51.332 END TEST json_config 00:04:51.332 ************************************ 00:04:51.332 22:21:15 -- common/autotest_common.sh@1142 -- # return 0 00:04:51.332 22:21:15 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:04:51.332 22:21:15 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:51.332 22:21:15 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:51.332 22:21:15 -- common/autotest_common.sh@10 -- # set +x 00:04:51.332 ************************************ 00:04:51.332 START TEST json_config_extra_key 00:04:51.332 ************************************ 00:04:51.332 22:21:15 json_config_extra_key -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:04:51.590 22:21:15 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:51.590 22:21:15 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:04:51.590 22:21:15 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:51.590 22:21:15 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:51.590 22:21:15 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:51.590 22:21:15 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:51.590 22:21:15 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:51.590 22:21:15 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:51.590 22:21:15 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:51.590 22:21:15 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:51.590 22:21:15 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:51.590 22:21:15 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:51.590 22:21:15 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:04:51.590 22:21:15 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:04:51.590 22:21:15 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:51.590 22:21:15 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:51.590 22:21:15 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:51.590 22:21:15 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:51.590 22:21:15 json_config_extra_key -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:51.590 22:21:15 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:51.590 22:21:15 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:51.590 22:21:15 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:51.591 22:21:15 json_config_extra_key -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:51.591 22:21:15 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:51.591 22:21:15 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:51.591 22:21:15 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:04:51.591 22:21:15 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:51.591 22:21:15 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:04:51.591 22:21:15 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:04:51.591 22:21:15 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:04:51.591 22:21:15 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:51.591 22:21:15 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:51.591 22:21:15 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:51.591 22:21:15 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:04:51.591 22:21:15 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:04:51.591 22:21:15 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:04:51.591 22:21:15 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:04:51.591 22:21:15 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:04:51.591 22:21:15 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:04:51.591 22:21:15 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:04:51.591 22:21:15 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:04:51.591 22:21:15 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:04:51.591 22:21:15 json_config_extra_key -- 
json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:04:51.591 22:21:15 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:04:51.591 22:21:15 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:04:51.591 22:21:15 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:51.591 22:21:15 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:04:51.591 INFO: launching applications... 00:04:51.591 22:21:15 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:04:51.591 22:21:15 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:04:51.591 22:21:15 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:04:51.591 22:21:15 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:51.591 22:21:15 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:51.591 22:21:15 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:04:51.591 22:21:15 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:51.591 22:21:15 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:51.591 22:21:15 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=4019576 00:04:51.591 22:21:15 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:51.591 Waiting for target to run... 00:04:51.591 22:21:15 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 4019576 /var/tmp/spdk_tgt.sock 00:04:51.591 22:21:15 json_config_extra_key -- common/autotest_common.sh@829 -- # '[' -z 4019576 ']' 00:04:51.591 22:21:15 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:04:51.591 22:21:15 json_config_extra_key -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:51.591 22:21:15 json_config_extra_key -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:51.591 22:21:15 json_config_extra_key -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:51.591 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:51.591 22:21:15 json_config_extra_key -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:51.591 22:21:15 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:51.591 [2024-07-15 22:21:15.421000] Starting SPDK v24.09-pre git sha1 f8598a71f / DPDK 24.03.0 initialization... 
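json_config_test_start_app, traced above for the extra_key suite, combines the per-app parameters declared in common.sh (app_params[target]='-m 0x1 -s 1024', app_socket[target]=/var/tmp/spdk_tgt.sock) with the caller's config flag, records the pid, and hands off to waitforlisten. A sketch under those assumptions ($SPDK_BIN_DIR stands in for the absolute build path shown in the log):

    json_config_test_start_app() {
        local app=$1; shift
        # app_params is deliberately unquoted so '-m 0x1 -s 1024' splits into words
        "$SPDK_BIN_DIR/spdk_tgt" ${app_params[$app]} -r "${app_socket[$app]}" "$@" &
        app_pid[$app]=$!
        echo "Waiting for $app to run..."
        waitforlisten "${app_pid[$app]}" "${app_socket[$app]}"
    }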
00:04:51.591 [2024-07-15 22:21:15.421049] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4019576 ] 00:04:51.591 EAL: No free 2048 kB hugepages reported on node 1 00:04:52.157 [2024-07-15 22:21:15.854816] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:52.157 [2024-07-15 22:21:15.940983] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:52.416 22:21:16 json_config_extra_key -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:52.416 22:21:16 json_config_extra_key -- common/autotest_common.sh@862 -- # return 0 00:04:52.416 22:21:16 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:04:52.416 00:04:52.416 22:21:16 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:04:52.416 INFO: shutting down applications... 00:04:52.416 22:21:16 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:04:52.416 22:21:16 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:04:52.416 22:21:16 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:52.416 22:21:16 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 4019576 ]] 00:04:52.416 22:21:16 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 4019576 00:04:52.416 22:21:16 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:52.416 22:21:16 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:52.416 22:21:16 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 4019576 00:04:52.416 22:21:16 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:52.983 22:21:16 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:52.983 22:21:16 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:52.983 22:21:16 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 4019576 00:04:52.983 22:21:16 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:52.983 22:21:16 json_config_extra_key -- json_config/common.sh@43 -- # break 00:04:52.983 22:21:16 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:52.983 22:21:16 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:52.983 SPDK target shutdown done 00:04:52.983 22:21:16 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:04:52.983 Success 00:04:52.983 00:04:52.983 real 0m1.430s 00:04:52.983 user 0m1.047s 00:04:52.983 sys 0m0.515s 00:04:52.983 22:21:16 json_config_extra_key -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:52.983 22:21:16 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:52.983 ************************************ 00:04:52.983 END TEST json_config_extra_key 00:04:52.983 ************************************ 00:04:52.983 22:21:16 -- common/autotest_common.sh@1142 -- # return 0 00:04:52.983 22:21:16 -- spdk/autotest.sh@174 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:52.983 22:21:16 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:52.983 22:21:16 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:52.983 22:21:16 -- 
common/autotest_common.sh@10 -- # set +x 00:04:52.983 ************************************ 00:04:52.983 START TEST alias_rpc 00:04:52.983 ************************************ 00:04:52.983 22:21:16 alias_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:52.983 * Looking for test storage... 00:04:52.983 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:04:52.983 22:21:16 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:04:52.983 22:21:16 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=4019860 00:04:52.983 22:21:16 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 4019860 00:04:52.983 22:21:16 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:52.983 22:21:16 alias_rpc -- common/autotest_common.sh@829 -- # '[' -z 4019860 ']' 00:04:52.983 22:21:16 alias_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:52.983 22:21:16 alias_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:52.983 22:21:16 alias_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:52.983 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:52.983 22:21:16 alias_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:52.983 22:21:16 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:52.983 [2024-07-15 22:21:16.904723] Starting SPDK v24.09-pre git sha1 f8598a71f / DPDK 24.03.0 initialization... 00:04:52.983 [2024-07-15 22:21:16.904772] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4019860 ] 00:04:52.983 EAL: No free 2048 kB hugepages reported on node 1 00:04:53.242 [2024-07-15 22:21:16.957878] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:53.242 [2024-07-15 22:21:17.037574] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:53.811 22:21:17 alias_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:53.811 22:21:17 alias_rpc -- common/autotest_common.sh@862 -- # return 0 00:04:53.811 22:21:17 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:04:54.070 22:21:17 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 4019860 00:04:54.070 22:21:17 alias_rpc -- common/autotest_common.sh@948 -- # '[' -z 4019860 ']' 00:04:54.070 22:21:17 alias_rpc -- common/autotest_common.sh@952 -- # kill -0 4019860 00:04:54.070 22:21:17 alias_rpc -- common/autotest_common.sh@953 -- # uname 00:04:54.070 22:21:17 alias_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:54.070 22:21:17 alias_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4019860 00:04:54.070 22:21:17 alias_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:54.070 22:21:17 alias_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:54.070 22:21:17 alias_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4019860' 00:04:54.070 killing process with pid 4019860 00:04:54.070 22:21:17 alias_rpc -- 
common/autotest_common.sh@967 -- # kill 4019860 00:04:54.070 22:21:17 alias_rpc -- common/autotest_common.sh@972 -- # wait 4019860 00:04:54.329 00:04:54.329 real 0m1.459s 00:04:54.329 user 0m1.597s 00:04:54.329 sys 0m0.383s 00:04:54.329 22:21:18 alias_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:54.329 22:21:18 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:54.329 ************************************ 00:04:54.329 END TEST alias_rpc 00:04:54.329 ************************************ 00:04:54.329 22:21:18 -- common/autotest_common.sh@1142 -- # return 0 00:04:54.329 22:21:18 -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]] 00:04:54.329 22:21:18 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:04:54.329 22:21:18 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:54.329 22:21:18 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:54.329 22:21:18 -- common/autotest_common.sh@10 -- # set +x 00:04:54.329 ************************************ 00:04:54.329 START TEST spdkcli_tcp 00:04:54.329 ************************************ 00:04:54.329 22:21:18 spdkcli_tcp -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:04:54.588 * Looking for test storage... 00:04:54.588 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:04:54.588 22:21:18 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:04:54.588 22:21:18 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:04:54.588 22:21:18 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:04:54.588 22:21:18 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:04:54.588 22:21:18 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:04:54.588 22:21:18 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:04:54.588 22:21:18 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:04:54.588 22:21:18 spdkcli_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:54.588 22:21:18 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:54.588 22:21:18 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=4020158 00:04:54.588 22:21:18 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:04:54.588 22:21:18 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 4020158 00:04:54.588 22:21:18 spdkcli_tcp -- common/autotest_common.sh@829 -- # '[' -z 4020158 ']' 00:04:54.588 22:21:18 spdkcli_tcp -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:54.588 22:21:18 spdkcli_tcp -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:54.588 22:21:18 spdkcli_tcp -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:54.588 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:54.588 22:21:18 spdkcli_tcp -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:54.588 22:21:18 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:54.588 [2024-07-15 22:21:18.406151] Starting SPDK v24.09-pre git sha1 f8598a71f / DPDK 24.03.0 initialization... 
00:04:54.588 [2024-07-15 22:21:18.406202] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4020158 ] 00:04:54.588 EAL: No free 2048 kB hugepages reported on node 1 00:04:54.588 [2024-07-15 22:21:18.460781] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:54.588 [2024-07-15 22:21:18.542354] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:54.588 [2024-07-15 22:21:18.542358] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:55.527 22:21:19 spdkcli_tcp -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:55.527 22:21:19 spdkcli_tcp -- common/autotest_common.sh@862 -- # return 0 00:04:55.527 22:21:19 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=4020374 00:04:55.527 22:21:19 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:04:55.527 22:21:19 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:04:55.527 [ 00:04:55.527 "bdev_malloc_delete", 00:04:55.527 "bdev_malloc_create", 00:04:55.527 "bdev_null_resize", 00:04:55.527 "bdev_null_delete", 00:04:55.527 "bdev_null_create", 00:04:55.527 "bdev_nvme_cuse_unregister", 00:04:55.527 "bdev_nvme_cuse_register", 00:04:55.527 "bdev_opal_new_user", 00:04:55.527 "bdev_opal_set_lock_state", 00:04:55.527 "bdev_opal_delete", 00:04:55.527 "bdev_opal_get_info", 00:04:55.527 "bdev_opal_create", 00:04:55.527 "bdev_nvme_opal_revert", 00:04:55.527 "bdev_nvme_opal_init", 00:04:55.527 "bdev_nvme_send_cmd", 00:04:55.527 "bdev_nvme_get_path_iostat", 00:04:55.527 "bdev_nvme_get_mdns_discovery_info", 00:04:55.527 "bdev_nvme_stop_mdns_discovery", 00:04:55.527 "bdev_nvme_start_mdns_discovery", 00:04:55.527 "bdev_nvme_set_multipath_policy", 00:04:55.527 "bdev_nvme_set_preferred_path", 00:04:55.527 "bdev_nvme_get_io_paths", 00:04:55.527 "bdev_nvme_remove_error_injection", 00:04:55.527 "bdev_nvme_add_error_injection", 00:04:55.527 "bdev_nvme_get_discovery_info", 00:04:55.527 "bdev_nvme_stop_discovery", 00:04:55.527 "bdev_nvme_start_discovery", 00:04:55.527 "bdev_nvme_get_controller_health_info", 00:04:55.527 "bdev_nvme_disable_controller", 00:04:55.527 "bdev_nvme_enable_controller", 00:04:55.527 "bdev_nvme_reset_controller", 00:04:55.527 "bdev_nvme_get_transport_statistics", 00:04:55.527 "bdev_nvme_apply_firmware", 00:04:55.527 "bdev_nvme_detach_controller", 00:04:55.527 "bdev_nvme_get_controllers", 00:04:55.527 "bdev_nvme_attach_controller", 00:04:55.527 "bdev_nvme_set_hotplug", 00:04:55.527 "bdev_nvme_set_options", 00:04:55.527 "bdev_passthru_delete", 00:04:55.527 "bdev_passthru_create", 00:04:55.527 "bdev_lvol_set_parent_bdev", 00:04:55.527 "bdev_lvol_set_parent", 00:04:55.527 "bdev_lvol_check_shallow_copy", 00:04:55.527 "bdev_lvol_start_shallow_copy", 00:04:55.527 "bdev_lvol_grow_lvstore", 00:04:55.527 "bdev_lvol_get_lvols", 00:04:55.527 "bdev_lvol_get_lvstores", 00:04:55.527 "bdev_lvol_delete", 00:04:55.527 "bdev_lvol_set_read_only", 00:04:55.527 "bdev_lvol_resize", 00:04:55.527 "bdev_lvol_decouple_parent", 00:04:55.527 "bdev_lvol_inflate", 00:04:55.527 "bdev_lvol_rename", 00:04:55.527 "bdev_lvol_clone_bdev", 00:04:55.527 "bdev_lvol_clone", 00:04:55.527 "bdev_lvol_snapshot", 00:04:55.527 "bdev_lvol_create", 00:04:55.527 "bdev_lvol_delete_lvstore", 00:04:55.527 
"bdev_lvol_rename_lvstore", 00:04:55.527 "bdev_lvol_create_lvstore", 00:04:55.527 "bdev_raid_set_options", 00:04:55.527 "bdev_raid_remove_base_bdev", 00:04:55.527 "bdev_raid_add_base_bdev", 00:04:55.527 "bdev_raid_delete", 00:04:55.527 "bdev_raid_create", 00:04:55.527 "bdev_raid_get_bdevs", 00:04:55.527 "bdev_error_inject_error", 00:04:55.527 "bdev_error_delete", 00:04:55.527 "bdev_error_create", 00:04:55.527 "bdev_split_delete", 00:04:55.527 "bdev_split_create", 00:04:55.527 "bdev_delay_delete", 00:04:55.527 "bdev_delay_create", 00:04:55.527 "bdev_delay_update_latency", 00:04:55.527 "bdev_zone_block_delete", 00:04:55.527 "bdev_zone_block_create", 00:04:55.527 "blobfs_create", 00:04:55.527 "blobfs_detect", 00:04:55.527 "blobfs_set_cache_size", 00:04:55.527 "bdev_aio_delete", 00:04:55.527 "bdev_aio_rescan", 00:04:55.527 "bdev_aio_create", 00:04:55.527 "bdev_ftl_set_property", 00:04:55.527 "bdev_ftl_get_properties", 00:04:55.527 "bdev_ftl_get_stats", 00:04:55.527 "bdev_ftl_unmap", 00:04:55.527 "bdev_ftl_unload", 00:04:55.527 "bdev_ftl_delete", 00:04:55.527 "bdev_ftl_load", 00:04:55.527 "bdev_ftl_create", 00:04:55.527 "bdev_virtio_attach_controller", 00:04:55.527 "bdev_virtio_scsi_get_devices", 00:04:55.527 "bdev_virtio_detach_controller", 00:04:55.527 "bdev_virtio_blk_set_hotplug", 00:04:55.527 "bdev_iscsi_delete", 00:04:55.527 "bdev_iscsi_create", 00:04:55.527 "bdev_iscsi_set_options", 00:04:55.527 "accel_error_inject_error", 00:04:55.527 "ioat_scan_accel_module", 00:04:55.527 "dsa_scan_accel_module", 00:04:55.527 "iaa_scan_accel_module", 00:04:55.527 "vfu_virtio_create_scsi_endpoint", 00:04:55.527 "vfu_virtio_scsi_remove_target", 00:04:55.527 "vfu_virtio_scsi_add_target", 00:04:55.527 "vfu_virtio_create_blk_endpoint", 00:04:55.527 "vfu_virtio_delete_endpoint", 00:04:55.527 "keyring_file_remove_key", 00:04:55.527 "keyring_file_add_key", 00:04:55.527 "keyring_linux_set_options", 00:04:55.527 "iscsi_get_histogram", 00:04:55.527 "iscsi_enable_histogram", 00:04:55.527 "iscsi_set_options", 00:04:55.527 "iscsi_get_auth_groups", 00:04:55.527 "iscsi_auth_group_remove_secret", 00:04:55.527 "iscsi_auth_group_add_secret", 00:04:55.527 "iscsi_delete_auth_group", 00:04:55.527 "iscsi_create_auth_group", 00:04:55.527 "iscsi_set_discovery_auth", 00:04:55.527 "iscsi_get_options", 00:04:55.528 "iscsi_target_node_request_logout", 00:04:55.528 "iscsi_target_node_set_redirect", 00:04:55.528 "iscsi_target_node_set_auth", 00:04:55.528 "iscsi_target_node_add_lun", 00:04:55.528 "iscsi_get_stats", 00:04:55.528 "iscsi_get_connections", 00:04:55.528 "iscsi_portal_group_set_auth", 00:04:55.528 "iscsi_start_portal_group", 00:04:55.528 "iscsi_delete_portal_group", 00:04:55.528 "iscsi_create_portal_group", 00:04:55.528 "iscsi_get_portal_groups", 00:04:55.528 "iscsi_delete_target_node", 00:04:55.528 "iscsi_target_node_remove_pg_ig_maps", 00:04:55.528 "iscsi_target_node_add_pg_ig_maps", 00:04:55.528 "iscsi_create_target_node", 00:04:55.528 "iscsi_get_target_nodes", 00:04:55.528 "iscsi_delete_initiator_group", 00:04:55.528 "iscsi_initiator_group_remove_initiators", 00:04:55.528 "iscsi_initiator_group_add_initiators", 00:04:55.528 "iscsi_create_initiator_group", 00:04:55.528 "iscsi_get_initiator_groups", 00:04:55.528 "nvmf_set_crdt", 00:04:55.528 "nvmf_set_config", 00:04:55.528 "nvmf_set_max_subsystems", 00:04:55.528 "nvmf_stop_mdns_prr", 00:04:55.528 "nvmf_publish_mdns_prr", 00:04:55.528 "nvmf_subsystem_get_listeners", 00:04:55.528 "nvmf_subsystem_get_qpairs", 00:04:55.528 "nvmf_subsystem_get_controllers", 00:04:55.528 
"nvmf_get_stats", 00:04:55.528 "nvmf_get_transports", 00:04:55.528 "nvmf_create_transport", 00:04:55.528 "nvmf_get_targets", 00:04:55.528 "nvmf_delete_target", 00:04:55.528 "nvmf_create_target", 00:04:55.528 "nvmf_subsystem_allow_any_host", 00:04:55.528 "nvmf_subsystem_remove_host", 00:04:55.528 "nvmf_subsystem_add_host", 00:04:55.528 "nvmf_ns_remove_host", 00:04:55.528 "nvmf_ns_add_host", 00:04:55.528 "nvmf_subsystem_remove_ns", 00:04:55.528 "nvmf_subsystem_add_ns", 00:04:55.528 "nvmf_subsystem_listener_set_ana_state", 00:04:55.528 "nvmf_discovery_get_referrals", 00:04:55.528 "nvmf_discovery_remove_referral", 00:04:55.528 "nvmf_discovery_add_referral", 00:04:55.528 "nvmf_subsystem_remove_listener", 00:04:55.528 "nvmf_subsystem_add_listener", 00:04:55.528 "nvmf_delete_subsystem", 00:04:55.528 "nvmf_create_subsystem", 00:04:55.528 "nvmf_get_subsystems", 00:04:55.528 "env_dpdk_get_mem_stats", 00:04:55.528 "nbd_get_disks", 00:04:55.528 "nbd_stop_disk", 00:04:55.528 "nbd_start_disk", 00:04:55.528 "ublk_recover_disk", 00:04:55.528 "ublk_get_disks", 00:04:55.528 "ublk_stop_disk", 00:04:55.528 "ublk_start_disk", 00:04:55.528 "ublk_destroy_target", 00:04:55.528 "ublk_create_target", 00:04:55.528 "virtio_blk_create_transport", 00:04:55.528 "virtio_blk_get_transports", 00:04:55.528 "vhost_controller_set_coalescing", 00:04:55.528 "vhost_get_controllers", 00:04:55.528 "vhost_delete_controller", 00:04:55.528 "vhost_create_blk_controller", 00:04:55.528 "vhost_scsi_controller_remove_target", 00:04:55.528 "vhost_scsi_controller_add_target", 00:04:55.528 "vhost_start_scsi_controller", 00:04:55.528 "vhost_create_scsi_controller", 00:04:55.528 "thread_set_cpumask", 00:04:55.528 "framework_get_governor", 00:04:55.528 "framework_get_scheduler", 00:04:55.528 "framework_set_scheduler", 00:04:55.528 "framework_get_reactors", 00:04:55.528 "thread_get_io_channels", 00:04:55.528 "thread_get_pollers", 00:04:55.528 "thread_get_stats", 00:04:55.528 "framework_monitor_context_switch", 00:04:55.528 "spdk_kill_instance", 00:04:55.528 "log_enable_timestamps", 00:04:55.528 "log_get_flags", 00:04:55.528 "log_clear_flag", 00:04:55.528 "log_set_flag", 00:04:55.528 "log_get_level", 00:04:55.528 "log_set_level", 00:04:55.528 "log_get_print_level", 00:04:55.528 "log_set_print_level", 00:04:55.528 "framework_enable_cpumask_locks", 00:04:55.528 "framework_disable_cpumask_locks", 00:04:55.528 "framework_wait_init", 00:04:55.528 "framework_start_init", 00:04:55.528 "scsi_get_devices", 00:04:55.528 "bdev_get_histogram", 00:04:55.528 "bdev_enable_histogram", 00:04:55.528 "bdev_set_qos_limit", 00:04:55.528 "bdev_set_qd_sampling_period", 00:04:55.528 "bdev_get_bdevs", 00:04:55.528 "bdev_reset_iostat", 00:04:55.528 "bdev_get_iostat", 00:04:55.528 "bdev_examine", 00:04:55.528 "bdev_wait_for_examine", 00:04:55.528 "bdev_set_options", 00:04:55.528 "notify_get_notifications", 00:04:55.528 "notify_get_types", 00:04:55.528 "accel_get_stats", 00:04:55.528 "accel_set_options", 00:04:55.528 "accel_set_driver", 00:04:55.528 "accel_crypto_key_destroy", 00:04:55.528 "accel_crypto_keys_get", 00:04:55.528 "accel_crypto_key_create", 00:04:55.528 "accel_assign_opc", 00:04:55.528 "accel_get_module_info", 00:04:55.528 "accel_get_opc_assignments", 00:04:55.528 "vmd_rescan", 00:04:55.528 "vmd_remove_device", 00:04:55.528 "vmd_enable", 00:04:55.528 "sock_get_default_impl", 00:04:55.528 "sock_set_default_impl", 00:04:55.528 "sock_impl_set_options", 00:04:55.528 "sock_impl_get_options", 00:04:55.528 "iobuf_get_stats", 00:04:55.528 "iobuf_set_options", 
00:04:55.528 "keyring_get_keys", 00:04:55.528 "framework_get_pci_devices", 00:04:55.528 "framework_get_config", 00:04:55.528 "framework_get_subsystems", 00:04:55.528 "vfu_tgt_set_base_path", 00:04:55.528 "trace_get_info", 00:04:55.528 "trace_get_tpoint_group_mask", 00:04:55.528 "trace_disable_tpoint_group", 00:04:55.528 "trace_enable_tpoint_group", 00:04:55.528 "trace_clear_tpoint_mask", 00:04:55.528 "trace_set_tpoint_mask", 00:04:55.528 "spdk_get_version", 00:04:55.528 "rpc_get_methods" 00:04:55.528 ] 00:04:55.528 22:21:19 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:04:55.528 22:21:19 spdkcli_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:55.528 22:21:19 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:55.528 22:21:19 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:04:55.528 22:21:19 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 4020158 00:04:55.528 22:21:19 spdkcli_tcp -- common/autotest_common.sh@948 -- # '[' -z 4020158 ']' 00:04:55.528 22:21:19 spdkcli_tcp -- common/autotest_common.sh@952 -- # kill -0 4020158 00:04:55.528 22:21:19 spdkcli_tcp -- common/autotest_common.sh@953 -- # uname 00:04:55.528 22:21:19 spdkcli_tcp -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:55.528 22:21:19 spdkcli_tcp -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4020158 00:04:55.528 22:21:19 spdkcli_tcp -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:55.528 22:21:19 spdkcli_tcp -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:55.528 22:21:19 spdkcli_tcp -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4020158' 00:04:55.528 killing process with pid 4020158 00:04:55.528 22:21:19 spdkcli_tcp -- common/autotest_common.sh@967 -- # kill 4020158 00:04:55.528 22:21:19 spdkcli_tcp -- common/autotest_common.sh@972 -- # wait 4020158 00:04:56.097 00:04:56.097 real 0m1.486s 00:04:56.097 user 0m2.793s 00:04:56.097 sys 0m0.421s 00:04:56.097 22:21:19 spdkcli_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:56.097 22:21:19 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:56.097 ************************************ 00:04:56.097 END TEST spdkcli_tcp 00:04:56.097 ************************************ 00:04:56.097 22:21:19 -- common/autotest_common.sh@1142 -- # return 0 00:04:56.097 22:21:19 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:56.097 22:21:19 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:56.097 22:21:19 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:56.097 22:21:19 -- common/autotest_common.sh@10 -- # set +x 00:04:56.097 ************************************ 00:04:56.097 START TEST dpdk_mem_utility 00:04:56.098 ************************************ 00:04:56.098 22:21:19 dpdk_mem_utility -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:56.098 * Looking for test storage... 
00:04:56.098 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:04:56.098 22:21:19 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:04:56.098 22:21:19 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=4020583 00:04:56.098 22:21:19 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 4020583 00:04:56.098 22:21:19 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:56.098 22:21:19 dpdk_mem_utility -- common/autotest_common.sh@829 -- # '[' -z 4020583 ']' 00:04:56.098 22:21:19 dpdk_mem_utility -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:56.098 22:21:19 dpdk_mem_utility -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:56.098 22:21:19 dpdk_mem_utility -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:56.098 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:56.098 22:21:19 dpdk_mem_utility -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:56.098 22:21:19 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:56.098 [2024-07-15 22:21:19.973914] Starting SPDK v24.09-pre git sha1 f8598a71f / DPDK 24.03.0 initialization... 00:04:56.098 [2024-07-15 22:21:19.973961] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4020583 ] 00:04:56.098 EAL: No free 2048 kB hugepages reported on node 1 00:04:56.098 [2024-07-15 22:21:20.030310] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:56.357 [2024-07-15 22:21:20.108907] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:56.926 22:21:20 dpdk_mem_utility -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:56.926 22:21:20 dpdk_mem_utility -- common/autotest_common.sh@862 -- # return 0 00:04:56.926 22:21:20 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:04:56.926 22:21:20 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:04:56.926 22:21:20 dpdk_mem_utility -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:56.926 22:21:20 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:56.926 { 00:04:56.926 "filename": "/tmp/spdk_mem_dump.txt" 00:04:56.926 } 00:04:56.926 22:21:20 dpdk_mem_utility -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:56.926 22:21:20 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:04:56.926 DPDK memory size 814.000000 MiB in 1 heap(s) 00:04:56.926 1 heaps totaling size 814.000000 MiB 00:04:56.926 size: 814.000000 MiB heap id: 0 00:04:56.926 end heaps---------- 00:04:56.926 8 mempools totaling size 598.116089 MiB 00:04:56.926 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:04:56.926 size: 158.602051 MiB name: PDU_data_out_Pool 00:04:56.926 size: 84.521057 MiB name: bdev_io_4020583 00:04:56.926 size: 51.011292 MiB name: evtpool_4020583 00:04:56.926 
size: 50.003479 MiB name: msgpool_4020583 00:04:56.926 size: 21.763794 MiB name: PDU_Pool 00:04:56.926 size: 19.513306 MiB name: SCSI_TASK_Pool 00:04:56.926 size: 0.026123 MiB name: Session_Pool 00:04:56.926 end mempools------- 00:04:56.926 6 memzones totaling size 4.142822 MiB 00:04:56.926 size: 1.000366 MiB name: RG_ring_0_4020583 00:04:56.926 size: 1.000366 MiB name: RG_ring_1_4020583 00:04:56.926 size: 1.000366 MiB name: RG_ring_4_4020583 00:04:56.926 size: 1.000366 MiB name: RG_ring_5_4020583 00:04:56.926 size: 0.125366 MiB name: RG_ring_2_4020583 00:04:56.926 size: 0.015991 MiB name: RG_ring_3_4020583 00:04:56.926 end memzones------- 00:04:56.926 22:21:20 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:04:56.926 heap id: 0 total size: 814.000000 MiB number of busy elements: 41 number of free elements: 15 00:04:56.926 list of free elements. size: 12.519348 MiB 00:04:56.926 element at address: 0x200000400000 with size: 1.999512 MiB 00:04:56.926 element at address: 0x200018e00000 with size: 0.999878 MiB 00:04:56.926 element at address: 0x200019000000 with size: 0.999878 MiB 00:04:56.926 element at address: 0x200003e00000 with size: 0.996277 MiB 00:04:56.926 element at address: 0x200031c00000 with size: 0.994446 MiB 00:04:56.926 element at address: 0x200013800000 with size: 0.978699 MiB 00:04:56.926 element at address: 0x200007000000 with size: 0.959839 MiB 00:04:56.926 element at address: 0x200019200000 with size: 0.936584 MiB 00:04:56.926 element at address: 0x200000200000 with size: 0.841614 MiB 00:04:56.926 element at address: 0x20001aa00000 with size: 0.582886 MiB 00:04:56.926 element at address: 0x20000b200000 with size: 0.490723 MiB 00:04:56.926 element at address: 0x200000800000 with size: 0.487793 MiB 00:04:56.926 element at address: 0x200019400000 with size: 0.485657 MiB 00:04:56.926 element at address: 0x200027e00000 with size: 0.410034 MiB 00:04:56.926 element at address: 0x200003a00000 with size: 0.355530 MiB 00:04:56.926 list of standard malloc elements. 
size: 199.218079 MiB 00:04:56.926 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:04:56.926 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:04:56.926 element at address: 0x200018efff80 with size: 1.000122 MiB 00:04:56.926 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:04:56.926 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:04:56.926 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:04:56.926 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:04:56.926 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:04:56.926 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:04:56.926 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:04:56.926 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:04:56.926 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:04:56.926 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:04:56.926 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:04:56.926 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:04:56.926 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:04:56.926 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:04:56.926 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:04:56.926 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:04:56.926 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:04:56.926 element at address: 0x200003adb300 with size: 0.000183 MiB 00:04:56.926 element at address: 0x200003adb500 with size: 0.000183 MiB 00:04:56.926 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:04:56.926 element at address: 0x200003affa80 with size: 0.000183 MiB 00:04:56.926 element at address: 0x200003affb40 with size: 0.000183 MiB 00:04:56.926 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:04:56.926 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:04:56.926 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:04:56.926 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:04:56.926 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:04:56.926 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:04:56.926 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:04:56.926 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:04:56.926 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:04:56.926 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:04:56.926 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:04:56.926 element at address: 0x200027e68f80 with size: 0.000183 MiB 00:04:56.926 element at address: 0x200027e69040 with size: 0.000183 MiB 00:04:56.926 element at address: 0x200027e6fc40 with size: 0.000183 MiB 00:04:56.926 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:04:56.926 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:04:56.926 list of memzone associated elements. 
size: 602.262573 MiB 00:04:56.926 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:04:56.926 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:04:56.926 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:04:56.926 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:04:56.926 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:04:56.926 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_4020583_0 00:04:56.926 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:04:56.926 associated memzone info: size: 48.002930 MiB name: MP_evtpool_4020583_0 00:04:56.926 element at address: 0x200003fff380 with size: 48.003052 MiB 00:04:56.926 associated memzone info: size: 48.002930 MiB name: MP_msgpool_4020583_0 00:04:56.926 element at address: 0x2000195be940 with size: 20.255554 MiB 00:04:56.926 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:04:56.926 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:04:56.926 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:04:56.926 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:04:56.926 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_4020583 00:04:56.926 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:04:56.926 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_4020583 00:04:56.926 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:04:56.926 associated memzone info: size: 1.007996 MiB name: MP_evtpool_4020583 00:04:56.926 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:04:56.926 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:04:56.926 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:04:56.926 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:04:56.926 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:04:56.926 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:04:56.926 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:04:56.926 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:04:56.926 element at address: 0x200003eff180 with size: 1.000488 MiB 00:04:56.926 associated memzone info: size: 1.000366 MiB name: RG_ring_0_4020583 00:04:56.926 element at address: 0x200003affc00 with size: 1.000488 MiB 00:04:56.926 associated memzone info: size: 1.000366 MiB name: RG_ring_1_4020583 00:04:56.926 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:04:56.926 associated memzone info: size: 1.000366 MiB name: RG_ring_4_4020583 00:04:56.926 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:04:56.926 associated memzone info: size: 1.000366 MiB name: RG_ring_5_4020583 00:04:56.926 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:04:56.926 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_4020583 00:04:56.926 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:04:56.926 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:04:56.926 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:04:56.926 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:04:56.926 element at address: 0x20001947c540 with size: 0.250488 MiB 00:04:56.926 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:04:56.926 element at address: 0x200003adf880 with size: 0.125488 MiB 00:04:56.926 associated 
memzone info: size: 0.125366 MiB name: RG_ring_2_4020583 00:04:56.926 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:04:56.926 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:04:56.926 element at address: 0x200027e69100 with size: 0.023743 MiB 00:04:56.926 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:04:56.926 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:04:56.926 associated memzone info: size: 0.015991 MiB name: RG_ring_3_4020583 00:04:56.926 element at address: 0x200027e6f240 with size: 0.002441 MiB 00:04:56.926 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:04:56.926 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:04:56.926 associated memzone info: size: 0.000183 MiB name: MP_msgpool_4020583 00:04:56.926 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:04:56.926 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_4020583 00:04:56.926 element at address: 0x200027e6fd00 with size: 0.000305 MiB 00:04:56.926 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:04:56.926 22:21:20 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:04:56.926 22:21:20 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 4020583 00:04:56.926 22:21:20 dpdk_mem_utility -- common/autotest_common.sh@948 -- # '[' -z 4020583 ']' 00:04:56.926 22:21:20 dpdk_mem_utility -- common/autotest_common.sh@952 -- # kill -0 4020583 00:04:56.926 22:21:20 dpdk_mem_utility -- common/autotest_common.sh@953 -- # uname 00:04:56.926 22:21:20 dpdk_mem_utility -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:56.926 22:21:20 dpdk_mem_utility -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4020583 00:04:57.185 22:21:20 dpdk_mem_utility -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:57.185 22:21:20 dpdk_mem_utility -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:57.185 22:21:20 dpdk_mem_utility -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4020583' 00:04:57.185 killing process with pid 4020583 00:04:57.185 22:21:20 dpdk_mem_utility -- common/autotest_common.sh@967 -- # kill 4020583 00:04:57.185 22:21:20 dpdk_mem_utility -- common/autotest_common.sh@972 -- # wait 4020583 00:04:57.452 00:04:57.452 real 0m1.407s 00:04:57.452 user 0m1.506s 00:04:57.452 sys 0m0.383s 00:04:57.452 22:21:21 dpdk_mem_utility -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:57.452 22:21:21 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:57.452 ************************************ 00:04:57.452 END TEST dpdk_mem_utility 00:04:57.452 ************************************ 00:04:57.452 22:21:21 -- common/autotest_common.sh@1142 -- # return 0 00:04:57.452 22:21:21 -- spdk/autotest.sh@181 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:04:57.452 22:21:21 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:57.452 22:21:21 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:57.452 22:21:21 -- common/autotest_common.sh@10 -- # set +x 00:04:57.452 ************************************ 00:04:57.452 START TEST event 00:04:57.452 ************************************ 00:04:57.452 22:21:21 event -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:04:57.452 * Looking for test storage... 
00:04:57.452 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:04:57.452 22:21:21 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:04:57.452 22:21:21 event -- bdev/nbd_common.sh@6 -- # set -e 00:04:57.452 22:21:21 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:57.452 22:21:21 event -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:04:57.452 22:21:21 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:57.452 22:21:21 event -- common/autotest_common.sh@10 -- # set +x 00:04:57.748 ************************************ 00:04:57.748 START TEST event_perf 00:04:57.748 ************************************ 00:04:57.748 22:21:21 event.event_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:57.748 Running I/O for 1 seconds...[2024-07-15 22:21:21.446132] Starting SPDK v24.09-pre git sha1 f8598a71f / DPDK 24.03.0 initialization... 00:04:57.748 [2024-07-15 22:21:21.446190] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4020950 ] 00:04:57.748 EAL: No free 2048 kB hugepages reported on node 1 00:04:57.748 [2024-07-15 22:21:21.504525] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:57.748 [2024-07-15 22:21:21.580357] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:57.748 [2024-07-15 22:21:21.580451] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:04:57.748 [2024-07-15 22:21:21.580558] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:04:57.748 [2024-07-15 22:21:21.580560] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:58.687 Running I/O for 1 seconds... 00:04:58.687 lcore 0: 210280 00:04:58.687 lcore 1: 210281 00:04:58.687 lcore 2: 210279 00:04:58.687 lcore 3: 210279 00:04:58.687 done. 00:04:58.687 00:04:58.687 real 0m1.226s 00:04:58.687 user 0m4.145s 00:04:58.687 sys 0m0.080s 00:04:58.687 22:21:22 event.event_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:58.687 22:21:22 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:04:58.687 ************************************ 00:04:58.687 END TEST event_perf 00:04:58.687 ************************************ 00:04:58.946 22:21:22 event -- common/autotest_common.sh@1142 -- # return 0 00:04:58.946 22:21:22 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:04:58.946 22:21:22 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:04:58.946 22:21:22 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:58.946 22:21:22 event -- common/autotest_common.sh@10 -- # set +x 00:04:58.946 ************************************ 00:04:58.946 START TEST event_reactor 00:04:58.946 ************************************ 00:04:58.946 22:21:22 event.event_reactor -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:04:58.946 [2024-07-15 22:21:22.739311] Starting SPDK v24.09-pre git sha1 f8598a71f / DPDK 24.03.0 initialization... 
00:04:58.946 [2024-07-15 22:21:22.739386] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4021170 ] 00:04:58.946 EAL: No free 2048 kB hugepages reported on node 1 00:04:58.946 [2024-07-15 22:21:22.797761] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:58.946 [2024-07-15 22:21:22.869621] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:00.319 test_start 00:05:00.319 oneshot 00:05:00.319 tick 100 00:05:00.319 tick 100 00:05:00.319 tick 250 00:05:00.319 tick 100 00:05:00.319 tick 100 00:05:00.319 tick 100 00:05:00.319 tick 250 00:05:00.319 tick 500 00:05:00.319 tick 100 00:05:00.319 tick 100 00:05:00.319 tick 250 00:05:00.319 tick 100 00:05:00.319 tick 100 00:05:00.319 test_end 00:05:00.319 00:05:00.319 real 0m1.219s 00:05:00.319 user 0m1.147s 00:05:00.319 sys 0m0.068s 00:05:00.319 22:21:23 event.event_reactor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:00.319 22:21:23 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:05:00.319 ************************************ 00:05:00.319 END TEST event_reactor 00:05:00.319 ************************************ 00:05:00.319 22:21:23 event -- common/autotest_common.sh@1142 -- # return 0 00:05:00.319 22:21:23 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:00.319 22:21:23 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:05:00.319 22:21:23 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:00.319 22:21:23 event -- common/autotest_common.sh@10 -- # set +x 00:05:00.319 ************************************ 00:05:00.319 START TEST event_reactor_perf 00:05:00.319 ************************************ 00:05:00.319 22:21:24 event.event_reactor_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:00.319 [2024-07-15 22:21:24.024668] Starting SPDK v24.09-pre git sha1 f8598a71f / DPDK 24.03.0 initialization... 
00:05:00.319 [2024-07-15 22:21:24.024735] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4021366 ] 00:05:00.319 EAL: No free 2048 kB hugepages reported on node 1 00:05:00.319 [2024-07-15 22:21:24.080860] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:00.319 [2024-07-15 22:21:24.153310] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:01.254 test_start 00:05:01.254 test_end 00:05:01.254 Performance: 508776 events per second 00:05:01.254 00:05:01.254 real 0m1.216s 00:05:01.254 user 0m1.144s 00:05:01.254 sys 0m0.069s 00:05:01.254 22:21:25 event.event_reactor_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:01.254 22:21:25 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:05:01.254 ************************************ 00:05:01.254 END TEST event_reactor_perf 00:05:01.254 ************************************ 00:05:01.513 22:21:25 event -- common/autotest_common.sh@1142 -- # return 0 00:05:01.513 22:21:25 event -- event/event.sh@49 -- # uname -s 00:05:01.513 22:21:25 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:01.513 22:21:25 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:01.513 22:21:25 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:01.513 22:21:25 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:01.513 22:21:25 event -- common/autotest_common.sh@10 -- # set +x 00:05:01.513 ************************************ 00:05:01.513 START TEST event_scheduler 00:05:01.513 ************************************ 00:05:01.513 22:21:25 event.event_scheduler -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:01.513 * Looking for test storage... 00:05:01.513 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:05:01.513 22:21:25 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:01.513 22:21:25 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=4021634 00:05:01.513 22:21:25 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:01.513 22:21:25 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:01.513 22:21:25 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 4021634 00:05:01.513 22:21:25 event.event_scheduler -- common/autotest_common.sh@829 -- # '[' -z 4021634 ']' 00:05:01.513 22:21:25 event.event_scheduler -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:01.513 22:21:25 event.event_scheduler -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:01.513 22:21:25 event.event_scheduler -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:01.513 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:01.513 22:21:25 event.event_scheduler -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:01.513 22:21:25 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:01.513 [2024-07-15 22:21:25.417313] Starting SPDK v24.09-pre git sha1 f8598a71f / DPDK 24.03.0 initialization... 00:05:01.513 [2024-07-15 22:21:25.417365] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4021634 ] 00:05:01.513 EAL: No free 2048 kB hugepages reported on node 1 00:05:01.513 [2024-07-15 22:21:25.469516] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:01.771 [2024-07-15 22:21:25.549766] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:01.771 [2024-07-15 22:21:25.549864] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:01.771 [2024-07-15 22:21:25.549947] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:01.771 [2024-07-15 22:21:25.549949] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:02.338 22:21:26 event.event_scheduler -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:02.338 22:21:26 event.event_scheduler -- common/autotest_common.sh@862 -- # return 0 00:05:02.338 22:21:26 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:02.338 22:21:26 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:02.338 22:21:26 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:02.338 [2024-07-15 22:21:26.228312] dpdk_governor.c: 173:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:05:02.338 [2024-07-15 22:21:26.228331] scheduler_dynamic.c: 270:init: *NOTICE*: Unable to initialize dpdk governor 00:05:02.338 [2024-07-15 22:21:26.228340] scheduler_dynamic.c: 416:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:02.338 [2024-07-15 22:21:26.228345] scheduler_dynamic.c: 418:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:02.338 [2024-07-15 22:21:26.228350] scheduler_dynamic.c: 420:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:02.338 22:21:26 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:02.338 22:21:26 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:02.338 22:21:26 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:02.338 22:21:26 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:02.338 [2024-07-15 22:21:26.300691] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
00:05:02.338 22:21:26 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:02.338 22:21:26 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:02.338 22:21:26 event.event_scheduler -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:02.338 22:21:26 event.event_scheduler -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:02.338 22:21:26 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:02.597 ************************************ 00:05:02.597 START TEST scheduler_create_thread 00:05:02.597 ************************************ 00:05:02.597 22:21:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1123 -- # scheduler_create_thread 00:05:02.597 22:21:26 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:02.597 22:21:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:02.597 22:21:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:02.597 2 00:05:02.597 22:21:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:02.597 22:21:26 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:02.597 22:21:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:02.597 22:21:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:02.597 3 00:05:02.597 22:21:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:02.597 22:21:26 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:02.597 22:21:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:02.597 22:21:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:02.597 4 00:05:02.597 22:21:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:02.597 22:21:26 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:02.597 22:21:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:02.597 22:21:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:02.597 5 00:05:02.597 22:21:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:02.597 22:21:26 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:02.597 22:21:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:02.597 22:21:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:02.597 6 00:05:02.597 22:21:26 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:02.597 22:21:26 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:02.597 22:21:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:02.597 22:21:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:02.597 7 00:05:02.597 22:21:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:02.597 22:21:26 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:02.597 22:21:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:02.597 22:21:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:02.597 8 00:05:02.597 22:21:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:02.597 22:21:26 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:02.597 22:21:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:02.597 22:21:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:02.597 9 00:05:02.597 22:21:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:02.597 22:21:26 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:02.597 22:21:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:02.598 22:21:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:02.598 10 00:05:02.598 22:21:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:02.598 22:21:26 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:02.598 22:21:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:02.598 22:21:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:02.598 22:21:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:02.598 22:21:26 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:02.598 22:21:26 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:02.598 22:21:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:02.598 22:21:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:02.598 22:21:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:02.598 22:21:26 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:02.598 22:21:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:02.598 22:21:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:03.975 22:21:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:03.975 22:21:27 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:03.975 22:21:27 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:03.975 22:21:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:03.975 22:21:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:05.349 22:21:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:05.349 00:05:05.349 real 0m2.619s 00:05:05.349 user 0m0.027s 00:05:05.349 sys 0m0.001s 00:05:05.349 22:21:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:05.349 22:21:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:05.349 ************************************ 00:05:05.349 END TEST scheduler_create_thread 00:05:05.349 ************************************ 00:05:05.349 22:21:28 event.event_scheduler -- common/autotest_common.sh@1142 -- # return 0 00:05:05.349 22:21:28 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:05.349 22:21:28 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 4021634 00:05:05.349 22:21:28 event.event_scheduler -- common/autotest_common.sh@948 -- # '[' -z 4021634 ']' 00:05:05.349 22:21:28 event.event_scheduler -- common/autotest_common.sh@952 -- # kill -0 4021634 00:05:05.349 22:21:28 event.event_scheduler -- common/autotest_common.sh@953 -- # uname 00:05:05.349 22:21:28 event.event_scheduler -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:05.349 22:21:28 event.event_scheduler -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4021634 00:05:05.349 22:21:29 event.event_scheduler -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:05:05.349 22:21:29 event.event_scheduler -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:05:05.349 22:21:29 event.event_scheduler -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4021634' 00:05:05.349 killing process with pid 4021634 00:05:05.349 22:21:29 event.event_scheduler -- common/autotest_common.sh@967 -- # kill 4021634 00:05:05.349 22:21:29 event.event_scheduler -- common/autotest_common.sh@972 -- # wait 4021634 00:05:05.609 [2024-07-15 22:21:29.434970] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
00:05:05.867 00:05:05.867 real 0m4.344s 00:05:05.867 user 0m8.208s 00:05:05.867 sys 0m0.357s 00:05:05.867 22:21:29 event.event_scheduler -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:05.867 22:21:29 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:05.867 ************************************ 00:05:05.867 END TEST event_scheduler 00:05:05.867 ************************************ 00:05:05.867 22:21:29 event -- common/autotest_common.sh@1142 -- # return 0 00:05:05.867 22:21:29 event -- event/event.sh@51 -- # modprobe -n nbd 00:05:05.867 22:21:29 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:05.867 22:21:29 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:05.867 22:21:29 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:05.867 22:21:29 event -- common/autotest_common.sh@10 -- # set +x 00:05:05.867 ************************************ 00:05:05.867 START TEST app_repeat 00:05:05.867 ************************************ 00:05:05.867 22:21:29 event.app_repeat -- common/autotest_common.sh@1123 -- # app_repeat_test 00:05:05.867 22:21:29 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:05.867 22:21:29 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:05.867 22:21:29 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:05:05.867 22:21:29 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:05.867 22:21:29 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:05:05.867 22:21:29 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:05:05.867 22:21:29 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:05:05.867 22:21:29 event.app_repeat -- event/event.sh@19 -- # repeat_pid=4022475 00:05:05.867 22:21:29 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:05.867 22:21:29 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:05.867 22:21:29 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 4022475' 00:05:05.867 Process app_repeat pid: 4022475 00:05:05.867 22:21:29 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:05.867 22:21:29 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:05.867 spdk_app_start Round 0 00:05:05.867 22:21:29 event.app_repeat -- event/event.sh@25 -- # waitforlisten 4022475 /var/tmp/spdk-nbd.sock 00:05:05.867 22:21:29 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 4022475 ']' 00:05:05.867 22:21:29 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:05.867 22:21:29 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:05.867 22:21:29 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:05.867 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:05.867 22:21:29 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:05.867 22:21:29 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:05.867 [2024-07-15 22:21:29.736671] Starting SPDK v24.09-pre git sha1 f8598a71f / DPDK 24.03.0 initialization... 
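app_repeat is launched in the background with -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4, and the harness blocks in waitforlisten until that RPC socket answers. The real helper in autotest_common.sh also tracks the pid and caps the retries; a simplified stand-in for the waiting part, assuming only that the app eventually listens on the given UNIX socket (the retry delay is a placeholder):

  sock=/var/tmp/spdk-nbd.sock
  for ((i = 0; i < 100; i++)); do
      # rpc_get_methods only succeeds once the app is up and serving RPCs.
      scripts/rpc.py -t 1 -s "$sock" rpc_get_methods &>/dev/null && break
      sleep 0.5
  done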
00:05:05.867 [2024-07-15 22:21:29.736723] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4022475 ] 00:05:05.867 EAL: No free 2048 kB hugepages reported on node 1 00:05:05.867 [2024-07-15 22:21:29.793797] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:06.125 [2024-07-15 22:21:29.868607] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:06.125 [2024-07-15 22:21:29.868610] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:06.692 22:21:30 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:06.692 22:21:30 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:05:06.692 22:21:30 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:06.951 Malloc0 00:05:06.951 22:21:30 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:06.951 Malloc1 00:05:07.210 22:21:30 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:07.210 22:21:30 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:07.210 22:21:30 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:07.210 22:21:30 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:07.210 22:21:30 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:07.210 22:21:30 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:07.210 22:21:30 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:07.210 22:21:30 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:07.210 22:21:30 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:07.210 22:21:30 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:07.210 22:21:30 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:07.210 22:21:30 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:07.210 22:21:30 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:07.210 22:21:30 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:07.210 22:21:30 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:07.210 22:21:30 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:07.210 /dev/nbd0 00:05:07.210 22:21:31 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:07.210 22:21:31 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:07.211 22:21:31 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:05:07.211 22:21:31 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:07.211 22:21:31 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:07.211 22:21:31 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:07.211 22:21:31 event.app_repeat 
-- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:05:07.211 22:21:31 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:07.211 22:21:31 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:07.211 22:21:31 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:07.211 22:21:31 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:07.211 1+0 records in 00:05:07.211 1+0 records out 00:05:07.211 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000181414 s, 22.6 MB/s 00:05:07.211 22:21:31 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:07.211 22:21:31 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:07.211 22:21:31 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:07.211 22:21:31 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:07.211 22:21:31 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:07.211 22:21:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:07.211 22:21:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:07.211 22:21:31 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:07.469 /dev/nbd1 00:05:07.469 22:21:31 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:07.469 22:21:31 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:07.469 22:21:31 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:05:07.469 22:21:31 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:07.469 22:21:31 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:07.469 22:21:31 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:07.469 22:21:31 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:05:07.469 22:21:31 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:07.469 22:21:31 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:07.469 22:21:31 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:07.469 22:21:31 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:07.469 1+0 records in 00:05:07.469 1+0 records out 00:05:07.469 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000182859 s, 22.4 MB/s 00:05:07.469 22:21:31 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:07.469 22:21:31 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:07.469 22:21:31 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:07.469 22:21:31 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:07.469 22:21:31 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:07.469 22:21:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:07.469 22:21:31 event.app_repeat -- 
bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:07.469 22:21:31 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:07.469 22:21:31 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:07.469 22:21:31 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:07.727 22:21:31 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:07.727 { 00:05:07.727 "nbd_device": "/dev/nbd0", 00:05:07.727 "bdev_name": "Malloc0" 00:05:07.727 }, 00:05:07.727 { 00:05:07.727 "nbd_device": "/dev/nbd1", 00:05:07.727 "bdev_name": "Malloc1" 00:05:07.727 } 00:05:07.727 ]' 00:05:07.727 22:21:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:07.727 { 00:05:07.727 "nbd_device": "/dev/nbd0", 00:05:07.727 "bdev_name": "Malloc0" 00:05:07.727 }, 00:05:07.727 { 00:05:07.727 "nbd_device": "/dev/nbd1", 00:05:07.727 "bdev_name": "Malloc1" 00:05:07.727 } 00:05:07.727 ]' 00:05:07.727 22:21:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:07.727 22:21:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:07.727 /dev/nbd1' 00:05:07.727 22:21:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:07.727 22:21:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:07.727 /dev/nbd1' 00:05:07.727 22:21:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:07.728 22:21:31 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:07.728 22:21:31 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:07.728 22:21:31 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:07.728 22:21:31 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:07.728 22:21:31 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:07.728 22:21:31 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:07.728 22:21:31 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:07.728 22:21:31 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:07.728 22:21:31 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:07.728 22:21:31 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:07.728 256+0 records in 00:05:07.728 256+0 records out 00:05:07.728 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0043366 s, 242 MB/s 00:05:07.728 22:21:31 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:07.728 22:21:31 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:07.728 256+0 records in 00:05:07.728 256+0 records out 00:05:07.728 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0140047 s, 74.9 MB/s 00:05:07.728 22:21:31 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:07.728 22:21:31 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:07.728 256+0 records in 00:05:07.728 256+0 records out 00:05:07.728 1048576 bytes (1.0 MB, 1.0 MiB) 
copied, 0.0146352 s, 71.6 MB/s 00:05:07.728 22:21:31 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:07.728 22:21:31 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:07.728 22:21:31 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:07.728 22:21:31 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:07.728 22:21:31 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:07.728 22:21:31 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:07.728 22:21:31 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:07.728 22:21:31 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:07.728 22:21:31 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:07.728 22:21:31 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:07.728 22:21:31 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:07.728 22:21:31 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:07.728 22:21:31 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:07.728 22:21:31 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:07.728 22:21:31 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:07.728 22:21:31 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:07.728 22:21:31 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:07.728 22:21:31 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:07.728 22:21:31 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:07.987 22:21:31 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:07.987 22:21:31 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:07.987 22:21:31 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:07.987 22:21:31 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:07.987 22:21:31 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:07.987 22:21:31 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:07.987 22:21:31 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:07.987 22:21:31 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:07.987 22:21:31 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:07.987 22:21:31 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:08.247 22:21:31 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:08.247 22:21:32 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:08.247 22:21:32 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:08.247 22:21:32 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:08.247 22:21:32 
event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:08.247 22:21:32 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:08.247 22:21:32 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:08.247 22:21:32 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:08.247 22:21:32 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:08.247 22:21:32 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:08.247 22:21:32 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:08.247 22:21:32 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:08.247 22:21:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:08.247 22:21:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:08.506 22:21:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:08.506 22:21:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:08.506 22:21:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:08.506 22:21:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:08.506 22:21:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:08.506 22:21:32 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:08.506 22:21:32 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:08.506 22:21:32 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:08.506 22:21:32 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:08.506 22:21:32 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:08.506 22:21:32 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:08.765 [2024-07-15 22:21:32.620058] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:08.765 [2024-07-15 22:21:32.686592] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:08.765 [2024-07-15 22:21:32.686595] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:08.765 [2024-07-15 22:21:32.727379] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:08.765 [2024-07-15 22:21:32.727418] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:12.076 22:21:35 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:12.076 22:21:35 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:12.076 spdk_app_start Round 1 00:05:12.076 22:21:35 event.app_repeat -- event/event.sh@25 -- # waitforlisten 4022475 /var/tmp/spdk-nbd.sock 00:05:12.076 22:21:35 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 4022475 ']' 00:05:12.076 22:21:35 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:12.076 22:21:35 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:12.076 22:21:35 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:12.076 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
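Each nbd_start_disk above is paired with waitfornbd, and the trace shows what that amounts to: poll /proc/partitions until the device node appears, then retry a single O_DIRECT read until data actually comes back. Condensed from the @866-@887 lines (the retry delays and scratch path here are placeholders):

  waitfornbd() {
      local nbd_name=$1 i size
      for ((i = 1; i <= 20; i++)); do
          grep -q -w "$nbd_name" /proc/partitions && break
          sleep 0.1
      done
      for ((i = 1; i <= 20; i++)); do
          # One direct read proves the kernel<->SPDK nbd path really moves data.
          dd if=/dev/$nbd_name of=/tmp/nbdtest bs=4096 count=1 iflag=direct
          size=$(stat -c %s /tmp/nbdtest)
          rm -f /tmp/nbdtest
          [ "$size" != 0 ] && return 0
          sleep 0.1
      done
      return 1
  }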
00:05:12.076 22:21:35 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:12.076 22:21:35 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:12.076 22:21:35 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:12.076 22:21:35 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:05:12.076 22:21:35 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:12.076 Malloc0 00:05:12.076 22:21:35 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:12.076 Malloc1 00:05:12.076 22:21:35 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:12.076 22:21:35 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:12.076 22:21:35 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:12.076 22:21:35 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:12.076 22:21:35 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:12.076 22:21:35 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:12.076 22:21:35 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:12.076 22:21:35 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:12.076 22:21:35 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:12.076 22:21:35 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:12.076 22:21:35 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:12.076 22:21:35 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:12.076 22:21:35 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:12.076 22:21:35 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:12.076 22:21:35 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:12.076 22:21:35 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:12.335 /dev/nbd0 00:05:12.335 22:21:36 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:12.335 22:21:36 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:12.335 22:21:36 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:05:12.335 22:21:36 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:12.336 22:21:36 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:12.336 22:21:36 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:12.336 22:21:36 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:05:12.336 22:21:36 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:12.336 22:21:36 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:12.336 22:21:36 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:12.336 22:21:36 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:05:12.336 1+0 records in 00:05:12.336 1+0 records out 00:05:12.336 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000176027 s, 23.3 MB/s 00:05:12.336 22:21:36 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:12.336 22:21:36 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:12.336 22:21:36 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:12.336 22:21:36 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:12.336 22:21:36 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:12.336 22:21:36 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:12.336 22:21:36 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:12.336 22:21:36 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:12.612 /dev/nbd1 00:05:12.612 22:21:36 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:12.612 22:21:36 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:12.612 22:21:36 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:05:12.612 22:21:36 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:12.612 22:21:36 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:12.612 22:21:36 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:12.612 22:21:36 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:05:12.612 22:21:36 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:12.612 22:21:36 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:12.612 22:21:36 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:12.612 22:21:36 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:12.612 1+0 records in 00:05:12.612 1+0 records out 00:05:12.612 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000188377 s, 21.7 MB/s 00:05:12.612 22:21:36 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:12.612 22:21:36 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:12.612 22:21:36 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:12.612 22:21:36 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:12.612 22:21:36 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:12.612 22:21:36 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:12.612 22:21:36 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:12.612 22:21:36 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:12.612 22:21:36 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:12.612 22:21:36 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:12.612 22:21:36 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:05:12.612 { 00:05:12.612 "nbd_device": "/dev/nbd0", 00:05:12.612 "bdev_name": "Malloc0" 00:05:12.612 }, 00:05:12.612 { 00:05:12.612 "nbd_device": "/dev/nbd1", 00:05:12.612 "bdev_name": "Malloc1" 00:05:12.612 } 00:05:12.612 ]' 00:05:12.612 22:21:36 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:12.612 { 00:05:12.612 "nbd_device": "/dev/nbd0", 00:05:12.612 "bdev_name": "Malloc0" 00:05:12.612 }, 00:05:12.612 { 00:05:12.612 "nbd_device": "/dev/nbd1", 00:05:12.612 "bdev_name": "Malloc1" 00:05:12.612 } 00:05:12.612 ]' 00:05:12.612 22:21:36 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:12.871 22:21:36 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:12.871 /dev/nbd1' 00:05:12.871 22:21:36 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:12.871 /dev/nbd1' 00:05:12.871 22:21:36 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:12.871 22:21:36 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:12.871 22:21:36 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:12.871 22:21:36 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:12.871 22:21:36 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:12.871 22:21:36 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:12.871 22:21:36 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:12.871 22:21:36 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:12.871 22:21:36 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:12.871 22:21:36 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:12.871 22:21:36 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:12.871 22:21:36 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:12.871 256+0 records in 00:05:12.871 256+0 records out 00:05:12.871 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00969659 s, 108 MB/s 00:05:12.871 22:21:36 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:12.871 22:21:36 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:12.871 256+0 records in 00:05:12.871 256+0 records out 00:05:12.871 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0134063 s, 78.2 MB/s 00:05:12.871 22:21:36 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:12.871 22:21:36 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:12.871 256+0 records in 00:05:12.871 256+0 records out 00:05:12.871 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0145989 s, 71.8 MB/s 00:05:12.871 22:21:36 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:12.871 22:21:36 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:12.871 22:21:36 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:12.871 22:21:36 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:12.871 22:21:36 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:12.871 22:21:36 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:12.871 22:21:36 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:12.871 22:21:36 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:12.871 22:21:36 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:12.871 22:21:36 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:12.871 22:21:36 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:12.871 22:21:36 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:12.871 22:21:36 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:12.871 22:21:36 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:12.871 22:21:36 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:12.871 22:21:36 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:12.871 22:21:36 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:12.871 22:21:36 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:12.871 22:21:36 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:13.129 22:21:36 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:13.129 22:21:36 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:13.129 22:21:36 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:13.129 22:21:36 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:13.129 22:21:36 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:13.129 22:21:36 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:13.129 22:21:36 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:13.129 22:21:36 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:13.129 22:21:36 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:13.129 22:21:36 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:13.129 22:21:37 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:13.129 22:21:37 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:13.129 22:21:37 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:13.129 22:21:37 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:13.129 22:21:37 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:13.129 22:21:37 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:13.129 22:21:37 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:13.129 22:21:37 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:13.129 22:21:37 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:13.129 22:21:37 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:05:13.129 22:21:37 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:13.388 22:21:37 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:13.388 22:21:37 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:13.388 22:21:37 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:13.388 22:21:37 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:13.388 22:21:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:13.388 22:21:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:13.388 22:21:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:13.388 22:21:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:13.388 22:21:37 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:13.388 22:21:37 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:13.388 22:21:37 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:13.388 22:21:37 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:13.388 22:21:37 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:13.646 22:21:37 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:13.905 [2024-07-15 22:21:37.632270] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:13.905 [2024-07-15 22:21:37.700398] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:13.905 [2024-07-15 22:21:37.700400] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:13.905 [2024-07-15 22:21:37.742122] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:13.905 [2024-07-15 22:21:37.742164] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:16.488 22:21:40 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:16.488 22:21:40 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:16.488 spdk_app_start Round 2 00:05:16.488 22:21:40 event.app_repeat -- event/event.sh@25 -- # waitforlisten 4022475 /var/tmp/spdk-nbd.sock 00:05:16.488 22:21:40 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 4022475 ']' 00:05:16.488 22:21:40 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:16.488 22:21:40 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:16.488 22:21:40 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:16.488 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
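Round 1 repeats the write/verify cycle from Round 0, and the dd/cmp lines above are the whole trick: seed a reference file with random data, push it through each nbd device with the page cache bypassed, then compare what the devices hold against the reference. In outline (device names and scratch path as in the log):

  # 1 MiB of random reference data (256 x 4 KiB blocks).
  dd if=/dev/urandom of=/tmp/nbdrandtest bs=4096 count=256
  for nbd in /dev/nbd0 /dev/nbd1; do
      # Write the reference through each exported device, bypassing the cache.
      dd if=/tmp/nbdrandtest of="$nbd" bs=4096 count=256 oflag=direct
  done
  for nbd in /dev/nbd0 /dev/nbd1; do
      # Byte-wise compare the first 1M read back from the device.
      cmp -b -n 1M /tmp/nbdrandtest "$nbd"
  done
  rm /tmp/nbdrandtest

If either cmp fails the test aborts, since these scripts run under set -e.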
00:05:16.488 22:21:40 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:16.488 22:21:40 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:16.747 22:21:40 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:16.747 22:21:40 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:05:16.747 22:21:40 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:17.006 Malloc0 00:05:17.006 22:21:40 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:17.006 Malloc1 00:05:17.265 22:21:40 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:17.265 22:21:40 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:17.265 22:21:40 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:17.266 22:21:40 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:17.266 22:21:40 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:17.266 22:21:40 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:17.266 22:21:40 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:17.266 22:21:40 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:17.266 22:21:40 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:17.266 22:21:40 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:17.266 22:21:40 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:17.266 22:21:40 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:17.266 22:21:40 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:17.266 22:21:40 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:17.266 22:21:40 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:17.266 22:21:40 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:17.266 /dev/nbd0 00:05:17.266 22:21:41 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:17.266 22:21:41 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:17.266 22:21:41 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:05:17.266 22:21:41 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:17.266 22:21:41 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:17.266 22:21:41 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:17.266 22:21:41 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:05:17.266 22:21:41 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:17.266 22:21:41 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:17.266 22:21:41 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:17.266 22:21:41 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:05:17.266 1+0 records in 00:05:17.266 1+0 records out 00:05:17.266 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000176232 s, 23.2 MB/s 00:05:17.266 22:21:41 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:17.266 22:21:41 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:17.266 22:21:41 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:17.266 22:21:41 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:17.266 22:21:41 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:17.266 22:21:41 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:17.266 22:21:41 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:17.266 22:21:41 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:17.525 /dev/nbd1 00:05:17.525 22:21:41 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:17.525 22:21:41 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:17.525 22:21:41 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:05:17.525 22:21:41 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:17.525 22:21:41 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:17.525 22:21:41 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:17.525 22:21:41 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:05:17.525 22:21:41 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:17.525 22:21:41 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:17.525 22:21:41 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:17.525 22:21:41 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:17.525 1+0 records in 00:05:17.525 1+0 records out 00:05:17.525 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000197986 s, 20.7 MB/s 00:05:17.525 22:21:41 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:17.525 22:21:41 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:17.525 22:21:41 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:17.525 22:21:41 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:17.525 22:21:41 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:17.525 22:21:41 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:17.525 22:21:41 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:17.525 22:21:41 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:17.525 22:21:41 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:17.525 22:21:41 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:17.784 22:21:41 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:05:17.784 { 00:05:17.784 "nbd_device": "/dev/nbd0", 00:05:17.784 "bdev_name": "Malloc0" 00:05:17.784 }, 00:05:17.784 { 00:05:17.784 "nbd_device": "/dev/nbd1", 00:05:17.784 "bdev_name": "Malloc1" 00:05:17.784 } 00:05:17.784 ]' 00:05:17.784 22:21:41 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:17.784 { 00:05:17.784 "nbd_device": "/dev/nbd0", 00:05:17.784 "bdev_name": "Malloc0" 00:05:17.784 }, 00:05:17.784 { 00:05:17.784 "nbd_device": "/dev/nbd1", 00:05:17.784 "bdev_name": "Malloc1" 00:05:17.784 } 00:05:17.784 ]' 00:05:17.784 22:21:41 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:17.784 22:21:41 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:17.784 /dev/nbd1' 00:05:17.784 22:21:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:17.784 22:21:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:17.784 /dev/nbd1' 00:05:17.784 22:21:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:17.784 22:21:41 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:17.784 22:21:41 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:17.784 22:21:41 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:17.784 22:21:41 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:17.784 22:21:41 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:17.784 22:21:41 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:17.784 22:21:41 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:17.784 22:21:41 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:17.784 22:21:41 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:17.784 22:21:41 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:17.784 256+0 records in 00:05:17.784 256+0 records out 00:05:17.784 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0104047 s, 101 MB/s 00:05:17.784 22:21:41 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:17.784 22:21:41 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:17.784 256+0 records in 00:05:17.784 256+0 records out 00:05:17.784 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0134826 s, 77.8 MB/s 00:05:17.784 22:21:41 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:17.784 22:21:41 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:17.784 256+0 records in 00:05:17.784 256+0 records out 00:05:17.784 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0148198 s, 70.8 MB/s 00:05:17.784 22:21:41 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:17.784 22:21:41 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:17.784 22:21:41 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:17.784 22:21:41 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:17.784 22:21:41 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:17.784 22:21:41 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:17.784 22:21:41 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:17.784 22:21:41 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:17.784 22:21:41 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:17.784 22:21:41 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:17.784 22:21:41 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:17.784 22:21:41 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:17.784 22:21:41 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:17.784 22:21:41 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:17.784 22:21:41 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:17.784 22:21:41 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:17.784 22:21:41 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:17.784 22:21:41 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:17.784 22:21:41 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:18.043 22:21:41 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:18.043 22:21:41 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:18.043 22:21:41 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:18.043 22:21:41 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:18.043 22:21:41 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:18.043 22:21:41 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:18.043 22:21:41 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:18.043 22:21:41 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:18.043 22:21:41 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:18.043 22:21:41 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:18.302 22:21:42 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:18.302 22:21:42 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:18.302 22:21:42 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:18.302 22:21:42 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:18.302 22:21:42 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:18.302 22:21:42 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:18.302 22:21:42 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:18.302 22:21:42 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:18.302 22:21:42 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:18.302 22:21:42 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:05:18.302 22:21:42 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:18.302 22:21:42 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:18.302 22:21:42 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:18.302 22:21:42 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:18.561 22:21:42 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:18.561 22:21:42 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:18.561 22:21:42 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:18.561 22:21:42 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:18.561 22:21:42 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:18.561 22:21:42 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:18.561 22:21:42 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:18.561 22:21:42 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:18.561 22:21:42 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:18.561 22:21:42 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:18.561 22:21:42 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:18.820 [2024-07-15 22:21:42.664183] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:18.820 [2024-07-15 22:21:42.734470] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:18.820 [2024-07-15 22:21:42.734473] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:18.820 [2024-07-15 22:21:42.775787] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:18.820 [2024-07-15 22:21:42.775826] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:22.123 22:21:45 event.app_repeat -- event/event.sh@38 -- # waitforlisten 4022475 /var/tmp/spdk-nbd.sock 00:05:22.123 22:21:45 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 4022475 ']' 00:05:22.123 22:21:45 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:22.123 22:21:45 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:22.123 22:21:45 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:22.123 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
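After the disks are stopped, nbd_get_count asserts that nothing is left exported: nbd_get_disks returns an empty JSON array, jq yields no device names, and grep -c counts zero. The bare `true` in the trace is there because grep -c exits non-zero when it finds no matches. As a sketch against the same socket:

  json=$(scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks)
  # grep -c exits 1 on zero matches, hence the || true.
  count=$(echo "$json" | jq -r '.[] | .nbd_device' | grep -c /dev/nbd || true)
  [ "$count" -eq 0 ]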
00:05:22.123 22:21:45 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:22.123 22:21:45 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:22.123 22:21:45 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:22.123 22:21:45 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:05:22.123 22:21:45 event.app_repeat -- event/event.sh@39 -- # killprocess 4022475 00:05:22.123 22:21:45 event.app_repeat -- common/autotest_common.sh@948 -- # '[' -z 4022475 ']' 00:05:22.123 22:21:45 event.app_repeat -- common/autotest_common.sh@952 -- # kill -0 4022475 00:05:22.123 22:21:45 event.app_repeat -- common/autotest_common.sh@953 -- # uname 00:05:22.123 22:21:45 event.app_repeat -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:22.123 22:21:45 event.app_repeat -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4022475 00:05:22.123 22:21:45 event.app_repeat -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:22.123 22:21:45 event.app_repeat -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:22.123 22:21:45 event.app_repeat -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4022475' 00:05:22.123 killing process with pid 4022475 00:05:22.123 22:21:45 event.app_repeat -- common/autotest_common.sh@967 -- # kill 4022475 00:05:22.123 22:21:45 event.app_repeat -- common/autotest_common.sh@972 -- # wait 4022475 00:05:22.123 spdk_app_start is called in Round 0. 00:05:22.123 Shutdown signal received, stop current app iteration 00:05:22.123 Starting SPDK v24.09-pre git sha1 f8598a71f / DPDK 24.03.0 reinitialization... 00:05:22.123 spdk_app_start is called in Round 1. 00:05:22.123 Shutdown signal received, stop current app iteration 00:05:22.123 Starting SPDK v24.09-pre git sha1 f8598a71f / DPDK 24.03.0 reinitialization... 00:05:22.123 spdk_app_start is called in Round 2. 00:05:22.123 Shutdown signal received, stop current app iteration 00:05:22.123 Starting SPDK v24.09-pre git sha1 f8598a71f / DPDK 24.03.0 reinitialization... 00:05:22.123 spdk_app_start is called in Round 3. 
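killprocess, traced above for pid 4022475, is careful about what it signals: it first checks the pid is alive with kill -0, on Linux reads the command name with ps so it never signals a sudo wrapper directly, and only then kills and reaps the process. A condensed sketch of the branch the log takes (process name reactor_0; the real helper in autotest_common.sh has an extra path for sudo-owned pids):

  killprocess() {
      local pid=$1
      [ -n "$pid" ] || return 1
      kill -0 "$pid"                          # fail early if already gone
      if [ "$(uname)" = Linux ]; then
          # reactor_0 here; a sudo wrapper would need different handling
          [ "$(ps --no-headers -o comm= "$pid")" != sudo ]
      fi
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid"                             # reap and propagate exit status
  }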
00:05:22.123 Shutdown signal received, stop current app iteration
00:05:22.123 22:21:45 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT
00:05:22.123 22:21:45 event.app_repeat -- event/event.sh@42 -- # return 0
00:05:22.123
00:05:22.123 real 0m16.158s
00:05:22.123 user 0m34.911s
00:05:22.123 sys 0m2.349s
00:05:22.123 22:21:45 event.app_repeat -- common/autotest_common.sh@1124 -- # xtrace_disable
00:05:22.123 22:21:45 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:05:22.123 ************************************
00:05:22.123 END TEST app_repeat
00:05:22.123 ************************************
00:05:22.123 22:21:45 event -- common/autotest_common.sh@1142 -- # return 0
00:05:22.123 22:21:45 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 ))
00:05:22.123 22:21:45 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh
00:05:22.123 22:21:45 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:05:22.123 22:21:45 event -- common/autotest_common.sh@1105 -- # xtrace_disable
00:05:22.123 22:21:45 event -- common/autotest_common.sh@10 -- # set +x
00:05:22.123 ************************************
00:05:22.123 START TEST cpu_locks
00:05:22.123 ************************************
00:05:22.123 22:21:45 event.cpu_locks -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh
00:05:22.123 * Looking for test storage...
00:05:22.123 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event
00:05:22.123 22:21:46 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock
00:05:22.123 22:21:46 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock
00:05:22.123 22:21:46 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT
00:05:22.123 22:21:46 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks
00:05:22.123 22:21:46 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:05:22.123 22:21:46 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable
00:05:22.123 22:21:46 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:05:22.123 ************************************
00:05:22.123 START TEST default_locks
00:05:22.123 ************************************
00:05:22.123 22:21:46 event.cpu_locks.default_locks -- common/autotest_common.sh@1123 -- # default_locks
00:05:22.123 22:21:46 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=4025361
00:05:22.123 22:21:46 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 4025361
00:05:22.123 22:21:46 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1
00:05:22.123 22:21:46 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 4025361 ']'
00:05:22.123 22:21:46 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:22.123 22:21:46 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100
00:05:22.123 22:21:46 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
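Every test below launches spdk_tgt in the background and then calls waitforlisten, whose entry is traced here (rpc_addr, max_retries=100, the banner) before xtrace is silenced, so the wait loop itself never appears in this log. A plausible sketch of what it does, under the assumption that rpc.py's inexpensive rpc_get_methods call serves as the readiness probe (any RPC that fails until the listener is up would do):

  # Sketch: block until an SPDK target answers on its RPC socket.
  waitforlisten_sketch() {
      local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100 i
      echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
      for ((i = 0; i < max_retries; i++)); do
          kill -0 "$pid" || return 1                       # target died during startup
          if scripts/rpc.py -s "$rpc_addr" rpc_get_methods &> /dev/null; then
              return 0                                     # socket is up and answering
          fi
          sleep 0.1
      done
      return 1
  }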
00:05:22.123 22:21:46 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable
00:05:22.123 22:21:46 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x
00:05:22.123 [2024-07-15 22:21:46.084097] Starting SPDK v24.09-pre git sha1 f8598a71f / DPDK 24.03.0 initialization...
[2024-07-15 22:21:46.084146] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4025361 ]
EAL: No free 2048 kB hugepages reported on node 1
[2024-07-15 22:21:46.138486] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
[2024-07-15 22:21:46.218627] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:05:22.948 22:21:46 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:05:22.948 22:21:46 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 0
00:05:22.948 22:21:46 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 4025361
00:05:22.948 22:21:46 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 4025361
00:05:22.948 22:21:46 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:05:23.206 lslocks: write error
00:05:23.206 22:21:47 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 4025361
00:05:23.206 22:21:47 event.cpu_locks.default_locks -- common/autotest_common.sh@948 -- # '[' -z 4025361 ']'
00:05:23.206 22:21:47 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # kill -0 4025361
00:05:23.206 22:21:47 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # uname
00:05:23.206 22:21:47 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:05:23.206 22:21:47 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4025361
00:05:23.206 22:21:47 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:05:23.206 22:21:47 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:05:23.206 22:21:47 event.cpu_locks.default_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4025361'
killing process with pid 4025361
00:05:23.206 22:21:47 event.cpu_locks.default_locks -- common/autotest_common.sh@967 -- # kill 4025361
00:05:23.206 22:21:47 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # wait 4025361
00:05:23.464 22:21:47 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 4025361
00:05:23.464 22:21:47 event.cpu_locks.default_locks -- common/autotest_common.sh@648 -- # local es=0
00:05:23.464 22:21:47 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 4025361
00:05:23.464 22:21:47 event.cpu_locks.default_locks -- common/autotest_common.sh@636 -- # local arg=waitforlisten
00:05:23.464 22:21:47 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:05:23.464 22:21:47 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # type -t waitforlisten
00:05:23.464 22:21:47 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:05:23.464 22:21:47 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # waitforlisten 4025361
00:05:23.464 22:21:47 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 4025361 ']'
00:05:23.464 22:21:47 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:23.464 22:21:47 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100
00:05:23.464 22:21:47 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:23.464 22:21:47 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable
00:05:23.464 22:21:47 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x
00:05:23.464 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (4025361) - No such process
00:05:23.464 ERROR: process (pid: 4025361) is no longer running
00:05:23.464 22:21:47 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:05:23.464 22:21:47 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 1
00:05:23.464 22:21:47 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # es=1
00:05:23.464 22:21:47 event.cpu_locks.default_locks -- common/autotest_common.sh@659 -- # (( es > 128 ))
00:05:23.464 22:21:47 event.cpu_locks.default_locks -- common/autotest_common.sh@670 -- # [[ -n '' ]]
00:05:23.464 22:21:47 event.cpu_locks.default_locks -- common/autotest_common.sh@675 -- # (( !es == 0 ))
00:05:23.464 22:21:47 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks
00:05:23.464 22:21:47 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=()
00:05:23.464 22:21:47 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files
00:05:23.464 22:21:47 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 ))
00:05:23.464
00:05:23.464 real 0m1.327s
00:05:23.464 user 0m1.398s
00:05:23.464 sys 0m0.399s
00:05:23.464 22:21:47 event.cpu_locks.default_locks -- common/autotest_common.sh@1124 -- # xtrace_disable
00:05:23.464 22:21:47 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x
00:05:23.464 ************************************
00:05:23.464 END TEST default_locks
00:05:23.464 ************************************
00:05:23.464 22:21:47 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0
00:05:23.464 22:21:47 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc
00:05:23.464 22:21:47 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:05:23.464 22:21:47 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable
00:05:23.464 22:21:47 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:05:23.464 ************************************
00:05:23.464 START TEST default_locks_via_rpc
00:05:23.464 ************************************
00:05:23.464 22:21:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1123 -- # default_locks_via_rpc
00:05:23.464 22:21:47 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=4025630
00:05:23.464 22:21:47 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 4025630
00:05:23.464 22:21:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 4025630 ']'
00:05:23.464 22:21:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:23.464 22:21:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100
00:05:23.464 22:21:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:23.464 22:21:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable
00:05:23.464 22:21:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:05:23.464 22:21:47 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1
00:05:23.735 [2024-07-15 22:21:47.468650] Starting SPDK v24.09-pre git sha1 f8598a71f / DPDK 24.03.0 initialization...
[2024-07-15 22:21:47.468689] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4025630 ]
EAL: No free 2048 kB hugepages reported on node 1
[2024-07-15 22:21:47.522872] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
[2024-07-15 22:21:47.602814] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:05:24.303 22:21:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:05:24.303 22:21:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@862 -- # return 0
00:05:24.303 22:21:48 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks
00:05:24.303 22:21:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable
00:05:24.303 22:21:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:05:24.303 22:21:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:05:24.303 22:21:48 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks
00:05:24.303 22:21:48 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=()
00:05:24.303 22:21:48 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files
00:05:24.303 22:21:48 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 ))
00:05:24.303 22:21:48 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks
00:05:24.303 22:21:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable
00:05:24.303 22:21:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:05:24.561 22:21:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:05:24.561 22:21:48 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 4025630
00:05:24.561 22:21:48 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 4025630
00:05:24.561 22:21:48 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
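Both default_locks variants assert the core claim the same way: spdk_tgt -m 0x1 takes a lock on the per-core file for core 0 (the /var/tmp/spdk_cpu_lock_* paths expanded later in this log), and locks_exist asks lslocks whether the target pid holds a lock whose path mentions spdk_cpu_lock. The stray 'lslocks: write error' lines are expected noise, not failures: grep -q exits at the first match and closes the pipe while lslocks is still writing. The check, reduced to the two commands the trace expands:

  # Does process $1 hold one of SPDK's per-core lock files?
  locks_exist_sketch() {
      # grep -q succeeds (and stops reading) on the first matching lock;
      # lslocks may then report "write error" on the broken pipe - harmless.
      lslocks -p "$1" | grep -q spdk_cpu_lock
  }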
00:05:24.561 22:21:48 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 4025630
00:05:24.561 22:21:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@948 -- # '[' -z 4025630 ']'
00:05:24.561 22:21:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # kill -0 4025630
00:05:24.561 22:21:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # uname
00:05:24.561 22:21:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:05:24.561 22:21:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4025630
00:05:24.561 22:21:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:05:24.561 22:21:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:05:24.561 22:21:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4025630'
killing process with pid 4025630
00:05:24.561 22:21:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@967 -- # kill 4025630
00:05:24.561 22:21:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # wait 4025630
00:05:24.818
00:05:24.818 real 0m1.335s
00:05:24.818 user 0m1.392s
00:05:24.818 sys 0m0.400s
00:05:24.818 22:21:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable
00:05:24.818 22:21:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:05:24.819 ************************************
00:05:24.819 END TEST default_locks_via_rpc
00:05:24.819 ************************************
00:05:24.819 22:21:48 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0
00:05:24.819 22:21:48 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask
00:05:24.819 22:21:48 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:05:24.819 22:21:48 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable
00:05:24.819 22:21:48 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:05:25.076 ************************************
00:05:25.076 START TEST non_locking_app_on_locked_coremask
00:05:25.076 ************************************
00:05:25.076 22:21:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # non_locking_app_on_locked_coremask
00:05:25.076 22:21:48 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=4025900
00:05:25.076 22:21:48 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1
00:05:25.076 22:21:48 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 4025900 /var/tmp/spdk.sock
00:05:25.076 22:21:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 4025900 ']'
00:05:25.076 22:21:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:25.076 22:21:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100
00:05:25.076 22:21:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:25.076 22:21:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable
00:05:25.076 22:21:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:25.076 [2024-07-15 22:21:48.861656] Starting SPDK v24.09-pre git sha1 f8598a71f / DPDK 24.03.0 initialization...
[2024-07-15 22:21:48.861693] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4025900 ]
EAL: No free 2048 kB hugepages reported on node 1
[2024-07-15 22:21:48.915143] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
[2024-07-15 22:21:48.996295] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:05:26.012 22:21:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:05:26.012 22:21:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0
00:05:26.012 22:21:49 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=4026001
00:05:26.012 22:21:49 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 4026001 /var/tmp/spdk2.sock
00:05:26.012 22:21:49 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock
00:05:26.012 22:21:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 4026001 ']'
00:05:26.012 22:21:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock
00:05:26.012 22:21:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100
00:05:26.012 22:21:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:05:26.012 22:21:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable
00:05:26.012 22:21:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:26.012 [2024-07-15 22:21:49.714638] Starting SPDK v24.09-pre git sha1 f8598a71f / DPDK 24.03.0 initialization...
[2024-07-15 22:21:49.714687] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4026001 ]
EAL: No free 2048 kB hugepages reported on node 1
[2024-07-15 22:21:49.788244] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated.
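This stretch shows the two ways the suite relaxes core claiming. default_locks_via_rpc flips locking on a live target over RPC (the framework_disable_cpumask_locks / framework_enable_cpumask_locks calls above), while non_locking_app_on_locked_coremask starts a second target with --disable-cpumask-locks so it can share core 0 with the first instance, pointing it at its own RPC socket via -r; hence the 'CPU core locks deactivated' notice that ends this fragment. The same commands, extracted from the trace and run from the SPDK tree:

  # Toggle cpumask locking on an already-running target (default_locks_via_rpc):
  scripts/rpc.py -s /var/tmp/spdk.sock framework_disable_cpumask_locks
  scripts/rpc.py -s /var/tmp/spdk.sock framework_enable_cpumask_locks

  # Start a second instance on an already-claimed core without taking locks
  # (non_locking_app_on_locked_coremask); -r gives it its own RPC socket:
  build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock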
00:05:26.012 [2024-07-15 22:21:49.788272] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
[2024-07-15 22:21:49.940515] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:05:26.578 22:21:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:05:26.578 22:21:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0
00:05:26.578 22:21:50 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 4025900
00:05:26.578 22:21:50 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 4025900
00:05:26.578 22:21:50 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:05:27.144 lslocks: write error
00:05:27.144 22:21:51 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 4025900
00:05:27.144 22:21:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 4025900 ']'
00:05:27.144 22:21:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 4025900
00:05:27.144 22:21:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname
00:05:27.144 22:21:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:05:27.144 22:21:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4025900
00:05:27.144 22:21:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:05:27.144 22:21:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:05:27.144 22:21:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4025900'
killing process with pid 4025900
00:05:27.144 22:21:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 4025900
00:05:27.144 22:21:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 4025900
00:05:28.079 22:21:51 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 4026001
00:05:28.080 22:21:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 4026001 ']'
00:05:28.080 22:21:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 4026001
00:05:28.080 22:21:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname
00:05:28.080 22:21:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:05:28.080 22:21:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4026001
00:05:28.080 22:21:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:05:28.080 22:21:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:05:28.080 22:21:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4026001'
killing process with pid 4026001
00:05:28.080 22:21:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 4026001
00:05:28.080 22:21:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 4026001
00:05:28.338
00:05:28.338 real 0m3.259s
00:05:28.338 user 0m3.507s
00:05:28.338 sys 0m0.919s
00:05:28.338 22:21:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable
00:05:28.338 22:21:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:28.338 ************************************
00:05:28.338 END TEST non_locking_app_on_locked_coremask
00:05:28.338 ************************************
00:05:28.338 22:21:52 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0
00:05:28.338 22:21:52 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask
00:05:28.338 22:21:52 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:05:28.338 22:21:52 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable
00:05:28.338 22:21:52 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:05:28.338 ************************************
00:05:28.338 START TEST locking_app_on_unlocked_coremask
00:05:28.338 ************************************
00:05:28.338 22:21:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_unlocked_coremask
00:05:28.338 22:21:52 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=4026489
00:05:28.338 22:21:52 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 4026489 /var/tmp/spdk.sock
00:05:28.338 22:21:52 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks
00:05:28.338 22:21:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 4026489 ']'
00:05:28.338 22:21:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:28.338 22:21:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100
00:05:28.338 22:21:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:28.338 22:21:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable
00:05:28.338 22:21:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:28.338 [2024-07-15 22:21:52.197125] Starting SPDK v24.09-pre git sha1 f8598a71f / DPDK 24.03.0 initialization...
00:05:28.338 [2024-07-15 22:21:52.197169] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4026489 ]
EAL: No free 2048 kB hugepages reported on node 1
[2024-07-15 22:21:52.250150] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated.
[2024-07-15 22:21:52.250177] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:28.596 [2024-07-15 22:21:52.318857] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:05:29.164 22:21:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:05:29.164 22:21:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0
00:05:29.164 22:21:52 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=4026658
00:05:29.164 22:21:52 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 4026658 /var/tmp/spdk2.sock
00:05:29.164 22:21:52 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock
00:05:29.164 22:21:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 4026658 ']'
00:05:29.164 22:21:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock
00:05:29.164 22:21:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100
00:05:29.164 22:21:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:05:29.164 22:21:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable
00:05:29.164 22:21:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:29.164 [2024-07-15 22:21:53.043010] Starting SPDK v24.09-pre git sha1 f8598a71f / DPDK 24.03.0 initialization...
00:05:29.164 [2024-07-15 22:21:53.043061] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4026658 ]
EAL: No free 2048 kB hugepages reported on node 1
[2024-07-15 22:21:53.121077] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:29.423 [2024-07-15 22:21:53.272088] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:05:29.990 22:21:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:05:29.990 22:21:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0
00:05:29.990 22:21:53 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 4026658
00:05:29.990 22:21:53 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 4026658
00:05:29.990 22:21:53 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:05:30.557 lslocks: write error
00:05:30.557 22:21:54 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 4026489
00:05:30.557 22:21:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 4026489 ']'
00:05:30.557 22:21:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 4026489
00:05:30.557 22:21:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname
00:05:30.557 22:21:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:05:30.557 22:21:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4026489
00:05:30.557 22:21:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:05:30.557 22:21:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:05:30.557 22:21:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4026489'
killing process with pid 4026489
00:05:30.557 22:21:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 4026489
00:05:30.557 22:21:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 4026489
00:05:31.124 22:21:55 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 4026658
00:05:31.124 22:21:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 4026658 ']'
00:05:31.124 22:21:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 4026658
00:05:31.124 22:21:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname
00:05:31.124 22:21:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:05:31.124 22:21:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4026658
00:05:31.413 22:21:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:05:31.413 22:21:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:05:31.413 22:21:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4026658'
killing process with pid 4026658
00:05:31.413 22:21:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 4026658
00:05:31.413 22:21:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 4026658
00:05:31.714
00:05:31.714 real 0m3.280s
00:05:31.714 user 0m3.510s
00:05:31.714 sys 0m0.943s
00:05:31.714 22:21:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable
00:05:31.714 22:21:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:31.714 ************************************
00:05:31.714 END TEST locking_app_on_unlocked_coremask
00:05:31.714 ************************************
00:05:31.714 22:21:55 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0
00:05:31.714 22:21:55 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask
00:05:31.714 22:21:55 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:05:31.714 22:21:55 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable
00:05:31.714 22:21:55 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:05:31.714 ************************************
00:05:31.714 START TEST locking_app_on_locked_coremask
00:05:31.714 ************************************
00:05:31.714 22:21:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_locked_coremask
00:05:31.714 22:21:55 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=4027001
00:05:31.714 22:21:55 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 4027001 /var/tmp/spdk.sock
00:05:31.714 22:21:55 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1
00:05:31.714 22:21:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 4027001 ']'
00:05:31.714 22:21:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:31.714 22:21:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100
00:05:31.714 22:21:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:31.714 22:21:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable
00:05:31.714 22:21:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:31.714 [2024-07-15 22:21:55.545310] Starting SPDK v24.09-pre git sha1 f8598a71f / DPDK 24.03.0 initialization...
00:05:31.714 [2024-07-15 22:21:55.545358] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4027001 ]
EAL: No free 2048 kB hugepages reported on node 1
[2024-07-15 22:21:55.599331] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
[2024-07-15 22:21:55.674114] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:05:32.647 22:21:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:05:32.647 22:21:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0
00:05:32.647 22:21:56 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock
00:05:32.647 22:21:56 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=4027229
00:05:32.647 22:21:56 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 4027229 /var/tmp/spdk2.sock
00:05:32.647 22:21:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@648 -- # local es=0
00:05:32.647 22:21:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 4027229 /var/tmp/spdk2.sock
00:05:32.647 22:21:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten
00:05:32.647 22:21:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:05:32.647 22:21:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten
00:05:32.647 22:21:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:05:32.647 22:21:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # waitforlisten 4027229 /var/tmp/spdk2.sock
00:05:32.647 22:21:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 4027229 ']'
00:05:32.647 22:21:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock
00:05:32.647 22:21:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100
00:05:32.647 22:21:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:05:32.647 22:21:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable
00:05:32.647 22:21:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:32.647 [2024-07-15 22:21:56.390326] Starting SPDK v24.09-pre git sha1 f8598a71f / DPDK 24.03.0 initialization...
00:05:32.647 [2024-07-15 22:21:56.390375] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4027229 ]
EAL: No free 2048 kB hugepages reported on node 1
[2024-07-15 22:21:56.467103] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 4027001 has claimed it.
[2024-07-15 22:21:56.467140] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting.
00:05:33.211 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (4027229) - No such process
00:05:33.211 ERROR: process (pid: 4027229) is no longer running
00:05:33.211 22:21:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:05:33.211 22:21:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 1
00:05:33.211 22:21:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # es=1
00:05:33.211 22:21:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@659 -- # (( es > 128 ))
00:05:33.211 22:21:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]]
00:05:33.211 22:21:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 ))
00:05:33.211 22:21:57 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 4027001
00:05:33.211 22:21:57 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 4027001
00:05:33.211 22:21:57 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:05:33.469 lslocks: write error
00:05:33.469 22:21:57 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 4027001
00:05:33.469 22:21:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 4027001 ']'
00:05:33.469 22:21:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 4027001
00:05:33.469 22:21:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname
00:05:33.469 22:21:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:05:33.469 22:21:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4027001
00:05:33.469 22:21:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:05:33.469 22:21:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:05:33.469 22:21:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4027001'
killing process with pid 4027001
00:05:33.469 22:21:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 4027001
00:05:33.469 22:21:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 4027001
00:05:33.726
00:05:33.726 real 0m2.182s
00:05:33.726 user 0m2.416s
00:05:33.726 sys 0m0.581s
00:05:33.726 22:21:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable
00:05:33.726 22:21:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:33.726 ************************************
00:05:33.726 END TEST locking_app_on_locked_coremask
00:05:33.726 ************************************
00:05:33.984 22:21:57 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0
00:05:33.984 22:21:57 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask
00:05:33.984 22:21:57 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:05:33.984 22:21:57 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable
00:05:33.984 22:21:57 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:05:33.984 ************************************
00:05:33.984 START TEST locking_overlapped_coremask
00:05:33.984 ************************************
00:05:33.984 22:21:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask
00:05:33.984 22:21:57 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=4027492
00:05:33.984 22:21:57 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 4027492 /var/tmp/spdk.sock
00:05:33.984 22:21:57 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7
00:05:33.984 22:21:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 4027492 ']'
00:05:33.984 22:21:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:33.984 22:21:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100
00:05:33.984 22:21:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:33.984 22:21:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable
00:05:33.984 22:21:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:33.984 [2024-07-15 22:21:57.789895] Starting SPDK v24.09-pre git sha1 f8598a71f / DPDK 24.03.0 initialization...
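The locking_app_on_locked_coremask run that just ended is the negative case: with 4027001 already holding core 0, the second target (4027229) dies during startup with 'Cannot create lock on core 0, probably process 4027001 has claimed it', so waitforlisten fails and the NOT wrapper (cpu_locks.sh@120, expanded above) converts that expected failure into a pass. A condensed sketch of NOT's contract; note the real helper, per the trace, also special-cases signal deaths (the '(( es > 128 ))' step) before inverting:

  # Sketch of the NOT expect-failure combinator: run a command that is
  # supposed to fail, and succeed only if it actually does.
  NOT_sketch() {
      local es=0
      "$@" || es=$?      # es=1 above: waitforlisten aborts on the claim error
      (( es != 0 ))      # invert: the wrapped command failing means the test passes
  }

  # Usage mirroring the trace (second target on an already-claimed core):
  #   NOT_sketch waitforlisten 4027229 /var/tmp/spdk2.sock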
00:05:33.984 [2024-07-15 22:21:57.789935] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4027492 ]
EAL: No free 2048 kB hugepages reported on node 1
[2024-07-15 22:21:57.843430] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3
[2024-07-15 22:21:57.924615] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
[2024-07-15 22:21:57.924710] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
[2024-07-15 22:21:57.924711] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:05:34.919 22:21:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:05:34.919 22:21:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 0
00:05:34.919 22:21:58 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock
00:05:34.919 22:21:58 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=4027704
00:05:34.919 22:21:58 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 4027704 /var/tmp/spdk2.sock
00:05:34.919 22:21:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@648 -- # local es=0
00:05:34.919 22:21:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 4027704 /var/tmp/spdk2.sock
00:05:34.919 22:21:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten
00:05:34.919 22:21:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:05:34.919 22:21:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten
00:05:34.919 22:21:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:05:34.919 22:21:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # waitforlisten 4027704 /var/tmp/spdk2.sock
00:05:34.919 22:21:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 4027704 ']'
00:05:34.919 22:21:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock
00:05:34.919 22:21:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100
00:05:34.919 22:21:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:05:34.919 22:21:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable
00:05:34.919 22:21:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:34.919 [2024-07-15 22:21:58.643004] Starting SPDK v24.09-pre git sha1 f8598a71f / DPDK 24.03.0 initialization...
00:05:34.919 [2024-07-15 22:21:58.643052] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4027704 ]
EAL: No free 2048 kB hugepages reported on node 1
[2024-07-15 22:21:58.721044] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 4027492 has claimed it.
[2024-07-15 22:21:58.721080] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting.
00:05:35.487 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (4027704) - No such process
00:05:35.487 ERROR: process (pid: 4027704) is no longer running
00:05:35.487 22:21:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:05:35.487 22:21:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 1
00:05:35.487 22:21:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # es=1
00:05:35.487 22:21:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@659 -- # (( es > 128 ))
00:05:35.487 22:21:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]]
00:05:35.487 22:21:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 ))
00:05:35.487 22:21:59 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks
00:05:35.487 22:21:59 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*)
00:05:35.487 22:21:59 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
00:05:35.487 22:21:59 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]]
00:05:35.487 22:21:59 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 4027492
00:05:35.487 22:21:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@948 -- # '[' -z 4027492 ']'
00:05:35.487 22:21:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # kill -0 4027492
00:05:35.487 22:21:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # uname
00:05:35.487 22:21:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:05:35.487 22:21:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4027492
00:05:35.487 22:21:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:05:35.487 22:21:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:05:35.487 22:21:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4027492'
killing process with pid 4027492
00:05:35.487 22:21:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@967 -- # kill 4027492
00:05:35.487 22:21:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # wait 4027492
00:05:35.746
00:05:35.746 real 0m1.898s
00:05:35.746 user 0m5.370s
00:05:35.746 sys 0m0.397s
00:05:35.746 22:21:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable
00:05:35.746 22:21:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:35.746 ************************************
00:05:35.746 END TEST locking_overlapped_coremask
00:05:35.746 ************************************
00:05:35.746 22:21:59 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0
00:05:35.746 22:21:59 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc
00:05:35.746 22:21:59 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:05:35.746 22:21:59 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable
00:05:35.746 22:21:59 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:05:35.746 ************************************
00:05:35.746 START TEST locking_overlapped_coremask_via_rpc
00:05:35.746 ************************************
00:05:35.746 22:21:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask_via_rpc
00:05:35.746 22:21:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=4027782
00:05:35.746 22:21:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 4027782 /var/tmp/spdk.sock
00:05:35.746 22:21:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks
00:05:35.746 22:21:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 4027782 ']'
00:05:35.746 22:21:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:35.746 22:21:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100
00:05:35.746 22:21:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:35.746 22:21:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable
00:05:35.746 22:21:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:05:36.004 [2024-07-15 22:21:59.748762] Starting SPDK v24.09-pre git sha1 f8598a71f / DPDK 24.03.0 initialization...
00:05:36.004 [2024-07-15 22:21:59.748804] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4027782 ]
00:05:36.004 EAL: No free 2048 kB hugepages reported on node 1
00:05:36.004 [2024-07-15 22:21:59.802672] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated.
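After the overlap failure is confirmed (mask 0x1c collides with 0x7 on core 2), check_remaining_locks (cpu_locks.sh@139, fully expanded in the trace above) asserts that exactly the three lock files for the surviving 0x7 target still exist: it globs /var/tmp/spdk_cpu_lock_* and compares the result word-for-word against the brace expansion for cores 000-002. The same pattern, lifted from the expanded trace:

  # Assert that the only per-core lock files present are those for cores 0-2
  # (mask 0x7), exactly as the expanded trace compares them.
  check_remaining_locks_sketch() {
      locks=(/var/tmp/spdk_cpu_lock_*)
      locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
      [[ ${locks[*]} == "${locks_expected[*]}" ]]   # any extra or missing lock file fails the test
  }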
00:05:36.004 [2024-07-15 22:21:59.802695] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:36.004 [2024-07-15 22:21:59.884553] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:36.004 [2024-07-15 22:21:59.884648] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:36.004 [2024-07-15 22:21:59.884651] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:36.941 22:22:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:36.941 22:22:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:36.941 22:22:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=4027992 00:05:36.941 22:22:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 4027992 /var/tmp/spdk2.sock 00:05:36.941 22:22:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:05:36.941 22:22:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 4027992 ']' 00:05:36.941 22:22:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:36.941 22:22:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:36.941 22:22:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:36.941 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:36.941 22:22:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:36.941 22:22:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:36.941 [2024-07-15 22:22:00.601901] Starting SPDK v24.09-pre git sha1 f8598a71f / DPDK 24.03.0 initialization... 00:05:36.941 [2024-07-15 22:22:00.601950] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4027992 ] 00:05:36.941 EAL: No free 2048 kB hugepages reported on node 1 00:05:36.941 [2024-07-15 22:22:00.681335] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
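The overlap behind the claim_cpu_cores error that follows can be read straight off the two masks: the first target runs with -m 0x7 (cores 0-2, hence lock files 000..002) and the second with -m 0x1c (cores 2-4), so their bitwise AND leaves exactly core 2 contested:

    $ printf '0x%x\n' $((0x7 & 0x1c))    # intersection of the two core masks
    0x4                                  # only bit 2 set -> core 2 claimed twice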
00:05:36.941 [2024-07-15 22:22:00.681360] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:36.941 [2024-07-15 22:22:00.828328] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:36.941 [2024-07-15 22:22:00.832273] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:36.941 [2024-07-15 22:22:00.832274] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:05:37.507 22:22:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:37.507 22:22:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:37.507 22:22:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:05:37.507 22:22:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:37.507 22:22:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:37.507 22:22:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:37.507 22:22:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:37.507 22:22:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@648 -- # local es=0 00:05:37.507 22:22:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:37.507 22:22:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:05:37.507 22:22:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:37.507 22:22:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:05:37.507 22:22:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:37.507 22:22:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:37.507 22:22:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:37.507 22:22:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:37.507 [2024-07-15 22:22:01.416299] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 4027782 has claimed it. 
00:05:37.507 request: 00:05:37.507 { 00:05:37.507 "method": "framework_enable_cpumask_locks", 00:05:37.507 "req_id": 1 00:05:37.507 } 00:05:37.507 Got JSON-RPC error response 00:05:37.507 response: 00:05:37.507 { 00:05:37.507 "code": -32603, 00:05:37.507 "message": "Failed to claim CPU core: 2" 00:05:37.507 } 00:05:37.507 22:22:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:05:37.507 22:22:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # es=1 00:05:37.507 22:22:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:37.507 22:22:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:37.507 22:22:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:37.507 22:22:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 4027782 /var/tmp/spdk.sock 00:05:37.507 22:22:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 4027782 ']' 00:05:37.507 22:22:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:37.507 22:22:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:37.507 22:22:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:37.507 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:37.507 22:22:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:37.507 22:22:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:37.766 22:22:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:37.766 22:22:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:37.766 22:22:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 4027992 /var/tmp/spdk2.sock 00:05:37.766 22:22:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 4027992 ']' 00:05:37.766 22:22:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:37.766 22:22:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:37.766 22:22:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:37.766 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
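rpc_cmd in the trace is a thin wrapper over SPDK's scripts/rpc.py, so the failing call above can also be reproduced by hand against the secondary target's socket (socket path and method name as captured in this run; the expected result is the same -32603 "Failed to claim CPU core: 2" while the first target still holds /var/tmp/spdk_cpu_lock_002):

    $ scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks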
00:05:37.767 22:22:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:37.767 22:22:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:38.026 22:22:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:38.026 22:22:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:38.026 22:22:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:05:38.026 22:22:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:38.026 22:22:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:38.026 22:22:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:38.026 00:05:38.026 real 0m2.100s 00:05:38.026 user 0m0.874s 00:05:38.026 sys 0m0.162s 00:05:38.026 22:22:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:38.026 22:22:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:38.026 ************************************ 00:05:38.026 END TEST locking_overlapped_coremask_via_rpc 00:05:38.026 ************************************ 00:05:38.026 22:22:01 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:38.026 22:22:01 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:05:38.026 22:22:01 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 4027782 ]] 00:05:38.026 22:22:01 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 4027782 00:05:38.026 22:22:01 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 4027782 ']' 00:05:38.026 22:22:01 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 4027782 00:05:38.026 22:22:01 event.cpu_locks -- common/autotest_common.sh@953 -- # uname 00:05:38.026 22:22:01 event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:38.026 22:22:01 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4027782 00:05:38.026 22:22:01 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:38.026 22:22:01 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:38.026 22:22:01 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4027782' 00:05:38.026 killing process with pid 4027782 00:05:38.026 22:22:01 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 4027782 00:05:38.026 22:22:01 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 4027782 00:05:38.285 22:22:02 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 4027992 ]] 00:05:38.285 22:22:02 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 4027992 00:05:38.285 22:22:02 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 4027992 ']' 00:05:38.285 22:22:02 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 4027992 00:05:38.285 22:22:02 event.cpu_locks -- common/autotest_common.sh@953 -- # 
uname 00:05:38.285 22:22:02 event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:38.285 22:22:02 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4027992 00:05:38.285 22:22:02 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:05:38.285 22:22:02 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:05:38.285 22:22:02 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4027992' 00:05:38.285 killing process with pid 4027992 00:05:38.285 22:22:02 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 4027992 00:05:38.285 22:22:02 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 4027992 00:05:38.853 22:22:02 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:38.853 22:22:02 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:05:38.853 22:22:02 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 4027782 ]] 00:05:38.853 22:22:02 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 4027782 00:05:38.853 22:22:02 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 4027782 ']' 00:05:38.853 22:22:02 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 4027782 00:05:38.853 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (4027782) - No such process 00:05:38.853 22:22:02 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 4027782 is not found' 00:05:38.853 Process with pid 4027782 is not found 00:05:38.853 22:22:02 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 4027992 ]] 00:05:38.853 22:22:02 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 4027992 00:05:38.853 22:22:02 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 4027992 ']' 00:05:38.853 22:22:02 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 4027992 00:05:38.853 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (4027992) - No such process 00:05:38.853 22:22:02 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 4027992 is not found' 00:05:38.853 Process with pid 4027992 is not found 00:05:38.853 22:22:02 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:38.853 00:05:38.853 real 0m16.642s 00:05:38.853 user 0m28.926s 00:05:38.853 sys 0m4.699s 00:05:38.853 22:22:02 event.cpu_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:38.853 22:22:02 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:38.853 ************************************ 00:05:38.853 END TEST cpu_locks 00:05:38.853 ************************************ 00:05:38.853 22:22:02 event -- common/autotest_common.sh@1142 -- # return 0 00:05:38.853 00:05:38.853 real 0m41.287s 00:05:38.853 user 1m18.663s 00:05:38.853 sys 0m7.953s 00:05:38.853 22:22:02 event -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:38.853 22:22:02 event -- common/autotest_common.sh@10 -- # set +x 00:05:38.853 ************************************ 00:05:38.853 END TEST event 00:05:38.853 ************************************ 00:05:38.853 22:22:02 -- common/autotest_common.sh@1142 -- # return 0 00:05:38.853 22:22:02 -- spdk/autotest.sh@182 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:05:38.853 22:22:02 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:38.853 22:22:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:38.853 
22:22:02 -- common/autotest_common.sh@10 -- # set +x 00:05:38.853 ************************************ 00:05:38.853 START TEST thread 00:05:38.853 ************************************ 00:05:38.853 22:22:02 thread -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:05:38.853 * Looking for test storage... 00:05:38.853 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:05:38.853 22:22:02 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:38.853 22:22:02 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:05:38.853 22:22:02 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:38.853 22:22:02 thread -- common/autotest_common.sh@10 -- # set +x 00:05:38.853 ************************************ 00:05:38.853 START TEST thread_poller_perf 00:05:38.853 ************************************ 00:05:38.853 22:22:02 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:38.853 [2024-07-15 22:22:02.786705] Starting SPDK v24.09-pre git sha1 f8598a71f / DPDK 24.03.0 initialization... 00:05:38.853 [2024-07-15 22:22:02.786777] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4028534 ] 00:05:38.853 EAL: No free 2048 kB hugepages reported on node 1 00:05:39.112 [2024-07-15 22:22:02.844722] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:39.112 [2024-07-15 22:22:02.917921] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:39.112 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:05:40.047 ====================================== 00:05:40.047 busy:2309893790 (cyc) 00:05:40.047 total_run_count: 415000 00:05:40.047 tsc_hz: 2300000000 (cyc) 00:05:40.047 ====================================== 00:05:40.047 poller_cost: 5566 (cyc), 2420 (nsec) 00:05:40.047 00:05:40.047 real 0m1.227s 00:05:40.047 user 0m1.143s 00:05:40.047 sys 0m0.080s 00:05:40.047 22:22:03 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:40.047 22:22:03 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:40.047 ************************************ 00:05:40.047 END TEST thread_poller_perf 00:05:40.047 ************************************ 00:05:40.306 22:22:04 thread -- common/autotest_common.sh@1142 -- # return 0 00:05:40.306 22:22:04 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:40.306 22:22:04 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:05:40.306 22:22:04 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:40.306 22:22:04 thread -- common/autotest_common.sh@10 -- # set +x 00:05:40.306 ************************************ 00:05:40.306 START TEST thread_poller_perf 00:05:40.306 ************************************ 00:05:40.306 22:22:04 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:40.306 [2024-07-15 22:22:04.080166] Starting SPDK v24.09-pre git sha1 f8598a71f / DPDK 24.03.0 initialization... 00:05:40.306 [2024-07-15 22:22:04.080241] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4028718 ] 00:05:40.306 EAL: No free 2048 kB hugepages reported on node 1 00:05:40.306 [2024-07-15 22:22:04.136696] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:40.306 [2024-07-15 22:22:04.208331] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:40.306 Running 1000 pollers for 1 seconds with 0 microseconds period. 
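The poller_cost line above follows directly from the printed counters: busy cycles divided by total_run_count, then converted to nanoseconds via tsc_hz. A quick bc check:

    $ echo '2309893790 / 415000' | bc              # cycles per poll iteration
    5566
    $ echo '5566 * 1000000000 / 2300000000' | bc   # at tsc_hz 2.3 GHz -> nsec
    2420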
00:05:41.684 ====================================== 00:05:41.684 busy:2301461632 (cyc) 00:05:41.684 total_run_count: 5496000 00:05:41.684 tsc_hz: 2300000000 (cyc) 00:05:41.684 ====================================== 00:05:41.684 poller_cost: 418 (cyc), 181 (nsec) 00:05:41.684 00:05:41.684 real 0m1.218s 00:05:41.684 user 0m1.143s 00:05:41.684 sys 0m0.072s 00:05:41.684 22:22:05 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:41.684 22:22:05 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:41.684 ************************************ 00:05:41.684 END TEST thread_poller_perf 00:05:41.684 ************************************ 00:05:41.684 22:22:05 thread -- common/autotest_common.sh@1142 -- # return 0 00:05:41.684 22:22:05 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:05:41.684 00:05:41.684 real 0m2.664s 00:05:41.684 user 0m2.364s 00:05:41.684 sys 0m0.309s 00:05:41.684 22:22:05 thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:41.684 22:22:05 thread -- common/autotest_common.sh@10 -- # set +x 00:05:41.684 ************************************ 00:05:41.684 END TEST thread 00:05:41.684 ************************************ 00:05:41.684 22:22:05 -- common/autotest_common.sh@1142 -- # return 0 00:05:41.684 22:22:05 -- spdk/autotest.sh@183 -- # run_test accel /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:05:41.684 22:22:05 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:41.684 22:22:05 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:41.684 22:22:05 -- common/autotest_common.sh@10 -- # set +x 00:05:41.684 ************************************ 00:05:41.684 START TEST accel 00:05:41.684 ************************************ 00:05:41.684 22:22:05 accel -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:05:41.684 * Looking for test storage... 00:05:41.684 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:05:41.684 22:22:05 accel -- accel/accel.sh@81 -- # declare -A expected_opcs 00:05:41.684 22:22:05 accel -- accel/accel.sh@82 -- # get_expected_opcs 00:05:41.684 22:22:05 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:41.684 22:22:05 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=4029004 00:05:41.684 22:22:05 accel -- accel/accel.sh@63 -- # waitforlisten 4029004 00:05:41.684 22:22:05 accel -- common/autotest_common.sh@829 -- # '[' -z 4029004 ']' 00:05:41.684 22:22:05 accel -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:41.684 22:22:05 accel -- accel/accel.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:05:41.684 22:22:05 accel -- accel/accel.sh@61 -- # build_accel_config 00:05:41.684 22:22:05 accel -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:41.684 22:22:05 accel -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:41.684 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
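Same check for the 0-microsecond-period run above: the per-poll cost drops to 418 cycles (about 181 ns at 2.3 GHz), an order of magnitude cheaper than the 1 us timed pollers measured before.

    $ echo '2301461632 / 5496000' | bc    # cycles per poll, 0 us period
    418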
00:05:41.684 22:22:05 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:41.684 22:22:05 accel -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:41.684 22:22:05 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:41.684 22:22:05 accel -- common/autotest_common.sh@10 -- # set +x 00:05:41.684 22:22:05 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:41.684 22:22:05 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:41.684 22:22:05 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:41.684 22:22:05 accel -- accel/accel.sh@40 -- # local IFS=, 00:05:41.684 22:22:05 accel -- accel/accel.sh@41 -- # jq -r . 00:05:41.684 [2024-07-15 22:22:05.495223] Starting SPDK v24.09-pre git sha1 f8598a71f / DPDK 24.03.0 initialization... 00:05:41.684 [2024-07-15 22:22:05.495282] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4029004 ] 00:05:41.684 EAL: No free 2048 kB hugepages reported on node 1 00:05:41.684 [2024-07-15 22:22:05.550493] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:41.684 [2024-07-15 22:22:05.631220] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:42.620 22:22:06 accel -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:42.620 22:22:06 accel -- common/autotest_common.sh@862 -- # return 0 00:05:42.620 22:22:06 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:05:42.620 22:22:06 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:05:42.620 22:22:06 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:05:42.620 22:22:06 accel -- accel/accel.sh@68 -- # [[ -n '' ]] 00:05:42.620 22:22:06 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:05:42.620 22:22:06 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:05:42.620 22:22:06 accel -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:42.620 22:22:06 accel -- common/autotest_common.sh@10 -- # set +x 00:05:42.620 22:22:06 accel -- accel/accel.sh@70 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:05:42.620 22:22:06 accel -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:42.620 22:22:06 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:42.620 22:22:06 accel -- accel/accel.sh@72 -- # IFS== 00:05:42.620 22:22:06 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:42.620 22:22:06 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:42.620 22:22:06 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:42.620 22:22:06 accel -- accel/accel.sh@72 -- # IFS== 00:05:42.620 22:22:06 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:42.620 22:22:06 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:42.620 22:22:06 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:42.620 22:22:06 accel -- accel/accel.sh@72 -- # IFS== 00:05:42.620 22:22:06 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:42.620 22:22:06 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:42.620 22:22:06 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:42.620 22:22:06 accel -- accel/accel.sh@72 -- # IFS== 00:05:42.620 22:22:06 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:42.620 22:22:06 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:42.620 22:22:06 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:42.620 22:22:06 accel -- accel/accel.sh@72 -- # IFS== 00:05:42.620 22:22:06 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:42.620 22:22:06 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:42.620 22:22:06 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:42.620 22:22:06 accel -- accel/accel.sh@72 -- # IFS== 00:05:42.620 22:22:06 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:42.620 22:22:06 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:42.620 22:22:06 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:42.620 22:22:06 accel -- accel/accel.sh@72 -- # IFS== 00:05:42.620 22:22:06 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:42.620 22:22:06 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:42.620 22:22:06 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:42.620 22:22:06 accel -- accel/accel.sh@72 -- # IFS== 00:05:42.620 22:22:06 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:42.620 22:22:06 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:42.620 22:22:06 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:42.620 22:22:06 accel -- accel/accel.sh@72 -- # IFS== 00:05:42.620 22:22:06 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:42.620 22:22:06 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:42.620 22:22:06 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:42.620 22:22:06 accel -- accel/accel.sh@72 -- # IFS== 00:05:42.620 22:22:06 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:42.620 22:22:06 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:42.620 22:22:06 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:42.620 22:22:06 accel -- accel/accel.sh@72 -- # IFS== 00:05:42.620 22:22:06 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:42.620 22:22:06 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:42.620 22:22:06 accel -- accel/accel.sh@71 -- # for opc_opt in 
"${exp_opcs[@]}" 00:05:42.620 22:22:06 accel -- accel/accel.sh@72 -- # IFS== 00:05:42.620 22:22:06 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:42.620 22:22:06 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:42.620 22:22:06 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:42.620 22:22:06 accel -- accel/accel.sh@72 -- # IFS== 00:05:42.620 22:22:06 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:42.620 22:22:06 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:42.620 22:22:06 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:42.620 22:22:06 accel -- accel/accel.sh@72 -- # IFS== 00:05:42.620 22:22:06 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:42.620 22:22:06 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:42.620 22:22:06 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:42.620 22:22:06 accel -- accel/accel.sh@72 -- # IFS== 00:05:42.620 22:22:06 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:42.620 22:22:06 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:42.620 22:22:06 accel -- accel/accel.sh@75 -- # killprocess 4029004 00:05:42.620 22:22:06 accel -- common/autotest_common.sh@948 -- # '[' -z 4029004 ']' 00:05:42.620 22:22:06 accel -- common/autotest_common.sh@952 -- # kill -0 4029004 00:05:42.620 22:22:06 accel -- common/autotest_common.sh@953 -- # uname 00:05:42.620 22:22:06 accel -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:42.620 22:22:06 accel -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4029004 00:05:42.620 22:22:06 accel -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:42.620 22:22:06 accel -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:42.620 22:22:06 accel -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4029004' 00:05:42.620 killing process with pid 4029004 00:05:42.620 22:22:06 accel -- common/autotest_common.sh@967 -- # kill 4029004 00:05:42.620 22:22:06 accel -- common/autotest_common.sh@972 -- # wait 4029004 00:05:42.879 22:22:06 accel -- accel/accel.sh@76 -- # trap - ERR 00:05:42.879 22:22:06 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:05:42.879 22:22:06 accel -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:05:42.879 22:22:06 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:42.879 22:22:06 accel -- common/autotest_common.sh@10 -- # set +x 00:05:42.879 22:22:06 accel.accel_help -- common/autotest_common.sh@1123 -- # accel_perf -h 00:05:42.879 22:22:06 accel.accel_help -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:05:42.879 22:22:06 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config 00:05:42.879 22:22:06 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:42.879 22:22:06 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:42.879 22:22:06 accel.accel_help -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:42.879 22:22:06 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:42.879 22:22:06 accel.accel_help -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:42.879 22:22:06 accel.accel_help -- accel/accel.sh@40 -- # local IFS=, 00:05:42.879 22:22:06 accel.accel_help -- accel/accel.sh@41 -- # jq -r . 
00:05:42.879 22:22:06 accel.accel_help -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:42.879 22:22:06 accel.accel_help -- common/autotest_common.sh@10 -- # set +x 00:05:42.879 22:22:06 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:42.879 22:22:06 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:05:42.879 22:22:06 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:05:42.879 22:22:06 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:42.879 22:22:06 accel -- common/autotest_common.sh@10 -- # set +x 00:05:42.879 ************************************ 00:05:42.879 START TEST accel_missing_filename 00:05:42.879 ************************************ 00:05:42.879 22:22:06 accel.accel_missing_filename -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress 00:05:42.879 22:22:06 accel.accel_missing_filename -- common/autotest_common.sh@648 -- # local es=0 00:05:42.879 22:22:06 accel.accel_missing_filename -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress 00:05:42.879 22:22:06 accel.accel_missing_filename -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:05:42.879 22:22:06 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:42.879 22:22:06 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # type -t accel_perf 00:05:42.879 22:22:06 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:42.879 22:22:06 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress 00:05:42.879 22:22:06 accel.accel_missing_filename -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:05:42.879 22:22:06 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config 00:05:42.879 22:22:06 accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:42.879 22:22:06 accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:42.879 22:22:06 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:42.879 22:22:06 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:42.879 22:22:06 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:42.879 22:22:06 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=, 00:05:42.879 22:22:06 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r . 00:05:43.137 [2024-07-15 22:22:06.865076] Starting SPDK v24.09-pre git sha1 f8598a71f / DPDK 24.03.0 initialization... 00:05:43.137 [2024-07-15 22:22:06.865126] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4029298 ] 00:05:43.137 EAL: No free 2048 kB hugepages reported on node 1 00:05:43.137 [2024-07-15 22:22:06.920936] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:43.137 [2024-07-15 22:22:06.994328] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:43.137 [2024-07-15 22:22:07.035182] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:43.137 [2024-07-15 22:22:07.094892] accel_perf.c:1463:main: *ERROR*: ERROR starting application 00:05:43.395 A filename is required. 
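The exp_opcs loop traced earlier (accel.sh@70-73) flattens the accel_get_opc_assignments JSON into opc=module lines and then splits each on '=' via IFS. With a made-up two-opcode payload (the real reply lists every supported opcode) the filter behaves like this:

    $ echo '{"copy": "software", "fill": "software"}' | \
          jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]'
    copy=software
    fill=software
    # each emitted line is consumed by: IFS== read -r opc module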
00:05:43.395 22:22:07 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # es=234 00:05:43.395 22:22:07 accel.accel_missing_filename -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:43.395 22:22:07 accel.accel_missing_filename -- common/autotest_common.sh@660 -- # es=106 00:05:43.395 22:22:07 accel.accel_missing_filename -- common/autotest_common.sh@661 -- # case "$es" in 00:05:43.395 22:22:07 accel.accel_missing_filename -- common/autotest_common.sh@668 -- # es=1 00:05:43.395 22:22:07 accel.accel_missing_filename -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:43.395 00:05:43.395 real 0m0.330s 00:05:43.395 user 0m0.249s 00:05:43.395 sys 0m0.118s 00:05:43.395 22:22:07 accel.accel_missing_filename -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:43.395 22:22:07 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x 00:05:43.395 ************************************ 00:05:43.395 END TEST accel_missing_filename 00:05:43.395 ************************************ 00:05:43.395 22:22:07 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:43.395 22:22:07 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:43.395 22:22:07 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:05:43.395 22:22:07 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:43.395 22:22:07 accel -- common/autotest_common.sh@10 -- # set +x 00:05:43.395 ************************************ 00:05:43.395 START TEST accel_compress_verify 00:05:43.395 ************************************ 00:05:43.395 22:22:07 accel.accel_compress_verify -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:43.395 22:22:07 accel.accel_compress_verify -- common/autotest_common.sh@648 -- # local es=0 00:05:43.395 22:22:07 accel.accel_compress_verify -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:43.395 22:22:07 accel.accel_compress_verify -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:05:43.395 22:22:07 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:43.395 22:22:07 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # type -t accel_perf 00:05:43.395 22:22:07 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:43.395 22:22:07 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:43.395 22:22:07 accel.accel_compress_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:43.395 22:22:07 accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config 00:05:43.395 22:22:07 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:43.395 22:22:07 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:43.395 22:22:07 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:43.395 22:22:07 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:43.395 22:22:07 
accel.accel_compress_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:43.395 22:22:07 accel.accel_compress_verify -- accel/accel.sh@40 -- # local IFS=, 00:05:43.395 22:22:07 accel.accel_compress_verify -- accel/accel.sh@41 -- # jq -r . 00:05:43.395 [2024-07-15 22:22:07.265748] Starting SPDK v24.09-pre git sha1 f8598a71f / DPDK 24.03.0 initialization... 00:05:43.395 [2024-07-15 22:22:07.265817] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4029377 ] 00:05:43.395 EAL: No free 2048 kB hugepages reported on node 1 00:05:43.395 [2024-07-15 22:22:07.322056] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:43.653 [2024-07-15 22:22:07.397687] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:43.653 [2024-07-15 22:22:07.438898] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:43.653 [2024-07-15 22:22:07.498971] accel_perf.c:1463:main: *ERROR*: ERROR starting application 00:05:43.653 00:05:43.653 Compression does not support the verify option, aborting. 00:05:43.653 22:22:07 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # es=161 00:05:43.653 22:22:07 accel.accel_compress_verify -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:43.653 22:22:07 accel.accel_compress_verify -- common/autotest_common.sh@660 -- # es=33 00:05:43.654 22:22:07 accel.accel_compress_verify -- common/autotest_common.sh@661 -- # case "$es" in 00:05:43.654 22:22:07 accel.accel_compress_verify -- common/autotest_common.sh@668 -- # es=1 00:05:43.654 22:22:07 accel.accel_compress_verify -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:43.654 00:05:43.654 real 0m0.335s 00:05:43.654 user 0m0.258s 00:05:43.654 sys 0m0.118s 00:05:43.654 22:22:07 accel.accel_compress_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:43.654 22:22:07 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x 00:05:43.654 ************************************ 00:05:43.654 END TEST accel_compress_verify 00:05:43.654 ************************************ 00:05:43.654 22:22:07 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:43.654 22:22:07 accel -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:05:43.654 22:22:07 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:05:43.654 22:22:07 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:43.654 22:22:07 accel -- common/autotest_common.sh@10 -- # set +x 00:05:43.912 ************************************ 00:05:43.912 START TEST accel_wrong_workload 00:05:43.912 ************************************ 00:05:43.912 22:22:07 accel.accel_wrong_workload -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w foobar 00:05:43.912 22:22:07 accel.accel_wrong_workload -- common/autotest_common.sh@648 -- # local es=0 00:05:43.912 22:22:07 accel.accel_wrong_workload -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:05:43.912 22:22:07 accel.accel_wrong_workload -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:05:43.912 22:22:07 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:43.912 22:22:07 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # type -t accel_perf 00:05:43.912 22:22:07 accel.accel_wrong_workload -- 
common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:43.912 22:22:07 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w foobar 00:05:43.912 22:22:07 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:05:43.912 22:22:07 accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config 00:05:43.912 22:22:07 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:43.912 22:22:07 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:43.912 22:22:07 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:43.912 22:22:07 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:43.912 22:22:07 accel.accel_wrong_workload -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:43.912 22:22:07 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=, 00:05:43.912 22:22:07 accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r . 00:05:43.912 Unsupported workload type: foobar 00:05:43.912 [2024-07-15 22:22:07.667568] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:05:43.912 accel_perf options: 00:05:43.912 [-h help message] 00:05:43.912 [-q queue depth per core] 00:05:43.912 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:05:43.912 [-T number of threads per core 00:05:43.913 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:05:43.913 [-t time in seconds] 00:05:43.913 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:05:43.913 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:05:43.913 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:05:43.913 [-l for compress/decompress workloads, name of uncompressed input file 00:05:43.913 [-S for crc32c workload, use this seed value (default 0) 00:05:43.913 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:05:43.913 [-f for fill workload, use this BYTE value (default 255) 00:05:43.913 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:05:43.913 [-y verify result if this switch is on] 00:05:43.913 [-a tasks to allocate per core (default: same value as -q)] 00:05:43.913 Can be used to spread operations across a wider range of memory. 
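These negative tests pass because the NOT wrapper inverts the wrapped command's status: accel_perf must fail for -w foobar here, just as it had to fail for the missing -l file above. Stripped of the es bookkeeping in the trace, NOT is roughly:

    # simplified sketch: succeed only when the wrapped command fails
    NOT() {
        if "$@"; then
            return 1    # command unexpectedly succeeded -> test fails
        fi
        return 0        # command failed, as the negative test expects
    }

and, per the usage text just printed, a well-formed run of the same binary only needs a supported workload, e.g. (flags taken from the help above; actually running it still assumes the usual hugepage/EAL setup):

    $ ./build/examples/accel_perf -t 1 -w copy -q 64 -o 4096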
00:05:43.913 22:22:07 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # es=1 00:05:43.913 22:22:07 accel.accel_wrong_workload -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:43.913 22:22:07 accel.accel_wrong_workload -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:43.913 22:22:07 accel.accel_wrong_workload -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:43.913 00:05:43.913 real 0m0.034s 00:05:43.913 user 0m0.046s 00:05:43.913 sys 0m0.015s 00:05:43.913 22:22:07 accel.accel_wrong_workload -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:43.913 22:22:07 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x 00:05:43.913 ************************************ 00:05:43.913 END TEST accel_wrong_workload 00:05:43.913 ************************************ 00:05:43.913 22:22:07 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:43.913 22:22:07 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:05:43.913 22:22:07 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:05:43.913 22:22:07 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:43.913 22:22:07 accel -- common/autotest_common.sh@10 -- # set +x 00:05:43.913 ************************************ 00:05:43.913 START TEST accel_negative_buffers 00:05:43.913 ************************************ 00:05:43.913 22:22:07 accel.accel_negative_buffers -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:05:43.913 22:22:07 accel.accel_negative_buffers -- common/autotest_common.sh@648 -- # local es=0 00:05:43.913 22:22:07 accel.accel_negative_buffers -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:05:43.913 22:22:07 accel.accel_negative_buffers -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:05:43.913 22:22:07 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:43.913 22:22:07 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # type -t accel_perf 00:05:43.913 22:22:07 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:43.913 22:22:07 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w xor -y -x -1 00:05:43.913 22:22:07 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:05:43.913 22:22:07 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config 00:05:43.913 22:22:07 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:43.913 22:22:07 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:43.913 22:22:07 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:43.913 22:22:07 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:43.913 22:22:07 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:43.913 22:22:07 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=, 00:05:43.913 22:22:07 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r . 00:05:43.913 -x option must be non-negative. 
00:05:43.913 [2024-07-15 22:22:07.766377] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:05:43.913 accel_perf options: 00:05:43.913 [-h help message] 00:05:43.913 [-q queue depth per core] 00:05:43.913 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:05:43.913 [-T number of threads per core 00:05:43.913 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:05:43.913 [-t time in seconds] 00:05:43.913 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:05:43.913 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:05:43.913 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:05:43.913 [-l for compress/decompress workloads, name of uncompressed input file 00:05:43.913 [-S for crc32c workload, use this seed value (default 0) 00:05:43.913 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:05:43.913 [-f for fill workload, use this BYTE value (default 255) 00:05:43.913 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:05:43.913 [-y verify result if this switch is on] 00:05:43.913 [-a tasks to allocate per core (default: same value as -q)] 00:05:43.913 Can be used to spread operations across a wider range of memory. 00:05:43.913 22:22:07 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # es=1 00:05:43.913 22:22:07 accel.accel_negative_buffers -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:43.913 22:22:07 accel.accel_negative_buffers -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:43.913 22:22:07 accel.accel_negative_buffers -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:43.913 00:05:43.913 real 0m0.035s 00:05:43.913 user 0m0.024s 00:05:43.913 sys 0m0.011s 00:05:43.913 22:22:07 accel.accel_negative_buffers -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:43.913 22:22:07 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x 00:05:43.913 ************************************ 00:05:43.913 END TEST accel_negative_buffers 00:05:43.913 ************************************ 00:05:43.913 Error: writing output failed: Broken pipe 00:05:43.913 22:22:07 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:43.913 22:22:07 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:05:43.913 22:22:07 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:05:43.913 22:22:07 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:43.913 22:22:07 accel -- common/autotest_common.sh@10 -- # set +x 00:05:43.913 ************************************ 00:05:43.913 START TEST accel_crc32c 00:05:43.913 ************************************ 00:05:43.913 22:22:07 accel.accel_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -S 32 -y 00:05:43.913 22:22:07 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:05:43.913 22:22:07 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module 00:05:43.913 22:22:07 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:43.913 22:22:07 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:43.913 22:22:07 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:05:43.913 22:22:07 accel.accel_crc32c -- accel/accel.sh@12 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:05:43.913 22:22:07 accel.accel_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:05:43.913 22:22:07 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:43.913 22:22:07 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:43.913 22:22:07 accel.accel_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:43.913 22:22:07 accel.accel_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:43.913 22:22:07 accel.accel_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:43.913 22:22:07 accel.accel_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:05:43.913 22:22:07 accel.accel_crc32c -- accel/accel.sh@41 -- # jq -r . 00:05:43.913 [2024-07-15 22:22:07.868399] Starting SPDK v24.09-pre git sha1 f8598a71f / DPDK 24.03.0 initialization... 00:05:43.913 [2024-07-15 22:22:07.868455] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4029450 ] 00:05:44.173 EAL: No free 2048 kB hugepages reported on node 1 00:05:44.173 [2024-07-15 22:22:07.926838] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:44.173 [2024-07-15 22:22:08.008775] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:44.173 22:22:08 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:44.173 22:22:08 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:44.173 22:22:08 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:44.173 22:22:08 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:44.173 22:22:08 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:44.173 22:22:08 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:44.173 22:22:08 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:44.173 22:22:08 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:44.173 22:22:08 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1 00:05:44.173 22:22:08 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:44.173 22:22:08 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:44.173 22:22:08 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:44.173 22:22:08 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:44.173 22:22:08 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:44.173 22:22:08 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:44.173 22:22:08 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:44.173 22:22:08 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:44.173 22:22:08 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:44.173 22:22:08 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:44.173 22:22:08 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:44.173 22:22:08 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c 00:05:44.173 22:22:08 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:44.173 22:22:08 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c 00:05:44.173 22:22:08 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:44.173 22:22:08 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:44.173 22:22:08 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:05:44.173 22:22:08 accel.accel_crc32c -- accel/accel.sh@21 -- # case 
"$var" in 00:05:44.173 22:22:08 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:44.173 22:22:08 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:44.173 22:22:08 accel.accel_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:44.173 22:22:08 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:44.173 22:22:08 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:44.173 22:22:08 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:44.173 22:22:08 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:44.173 22:22:08 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:44.173 22:22:08 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:44.173 22:22:08 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:44.173 22:22:08 accel.accel_crc32c -- accel/accel.sh@20 -- # val=software 00:05:44.173 22:22:08 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:44.173 22:22:08 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:05:44.173 22:22:08 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:44.173 22:22:08 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:44.173 22:22:08 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:05:44.173 22:22:08 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:44.173 22:22:08 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:44.173 22:22:08 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:44.173 22:22:08 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:05:44.173 22:22:08 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:44.173 22:22:08 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:44.173 22:22:08 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:44.173 22:22:08 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1 00:05:44.173 22:22:08 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:44.173 22:22:08 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:44.173 22:22:08 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:44.173 22:22:08 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:05:44.173 22:22:08 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:44.173 22:22:08 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:44.173 22:22:08 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:44.173 22:22:08 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes 00:05:44.173 22:22:08 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:44.173 22:22:08 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:44.173 22:22:08 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:44.173 22:22:08 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:44.173 22:22:08 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:44.173 22:22:08 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:44.173 22:22:08 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:44.173 22:22:08 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:44.173 22:22:08 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:44.173 22:22:08 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:44.173 22:22:08 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:45.560 22:22:09 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:45.560 22:22:09 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 
00:05:45.560 22:22:09 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:45.560 22:22:09 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:45.560 22:22:09 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:45.560 22:22:09 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:45.560 22:22:09 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:45.560 22:22:09 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:45.560 22:22:09 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:45.560 22:22:09 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:45.560 22:22:09 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:45.560 22:22:09 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:45.560 22:22:09 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:45.560 22:22:09 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:45.560 22:22:09 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:45.560 22:22:09 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:45.560 22:22:09 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:45.560 22:22:09 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:45.560 22:22:09 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:45.560 22:22:09 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:45.560 22:22:09 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:45.560 22:22:09 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:45.560 22:22:09 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:45.560 22:22:09 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:45.560 22:22:09 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:45.560 22:22:09 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:05:45.560 22:22:09 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:45.560 00:05:45.560 real 0m1.347s 00:05:45.560 user 0m1.241s 00:05:45.560 sys 0m0.119s 00:05:45.560 22:22:09 accel.accel_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:45.560 22:22:09 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x 00:05:45.560 ************************************ 00:05:45.560 END TEST accel_crc32c 00:05:45.560 ************************************ 00:05:45.560 22:22:09 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:45.560 22:22:09 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:05:45.560 22:22:09 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:05:45.560 22:22:09 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:45.560 22:22:09 accel -- common/autotest_common.sh@10 -- # set +x 00:05:45.560 ************************************ 00:05:45.560 START TEST accel_crc32c_C2 00:05:45.560 ************************************ 00:05:45.560 22:22:09 accel.accel_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -y -C 2 00:05:45.560 22:22:09 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:05:45.560 22:22:09 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:05:45.560 22:22:09 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:45.560 22:22:09 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:45.560 22:22:09 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:05:45.560 22:22:09 accel.accel_crc32c_C2 
-- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:05:45.560 22:22:09 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:05:45.560 22:22:09 accel.accel_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:45.560 22:22:09 accel.accel_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:45.560 22:22:09 accel.accel_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:45.560 22:22:09 accel.accel_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:45.560 22:22:09 accel.accel_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:45.560 22:22:09 accel.accel_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:05:45.560 22:22:09 accel.accel_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:05:45.560 [2024-07-15 22:22:09.283033] Starting SPDK v24.09-pre git sha1 f8598a71f / DPDK 24.03.0 initialization... 00:05:45.560 [2024-07-15 22:22:09.283096] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4029697 ] 00:05:45.560 EAL: No free 2048 kB hugepages reported on node 1 00:05:45.560 [2024-07-15 22:22:09.340399] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:45.560 [2024-07-15 22:22:09.413939] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:45.560 22:22:09 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:45.560 22:22:09 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:45.560 22:22:09 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:45.560 22:22:09 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:45.560 22:22:09 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:45.560 22:22:09 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:45.560 22:22:09 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:45.560 22:22:09 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:45.560 22:22:09 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:05:45.560 22:22:09 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:45.560 22:22:09 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:45.560 22:22:09 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:45.560 22:22:09 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:45.560 22:22:09 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:45.560 22:22:09 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:45.560 22:22:09 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:45.560 22:22:09 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:45.560 22:22:09 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:45.560 22:22:09 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:45.560 22:22:09 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:45.560 22:22:09 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c 00:05:45.560 22:22:09 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:45.560 22:22:09 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:05:45.560 22:22:09 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:45.560 22:22:09 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:45.560 22:22:09 
accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:05:45.560 22:22:09 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:45.560 22:22:09 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:45.560 22:22:09 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:45.560 22:22:09 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:45.560 22:22:09 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:45.560 22:22:09 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:45.560 22:22:09 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:45.560 22:22:09 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:45.560 22:22:09 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:45.560 22:22:09 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:45.560 22:22:09 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:45.560 22:22:09 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:05:45.560 22:22:09 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:45.560 22:22:09 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:05:45.560 22:22:09 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:45.560 22:22:09 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:45.560 22:22:09 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:05:45.560 22:22:09 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:45.560 22:22:09 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:45.560 22:22:09 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:45.560 22:22:09 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:05:45.560 22:22:09 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:45.560 22:22:09 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:45.560 22:22:09 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:45.560 22:22:09 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:05:45.560 22:22:09 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:45.560 22:22:09 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:45.560 22:22:09 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:45.560 22:22:09 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:05:45.560 22:22:09 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:45.560 22:22:09 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:45.560 22:22:09 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:45.560 22:22:09 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:05:45.560 22:22:09 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:45.560 22:22:09 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:45.560 22:22:09 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:45.560 22:22:09 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:45.560 22:22:09 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:45.560 22:22:09 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:45.560 22:22:09 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:45.560 22:22:09 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:45.560 22:22:09 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:45.560 22:22:09 accel.accel_crc32c_C2 -- accel/accel.sh@19 
-- # IFS=: 00:05:45.560 22:22:09 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:46.935 22:22:10 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:46.935 22:22:10 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:46.935 22:22:10 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:46.935 22:22:10 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:46.935 22:22:10 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:46.935 22:22:10 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:46.935 22:22:10 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:46.935 22:22:10 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:46.935 22:22:10 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:46.935 22:22:10 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:46.935 22:22:10 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:46.935 22:22:10 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:46.935 22:22:10 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:46.935 22:22:10 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:46.935 22:22:10 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:46.935 22:22:10 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:46.935 22:22:10 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:46.935 22:22:10 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:46.935 22:22:10 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:46.935 22:22:10 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:46.935 22:22:10 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:46.935 22:22:10 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:46.935 22:22:10 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:46.935 22:22:10 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:46.935 22:22:10 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:46.935 22:22:10 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:05:46.935 22:22:10 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:46.935 00:05:46.935 real 0m1.341s 00:05:46.935 user 0m1.233s 00:05:46.935 sys 0m0.120s 00:05:46.935 22:22:10 accel.accel_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:46.935 22:22:10 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:05:46.935 ************************************ 00:05:46.935 END TEST accel_crc32c_C2 00:05:46.935 ************************************ 00:05:46.935 22:22:10 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:46.935 22:22:10 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:05:46.935 22:22:10 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:05:46.935 22:22:10 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:46.935 22:22:10 accel -- common/autotest_common.sh@10 -- # set +x 00:05:46.935 ************************************ 00:05:46.935 START TEST accel_copy 00:05:46.935 ************************************ 00:05:46.935 22:22:10 accel.accel_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy -y 00:05:46.935 22:22:10 accel.accel_copy -- accel/accel.sh@16 -- # local accel_opc 00:05:46.935 22:22:10 accel.accel_copy -- accel/accel.sh@17 -- # local accel_module 
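The -C 2 variant that just passed exercises the same CRC-32C math, only carried across more than one buffer. The accel_copy test starting here is simpler still: the software path of the copy opcode is a plain memory copy, and -y on the accel_perf command line requests result verification (that reading of -y is stated here as an assumption rather than quoted from the log). One iteration amounts to roughly this sketch:

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    /* One illustrative copy-workload iteration: fill a source block,
     * copy it, verify the destination. The block size mirrors the
     * '4096 bytes' value read out in the trace. */
    int main(void)
    {
        enum { BLOCK = 4096 };
        unsigned char *src = malloc(BLOCK), *dst = malloc(BLOCK);

        if (src == NULL || dst == NULL)
            return 1;
        for (int i = 0; i < BLOCK; i++)
            src[i] = (unsigned char)rand();

        memcpy(dst, src, BLOCK);            /* the copy opcode, software path */

        if (memcmp(src, dst, BLOCK) != 0) { /* the verify step */
            fprintf(stderr, "copy verification failed\n");
            return 1;
        }
        puts("copy verified");
        free(src);
        free(dst);
        return 0;
    }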
00:05:46.935 22:22:10 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:46.935 22:22:10 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:46.935 22:22:10 accel.accel_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:05:46.935 22:22:10 accel.accel_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:05:46.935 22:22:10 accel.accel_copy -- accel/accel.sh@12 -- # build_accel_config 00:05:46.935 22:22:10 accel.accel_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:46.936 22:22:10 accel.accel_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:46.936 22:22:10 accel.accel_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:46.936 22:22:10 accel.accel_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:46.936 22:22:10 accel.accel_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:46.936 22:22:10 accel.accel_copy -- accel/accel.sh@40 -- # local IFS=, 00:05:46.936 22:22:10 accel.accel_copy -- accel/accel.sh@41 -- # jq -r . 00:05:46.936 [2024-07-15 22:22:10.690525] Starting SPDK v24.09-pre git sha1 f8598a71f / DPDK 24.03.0 initialization... 00:05:46.936 [2024-07-15 22:22:10.690592] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4029942 ] 00:05:46.936 EAL: No free 2048 kB hugepages reported on node 1 00:05:46.936 [2024-07-15 22:22:10.746782] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:46.936 [2024-07-15 22:22:10.821462] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:46.936 22:22:10 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:46.936 22:22:10 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:46.936 22:22:10 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:46.936 22:22:10 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:46.936 22:22:10 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:46.936 22:22:10 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:46.936 22:22:10 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:46.936 22:22:10 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:46.936 22:22:10 accel.accel_copy -- accel/accel.sh@20 -- # val=0x1 00:05:46.936 22:22:10 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:46.936 22:22:10 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:46.936 22:22:10 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:46.936 22:22:10 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:46.936 22:22:10 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:46.936 22:22:10 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:46.936 22:22:10 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:46.936 22:22:10 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:46.936 22:22:10 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:46.936 22:22:10 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:46.936 22:22:10 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:46.936 22:22:10 accel.accel_copy -- accel/accel.sh@20 -- # val=copy 00:05:46.936 22:22:10 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:46.936 22:22:10 accel.accel_copy -- accel/accel.sh@23 -- # accel_opc=copy 00:05:46.936 22:22:10 accel.accel_copy -- accel/accel.sh@19 -- # 
IFS=: 00:05:46.936 22:22:10 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:46.936 22:22:10 accel.accel_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:46.936 22:22:10 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:46.936 22:22:10 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:46.936 22:22:10 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:46.936 22:22:10 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:46.936 22:22:10 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:46.936 22:22:10 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:46.936 22:22:10 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:46.936 22:22:10 accel.accel_copy -- accel/accel.sh@20 -- # val=software 00:05:46.936 22:22:10 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:46.936 22:22:10 accel.accel_copy -- accel/accel.sh@22 -- # accel_module=software 00:05:46.936 22:22:10 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:46.936 22:22:10 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:46.936 22:22:10 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:05:46.936 22:22:10 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:46.936 22:22:10 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:46.936 22:22:10 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:46.936 22:22:10 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:05:46.936 22:22:10 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:46.936 22:22:10 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:46.936 22:22:10 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:46.936 22:22:10 accel.accel_copy -- accel/accel.sh@20 -- # val=1 00:05:46.936 22:22:10 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:46.936 22:22:10 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:46.936 22:22:10 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:46.936 22:22:10 accel.accel_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:05:46.936 22:22:10 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:46.936 22:22:10 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:46.936 22:22:10 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:46.936 22:22:10 accel.accel_copy -- accel/accel.sh@20 -- # val=Yes 00:05:46.936 22:22:10 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:46.936 22:22:10 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:46.936 22:22:10 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:46.936 22:22:10 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:46.936 22:22:10 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:46.936 22:22:10 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:46.936 22:22:10 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:46.936 22:22:10 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:46.936 22:22:10 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:46.936 22:22:10 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:46.936 22:22:10 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:48.351 22:22:11 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:48.351 22:22:11 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:48.351 22:22:11 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:48.351 22:22:11 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:48.351 
22:22:11 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:48.351 22:22:11 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:48.351 22:22:11 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:48.351 22:22:11 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:48.351 22:22:11 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:48.351 22:22:11 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:48.351 22:22:11 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:48.351 22:22:11 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:48.351 22:22:11 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:48.351 22:22:12 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:48.351 22:22:12 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:48.351 22:22:12 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:48.351 22:22:12 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:48.351 22:22:12 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:48.351 22:22:12 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:48.351 22:22:12 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:48.351 22:22:12 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:48.351 22:22:12 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:48.351 22:22:12 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:48.351 22:22:12 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:48.351 22:22:12 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:48.351 22:22:12 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]] 00:05:48.351 22:22:12 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:48.351 00:05:48.351 real 0m1.340s 00:05:48.351 user 0m1.235s 00:05:48.351 sys 0m0.117s 00:05:48.351 22:22:12 accel.accel_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:48.351 22:22:12 accel.accel_copy -- common/autotest_common.sh@10 -- # set +x 00:05:48.351 ************************************ 00:05:48.351 END TEST accel_copy 00:05:48.351 ************************************ 00:05:48.351 22:22:12 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:48.351 22:22:12 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:48.351 22:22:12 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:05:48.351 22:22:12 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:48.351 22:22:12 accel -- common/autotest_common.sh@10 -- # set +x 00:05:48.351 ************************************ 00:05:48.351 START TEST accel_fill 00:05:48.351 ************************************ 00:05:48.351 22:22:12 accel.accel_fill -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:48.351 22:22:12 accel.accel_fill -- accel/accel.sh@16 -- # local accel_opc 00:05:48.351 22:22:12 accel.accel_fill -- accel/accel.sh@17 -- # local accel_module 00:05:48.351 22:22:12 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:48.351 22:22:12 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:48.351 22:22:12 accel.accel_fill -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:48.351 22:22:12 accel.accel_fill -- accel/accel.sh@12 -- # build_accel_config 00:05:48.351 22:22:12 accel.accel_fill -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 
-y 00:05:48.351 22:22:12 accel.accel_fill -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:48.351 22:22:12 accel.accel_fill -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:48.351 22:22:12 accel.accel_fill -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:48.351 22:22:12 accel.accel_fill -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:48.351 22:22:12 accel.accel_fill -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:48.351 22:22:12 accel.accel_fill -- accel/accel.sh@40 -- # local IFS=, 00:05:48.351 22:22:12 accel.accel_fill -- accel/accel.sh@41 -- # jq -r . 00:05:48.351 [2024-07-15 22:22:12.096340] Starting SPDK v24.09-pre git sha1 f8598a71f / DPDK 24.03.0 initialization... 00:05:48.351 [2024-07-15 22:22:12.096390] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4030205 ] 00:05:48.351 EAL: No free 2048 kB hugepages reported on node 1 00:05:48.351 [2024-07-15 22:22:12.152121] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:48.351 [2024-07-15 22:22:12.226064] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:48.351 22:22:12 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:48.351 22:22:12 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:48.351 22:22:12 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:48.351 22:22:12 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:48.351 22:22:12 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:48.351 22:22:12 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:48.352 22:22:12 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:48.352 22:22:12 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:48.352 22:22:12 accel.accel_fill -- accel/accel.sh@20 -- # val=0x1 00:05:48.352 22:22:12 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:48.352 22:22:12 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:48.352 22:22:12 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:48.352 22:22:12 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:48.352 22:22:12 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:48.352 22:22:12 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:48.352 22:22:12 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:48.352 22:22:12 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:48.352 22:22:12 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:48.352 22:22:12 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:48.352 22:22:12 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:48.352 22:22:12 accel.accel_fill -- accel/accel.sh@20 -- # val=fill 00:05:48.352 22:22:12 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:48.352 22:22:12 accel.accel_fill -- accel/accel.sh@23 -- # accel_opc=fill 00:05:48.352 22:22:12 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:48.352 22:22:12 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:48.352 22:22:12 accel.accel_fill -- accel/accel.sh@20 -- # val=0x80 00:05:48.352 22:22:12 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:48.352 22:22:12 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:48.352 22:22:12 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:48.352 22:22:12 accel.accel_fill -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:48.352 22:22:12 
accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:48.352 22:22:12 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:48.352 22:22:12 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:48.352 22:22:12 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:48.352 22:22:12 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:48.352 22:22:12 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:48.352 22:22:12 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:48.352 22:22:12 accel.accel_fill -- accel/accel.sh@20 -- # val=software 00:05:48.352 22:22:12 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:48.352 22:22:12 accel.accel_fill -- accel/accel.sh@22 -- # accel_module=software 00:05:48.352 22:22:12 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:48.352 22:22:12 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:48.352 22:22:12 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:05:48.352 22:22:12 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:48.352 22:22:12 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:48.352 22:22:12 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:48.352 22:22:12 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:05:48.352 22:22:12 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:48.352 22:22:12 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:48.352 22:22:12 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:48.352 22:22:12 accel.accel_fill -- accel/accel.sh@20 -- # val=1 00:05:48.352 22:22:12 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:48.352 22:22:12 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:48.352 22:22:12 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:48.352 22:22:12 accel.accel_fill -- accel/accel.sh@20 -- # val='1 seconds' 00:05:48.352 22:22:12 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:48.352 22:22:12 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:48.352 22:22:12 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:48.352 22:22:12 accel.accel_fill -- accel/accel.sh@20 -- # val=Yes 00:05:48.352 22:22:12 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:48.352 22:22:12 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:48.352 22:22:12 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:48.352 22:22:12 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:48.352 22:22:12 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:48.352 22:22:12 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:48.352 22:22:12 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:48.352 22:22:12 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:48.352 22:22:12 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:48.352 22:22:12 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:48.352 22:22:12 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:49.731 22:22:13 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:49.731 22:22:13 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:49.731 22:22:13 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:49.731 22:22:13 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:49.731 22:22:13 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:49.731 22:22:13 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:49.731 22:22:13 accel.accel_fill -- accel/accel.sh@19 
-- # IFS=: 00:05:49.731 22:22:13 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:49.731 22:22:13 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:49.731 22:22:13 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:49.731 22:22:13 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:49.731 22:22:13 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:49.731 22:22:13 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:49.731 22:22:13 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:49.731 22:22:13 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:49.731 22:22:13 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:49.731 22:22:13 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:49.731 22:22:13 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:49.731 22:22:13 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:49.731 22:22:13 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:49.731 22:22:13 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:49.731 22:22:13 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:49.731 22:22:13 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:49.731 22:22:13 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:49.731 22:22:13 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:49.731 22:22:13 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]] 00:05:49.731 22:22:13 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:49.731 00:05:49.731 real 0m1.338s 00:05:49.731 user 0m1.241s 00:05:49.731 sys 0m0.109s 00:05:49.731 22:22:13 accel.accel_fill -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:49.731 22:22:13 accel.accel_fill -- common/autotest_common.sh@10 -- # set +x 00:05:49.731 ************************************ 00:05:49.731 END TEST accel_fill 00:05:49.731 ************************************ 00:05:49.731 22:22:13 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:49.731 22:22:13 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:05:49.731 22:22:13 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:05:49.731 22:22:13 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:49.731 22:22:13 accel -- common/autotest_common.sh@10 -- # set +x 00:05:49.731 ************************************ 00:05:49.731 START TEST accel_copy_crc32c 00:05:49.731 ************************************ 00:05:49.731 22:22:13 accel.accel_copy_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y 00:05:49.731 22:22:13 accel.accel_copy_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:05:49.731 22:22:13 accel.accel_copy_crc32c -- accel/accel.sh@17 -- # local accel_module 00:05:49.731 22:22:13 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:49.731 22:22:13 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:49.731 22:22:13 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:05:49.731 22:22:13 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:05:49.731 22:22:13 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:05:49.731 22:22:13 accel.accel_copy_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:49.731 22:22:13 accel.accel_copy_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 
]] 00:05:49.731 22:22:13 accel.accel_copy_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:49.731 22:22:13 accel.accel_copy_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:49.731 22:22:13 accel.accel_copy_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:49.731 22:22:13 accel.accel_copy_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:05:49.731 22:22:13 accel.accel_copy_crc32c -- accel/accel.sh@41 -- # jq -r . 00:05:49.731 [2024-07-15 22:22:13.501668] Starting SPDK v24.09-pre git sha1 f8598a71f / DPDK 24.03.0 initialization... 00:05:49.731 [2024-07-15 22:22:13.501733] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4030465 ] 00:05:49.731 EAL: No free 2048 kB hugepages reported on node 1 00:05:49.731 [2024-07-15 22:22:13.556770] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:49.732 [2024-07-15 22:22:13.629532] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:49.732 22:22:13 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:49.732 22:22:13 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:49.732 22:22:13 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:49.732 22:22:13 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:49.732 22:22:13 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:49.732 22:22:13 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:49.732 22:22:13 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:49.732 22:22:13 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:49.732 22:22:13 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0x1 00:05:49.732 22:22:13 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:49.732 22:22:13 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:49.732 22:22:13 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:49.732 22:22:13 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:49.732 22:22:13 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:49.732 22:22:13 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:49.732 22:22:13 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:49.732 22:22:13 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:49.732 22:22:13 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:49.732 22:22:13 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:49.732 22:22:13 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:49.732 22:22:13 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=copy_crc32c 00:05:49.732 22:22:13 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:49.732 22:22:13 accel.accel_copy_crc32c -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:05:49.732 22:22:13 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:49.732 22:22:13 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:49.732 22:22:13 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0 00:05:49.732 22:22:13 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:49.732 22:22:13 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:49.732 22:22:13 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:49.732 
22:22:13 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:49.732 22:22:13 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:49.732 22:22:13 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:49.732 22:22:13 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:49.732 22:22:13 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:49.732 22:22:13 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:49.732 22:22:13 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:49.732 22:22:13 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:49.732 22:22:13 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:49.732 22:22:13 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:49.732 22:22:13 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:49.732 22:22:13 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:49.732 22:22:13 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=software 00:05:49.732 22:22:13 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:49.732 22:22:13 accel.accel_copy_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:05:49.732 22:22:13 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:49.732 22:22:13 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:49.732 22:22:13 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:05:49.732 22:22:13 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:49.732 22:22:13 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:49.732 22:22:13 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:49.732 22:22:13 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:05:49.732 22:22:13 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:49.732 22:22:13 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:49.732 22:22:13 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:49.732 22:22:13 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=1 00:05:49.732 22:22:13 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:49.732 22:22:13 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:49.732 22:22:13 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:49.732 22:22:13 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:05:49.732 22:22:13 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:49.732 22:22:13 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:49.732 22:22:13 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:49.732 22:22:13 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=Yes 00:05:49.732 22:22:13 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:49.732 22:22:13 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:49.732 22:22:13 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:49.732 22:22:13 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:49.732 22:22:13 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:49.732 22:22:13 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:49.732 22:22:13 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:49.732 22:22:13 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:49.732 22:22:13 accel.accel_copy_crc32c 
-- accel/accel.sh@21 -- # case "$var" in 00:05:49.732 22:22:13 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:49.732 22:22:13 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:51.111 22:22:14 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:51.111 22:22:14 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:51.111 22:22:14 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:51.111 22:22:14 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:51.111 22:22:14 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:51.111 22:22:14 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:51.111 22:22:14 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:51.111 22:22:14 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:51.111 22:22:14 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:51.111 22:22:14 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:51.111 22:22:14 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:51.111 22:22:14 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:51.111 22:22:14 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:51.111 22:22:14 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:51.111 22:22:14 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:51.111 22:22:14 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:51.111 22:22:14 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:51.111 22:22:14 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:51.111 22:22:14 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:51.111 22:22:14 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:51.111 22:22:14 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:51.111 22:22:14 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:51.111 22:22:14 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:51.111 22:22:14 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:51.111 22:22:14 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:51.111 22:22:14 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:05:51.111 22:22:14 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:51.111 00:05:51.111 real 0m1.335s 00:05:51.111 user 0m1.236s 00:05:51.111 sys 0m0.116s 00:05:51.111 22:22:14 accel.accel_copy_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:51.111 22:22:14 accel.accel_copy_crc32c -- common/autotest_common.sh@10 -- # set +x 00:05:51.111 ************************************ 00:05:51.111 END TEST accel_copy_crc32c 00:05:51.111 ************************************ 00:05:51.111 22:22:14 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:51.111 22:22:14 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:05:51.111 22:22:14 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:05:51.111 22:22:14 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:51.111 22:22:14 accel -- common/autotest_common.sh@10 -- # set +x 00:05:51.111 ************************************ 00:05:51.111 START TEST accel_copy_crc32c_C2 00:05:51.111 ************************************ 00:05:51.111 22:22:14 accel.accel_copy_crc32c_C2 -- 
common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:05:51.111 22:22:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:05:51.111 22:22:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:05:51.111 22:22:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:51.111 22:22:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:51.111 22:22:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:05:51.111 22:22:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:05:51.111 22:22:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:05:51.111 22:22:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:51.111 22:22:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:51.111 22:22:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:51.111 22:22:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:51.111 22:22:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:51.111 22:22:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:05:51.111 22:22:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:05:51.112 [2024-07-15 22:22:14.904592] Starting SPDK v24.09-pre git sha1 f8598a71f / DPDK 24.03.0 initialization... 00:05:51.112 [2024-07-15 22:22:14.904658] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4030729 ] 00:05:51.112 EAL: No free 2048 kB hugepages reported on node 1 00:05:51.112 [2024-07-15 22:22:14.960044] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:51.112 [2024-07-15 22:22:15.033113] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:51.112 22:22:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:51.112 22:22:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:51.112 22:22:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:51.112 22:22:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:51.112 22:22:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:51.112 22:22:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:51.112 22:22:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:51.112 22:22:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:51.112 22:22:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:05:51.112 22:22:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:51.112 22:22:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:51.112 22:22:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:51.112 22:22:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:51.112 22:22:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:51.112 22:22:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:51.112 22:22:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:51.112 22:22:15 
accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:51.112 22:22:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:51.112 22:22:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:51.112 22:22:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:51.112 22:22:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=copy_crc32c 00:05:51.112 22:22:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:51.112 22:22:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:05:51.112 22:22:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:51.112 22:22:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:51.112 22:22:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:05:51.112 22:22:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:51.112 22:22:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:51.112 22:22:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:51.372 22:22:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:51.372 22:22:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:51.372 22:22:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:51.372 22:22:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:51.372 22:22:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='8192 bytes' 00:05:51.372 22:22:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:51.372 22:22:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:51.372 22:22:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:51.372 22:22:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:51.372 22:22:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:51.372 22:22:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:51.372 22:22:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:51.372 22:22:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:05:51.372 22:22:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:51.372 22:22:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:05:51.372 22:22:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:51.372 22:22:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:51.372 22:22:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:05:51.372 22:22:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:51.372 22:22:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:51.372 22:22:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:51.372 22:22:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:05:51.372 22:22:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:51.372 22:22:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:51.372 22:22:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:51.372 22:22:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:05:51.372 22:22:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:51.372 22:22:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 
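The copy_crc32c pass being configured here combines the two earlier operations: the data is copied and a CRC-32C is computed over it in one logical step. The surrounding trace reads out both a 4096-byte and an 8192-byte buffer for this -C 2 run; taking that as two chained 4096-byte sources feeding one destination under a single running CRC is an assumption drawn from those sizes, not something the log states. A sketch of the combined operation, repeating the CRC helper from the crc32c sketch above so it stands alone:

    #include <stdint.h>
    #include <stddef.h>
    #include <string.h>

    /* Same bit-at-a-time CRC-32C helper as in the earlier sketch. */
    static uint32_t crc32c_sw(uint32_t crc, const void *buf, size_t len)
    {
        const uint8_t *p = buf;

        crc = ~crc;
        while (len--) {
            crc ^= *p++;
            for (int i = 0; i < 8; i++)
                crc = (crc >> 1) ^ (0x82F63B78u & -(crc & 1u));
        }
        return ~crc;
    }

    /* copy_crc32c: copy the source and return its CRC-32C. Chaining
     * (the assumed -C 2 behavior) feeds the returned value back in as
     * the seed for the next source buffer. */
    static uint32_t copy_crc32c(void *dst, const void *src, size_t len,
                                uint32_t seed)
    {
        memcpy(dst, src, len);
        return crc32c_sw(seed, src, len);
    }

Because the helper inverts the CRC on entry and exit, chaining works the usual incremental way: crc32c_sw(crc32c_sw(0, a, n), b, m) equals crc32c_sw(0, ab, n + m) over the concatenated data.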
00:05:51.372 22:22:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:51.372 22:22:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:05:51.372 22:22:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:51.372 22:22:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:51.372 22:22:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:51.372 22:22:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:05:51.372 22:22:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:51.372 22:22:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:51.372 22:22:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:51.372 22:22:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:51.372 22:22:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:51.372 22:22:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:51.372 22:22:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:51.372 22:22:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:51.372 22:22:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:51.372 22:22:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:51.372 22:22:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:52.310 22:22:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:52.310 22:22:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:52.310 22:22:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:52.310 22:22:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:52.310 22:22:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:52.310 22:22:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:52.310 22:22:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:52.310 22:22:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:52.310 22:22:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:52.310 22:22:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:52.310 22:22:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:52.310 22:22:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:52.310 22:22:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:52.310 22:22:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:52.310 22:22:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:52.310 22:22:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:52.310 22:22:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:52.310 22:22:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:52.310 22:22:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:52.310 22:22:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:52.310 22:22:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:52.310 22:22:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:52.310 22:22:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:52.310 22:22:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:52.310 22:22:16 
accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:52.310 22:22:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:05:52.310 22:22:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:52.310 00:05:52.310 real 0m1.335s 00:05:52.310 user 0m1.233s 00:05:52.310 sys 0m0.116s 00:05:52.310 22:22:16 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:52.310 22:22:16 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:05:52.310 ************************************ 00:05:52.310 END TEST accel_copy_crc32c_C2 00:05:52.310 ************************************ 00:05:52.310 22:22:16 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:52.310 22:22:16 accel -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:05:52.310 22:22:16 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:05:52.310 22:22:16 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:52.310 22:22:16 accel -- common/autotest_common.sh@10 -- # set +x 00:05:52.569 ************************************ 00:05:52.569 START TEST accel_dualcast 00:05:52.569 ************************************ 00:05:52.569 22:22:16 accel.accel_dualcast -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dualcast -y 00:05:52.569 22:22:16 accel.accel_dualcast -- accel/accel.sh@16 -- # local accel_opc 00:05:52.569 22:22:16 accel.accel_dualcast -- accel/accel.sh@17 -- # local accel_module 00:05:52.569 22:22:16 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:52.569 22:22:16 accel.accel_dualcast -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:05:52.569 22:22:16 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:52.570 22:22:16 accel.accel_dualcast -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:05:52.570 22:22:16 accel.accel_dualcast -- accel/accel.sh@12 -- # build_accel_config 00:05:52.570 22:22:16 accel.accel_dualcast -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:52.570 22:22:16 accel.accel_dualcast -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:52.570 22:22:16 accel.accel_dualcast -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:52.570 22:22:16 accel.accel_dualcast -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:52.570 22:22:16 accel.accel_dualcast -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:52.570 22:22:16 accel.accel_dualcast -- accel/accel.sh@40 -- # local IFS=, 00:05:52.570 22:22:16 accel.accel_dualcast -- accel/accel.sh@41 -- # jq -r . 00:05:52.570 [2024-07-15 22:22:16.307364] Starting SPDK v24.09-pre git sha1 f8598a71f / DPDK 24.03.0 initialization... 
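The accel_dualcast pass launching here copies a single source to two destination buffers in one operation; the software fallback is simply two copies, while offload engines that support dual-cast (Intel DSA, for example) can fan the write out in a single descriptor. A minimal sketch of the semantics (the function shape is illustrative, not SPDK's API):

    #include <stddef.h>
    #include <string.h>

    /* dualcast: one source, two destinations. The software path is
     * two memcpy calls; verification would then compare both
     * destinations against the source. */
    static void dualcast_sw(void *dst1, void *dst2,
                            const void *src, size_t len)
    {
        memcpy(dst1, src, len);
        memcpy(dst2, src, len);
    }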
00:05:52.570 [2024-07-15 22:22:16.307418] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4030989 ] 00:05:52.570 EAL: No free 2048 kB hugepages reported on node 1 00:05:52.570 [2024-07-15 22:22:16.363747] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:52.570 [2024-07-15 22:22:16.438020] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:52.570 22:22:16 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:52.570 22:22:16 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:52.570 22:22:16 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:52.570 22:22:16 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:52.570 22:22:16 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:52.570 22:22:16 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:52.570 22:22:16 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:52.570 22:22:16 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:52.570 22:22:16 accel.accel_dualcast -- accel/accel.sh@20 -- # val=0x1 00:05:52.570 22:22:16 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:52.570 22:22:16 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:52.570 22:22:16 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:52.570 22:22:16 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:52.570 22:22:16 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:52.570 22:22:16 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:52.570 22:22:16 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:52.570 22:22:16 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:52.570 22:22:16 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:52.570 22:22:16 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:52.570 22:22:16 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:52.570 22:22:16 accel.accel_dualcast -- accel/accel.sh@20 -- # val=dualcast 00:05:52.570 22:22:16 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:52.570 22:22:16 accel.accel_dualcast -- accel/accel.sh@23 -- # accel_opc=dualcast 00:05:52.570 22:22:16 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:52.570 22:22:16 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:52.570 22:22:16 accel.accel_dualcast -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:52.570 22:22:16 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:52.570 22:22:16 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:52.570 22:22:16 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:52.570 22:22:16 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:52.570 22:22:16 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:52.570 22:22:16 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:52.570 22:22:16 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:52.570 22:22:16 accel.accel_dualcast -- accel/accel.sh@20 -- # val=software 00:05:52.570 22:22:16 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:52.570 22:22:16 accel.accel_dualcast -- accel/accel.sh@22 -- # accel_module=software 00:05:52.570 22:22:16 accel.accel_dualcast -- accel/accel.sh@19 -- # 
IFS=: 00:05:52.570 22:22:16 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:52.570 22:22:16 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:05:52.570 22:22:16 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:52.570 22:22:16 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:52.570 22:22:16 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:52.570 22:22:16 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:05:52.570 22:22:16 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:52.570 22:22:16 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:52.570 22:22:16 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:52.570 22:22:16 accel.accel_dualcast -- accel/accel.sh@20 -- # val=1 00:05:52.570 22:22:16 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:52.570 22:22:16 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:52.570 22:22:16 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:52.570 22:22:16 accel.accel_dualcast -- accel/accel.sh@20 -- # val='1 seconds' 00:05:52.570 22:22:16 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:52.570 22:22:16 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:52.570 22:22:16 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:52.570 22:22:16 accel.accel_dualcast -- accel/accel.sh@20 -- # val=Yes 00:05:52.570 22:22:16 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:52.570 22:22:16 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:52.570 22:22:16 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:52.570 22:22:16 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:52.570 22:22:16 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:52.570 22:22:16 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:52.570 22:22:16 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:52.570 22:22:16 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:52.570 22:22:16 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:52.570 22:22:16 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:52.570 22:22:16 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:53.950 22:22:17 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:53.950 22:22:17 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:53.950 22:22:17 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:53.950 22:22:17 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:53.950 22:22:17 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:53.950 22:22:17 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:53.950 22:22:17 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:53.950 22:22:17 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:53.950 22:22:17 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:53.950 22:22:17 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:53.950 22:22:17 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:53.950 22:22:17 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:53.950 22:22:17 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:53.950 22:22:17 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:53.950 22:22:17 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:53.950 22:22:17 
accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:53.950 22:22:17 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:53.950 22:22:17 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:53.950 22:22:17 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:53.950 22:22:17 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:53.950 22:22:17 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:53.951 22:22:17 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:53.951 22:22:17 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:53.951 22:22:17 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:53.951 22:22:17 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:53.951 22:22:17 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:05:53.951 22:22:17 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:53.951 00:05:53.951 real 0m1.338s 00:05:53.951 user 0m1.237s 00:05:53.951 sys 0m0.113s 00:05:53.951 22:22:17 accel.accel_dualcast -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:53.951 22:22:17 accel.accel_dualcast -- common/autotest_common.sh@10 -- # set +x 00:05:53.951 ************************************ 00:05:53.951 END TEST accel_dualcast 00:05:53.951 ************************************ 00:05:53.951 22:22:17 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:53.951 22:22:17 accel -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:05:53.951 22:22:17 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:05:53.951 22:22:17 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:53.951 22:22:17 accel -- common/autotest_common.sh@10 -- # set +x 00:05:53.951 ************************************ 00:05:53.951 START TEST accel_compare 00:05:53.951 ************************************ 00:05:53.951 22:22:17 accel.accel_compare -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compare -y 00:05:53.951 22:22:17 accel.accel_compare -- accel/accel.sh@16 -- # local accel_opc 00:05:53.951 22:22:17 accel.accel_compare -- accel/accel.sh@17 -- # local accel_module 00:05:53.951 22:22:17 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:53.951 22:22:17 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:53.951 22:22:17 accel.accel_compare -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:05:53.951 22:22:17 accel.accel_compare -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:05:53.951 22:22:17 accel.accel_compare -- accel/accel.sh@12 -- # build_accel_config 00:05:53.951 22:22:17 accel.accel_compare -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:53.951 22:22:17 accel.accel_compare -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:53.951 22:22:17 accel.accel_compare -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:53.951 22:22:17 accel.accel_compare -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:53.951 22:22:17 accel.accel_compare -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:53.951 22:22:17 accel.accel_compare -- accel/accel.sh@40 -- # local IFS=, 00:05:53.951 22:22:17 accel.accel_compare -- accel/accel.sh@41 -- # jq -r . 00:05:53.951 [2024-07-15 22:22:17.714097] Starting SPDK v24.09-pre git sha1 f8598a71f / DPDK 24.03.0 initialization... 
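accel_compare re-initializes DPDK EAL with the same parameters as the previous cases (only the --file-prefix pid changes per run). The recurring "EAL: No free 2048 kB hugepages reported on node 1" line is a notice, not a failure: it reports that NUMA node 1 has no free 2 MB hugepages while the run proceeds, presumably satisfied from node 0. A quick way to check per-node hugepage availability before such a run (a generic Linux check, not part of accel.sh):

  # free 2 MB hugepages per NUMA node
  cat /sys/devices/system/node/node*/hugepages/hugepages-2048kB/free_hugepages
  # aggregate view
  grep -i huge /proc/meminfo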
00:05:53.951 [2024-07-15 22:22:17.714164] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4031248 ] 00:05:53.951 EAL: No free 2048 kB hugepages reported on node 1 00:05:53.951 [2024-07-15 22:22:17.769891] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:53.951 [2024-07-15 22:22:17.842948] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:53.951 22:22:17 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:53.951 22:22:17 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:53.951 22:22:17 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:53.951 22:22:17 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:53.951 22:22:17 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:53.951 22:22:17 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:53.951 22:22:17 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:53.951 22:22:17 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:53.951 22:22:17 accel.accel_compare -- accel/accel.sh@20 -- # val=0x1 00:05:53.951 22:22:17 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:53.951 22:22:17 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:53.951 22:22:17 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:53.951 22:22:17 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:53.951 22:22:17 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:53.951 22:22:17 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:53.951 22:22:17 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:53.951 22:22:17 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:53.951 22:22:17 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:53.951 22:22:17 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:53.951 22:22:17 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:53.951 22:22:17 accel.accel_compare -- accel/accel.sh@20 -- # val=compare 00:05:53.951 22:22:17 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:53.951 22:22:17 accel.accel_compare -- accel/accel.sh@23 -- # accel_opc=compare 00:05:53.951 22:22:17 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:53.951 22:22:17 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:53.951 22:22:17 accel.accel_compare -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:53.951 22:22:17 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:53.951 22:22:17 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:53.951 22:22:17 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:53.951 22:22:17 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:53.951 22:22:17 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:53.951 22:22:17 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:53.951 22:22:17 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:53.951 22:22:17 accel.accel_compare -- accel/accel.sh@20 -- # val=software 00:05:53.951 22:22:17 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:53.951 22:22:17 accel.accel_compare -- accel/accel.sh@22 -- # accel_module=software 00:05:53.951 22:22:17 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:53.951 22:22:17 
accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:53.951 22:22:17 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:05:53.951 22:22:17 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:53.951 22:22:17 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:53.951 22:22:17 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:53.951 22:22:17 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:05:53.951 22:22:17 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:53.951 22:22:17 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:53.951 22:22:17 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:53.951 22:22:17 accel.accel_compare -- accel/accel.sh@20 -- # val=1 00:05:53.951 22:22:17 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:53.951 22:22:17 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:53.951 22:22:17 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:53.951 22:22:17 accel.accel_compare -- accel/accel.sh@20 -- # val='1 seconds' 00:05:53.951 22:22:17 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:53.951 22:22:17 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:53.951 22:22:17 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:53.951 22:22:17 accel.accel_compare -- accel/accel.sh@20 -- # val=Yes 00:05:53.951 22:22:17 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:53.951 22:22:17 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:53.951 22:22:17 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:53.951 22:22:17 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:53.951 22:22:17 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:53.951 22:22:17 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:53.951 22:22:17 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:53.951 22:22:17 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:53.951 22:22:17 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:53.951 22:22:17 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:53.951 22:22:17 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:55.331 22:22:19 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:55.331 22:22:19 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:55.331 22:22:19 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:55.331 22:22:19 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:55.331 22:22:19 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:55.331 22:22:19 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:55.331 22:22:19 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:55.331 22:22:19 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:55.331 22:22:19 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:55.331 22:22:19 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:55.331 22:22:19 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:55.331 22:22:19 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:55.331 22:22:19 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:55.331 22:22:19 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:55.331 22:22:19 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:55.331 22:22:19 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:55.331 
22:22:19 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:55.331 22:22:19 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:55.331 22:22:19 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:55.331 22:22:19 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:55.331 22:22:19 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:55.331 22:22:19 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:55.331 22:22:19 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:55.331 22:22:19 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:55.331 22:22:19 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:55.331 22:22:19 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]] 00:05:55.331 22:22:19 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:55.331 00:05:55.331 real 0m1.338s 00:05:55.331 user 0m1.239s 00:05:55.331 sys 0m0.112s 00:05:55.331 22:22:19 accel.accel_compare -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:55.331 22:22:19 accel.accel_compare -- common/autotest_common.sh@10 -- # set +x 00:05:55.331 ************************************ 00:05:55.331 END TEST accel_compare 00:05:55.331 ************************************ 00:05:55.331 22:22:19 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:55.331 22:22:19 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:05:55.331 22:22:19 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:05:55.331 22:22:19 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:55.331 22:22:19 accel -- common/autotest_common.sh@10 -- # set +x 00:05:55.331 ************************************ 00:05:55.331 START TEST accel_xor 00:05:55.331 ************************************ 00:05:55.331 22:22:19 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y 00:05:55.331 22:22:19 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:05:55.331 22:22:19 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:05:55.331 22:22:19 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:55.331 22:22:19 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:55.331 22:22:19 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:05:55.331 22:22:19 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:05:55.331 22:22:19 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:05:55.331 22:22:19 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:55.331 22:22:19 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:55.331 22:22:19 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:55.331 22:22:19 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:55.331 22:22:19 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:55.331 22:22:19 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:05:55.331 22:22:19 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:05:55.331 [2024-07-15 22:22:19.119879] Starting SPDK v24.09-pre git sha1 f8598a71f / DPDK 24.03.0 initialization... 
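The first accel_xor pass (accel.sh@109) exercises the default two-source xor; the "val=2" field in the trace below is that source-buffer count, with everything else configured as in the earlier cases. A standalone sketch, assuming the -x source-count flag accepts 2 the same way it accepts the 3 the harness passes in the next pass:

  ./build/examples/accel_perf -t 1 -w xor -y -x 2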
00:05:55.331 [2024-07-15 22:22:19.119928] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4031510 ] 00:05:55.331 EAL: No free 2048 kB hugepages reported on node 1 00:05:55.331 [2024-07-15 22:22:19.174560] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:55.331 [2024-07-15 22:22:19.248534] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:55.331 22:22:19 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:55.331 22:22:19 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:55.331 22:22:19 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:55.331 22:22:19 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:55.331 22:22:19 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:55.331 22:22:19 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:55.331 22:22:19 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:55.331 22:22:19 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:55.331 22:22:19 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:05:55.331 22:22:19 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:55.331 22:22:19 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:55.331 22:22:19 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:55.331 22:22:19 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:55.331 22:22:19 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:55.331 22:22:19 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:55.331 22:22:19 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:55.331 22:22:19 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:55.331 22:22:19 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:55.331 22:22:19 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:55.331 22:22:19 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:55.331 22:22:19 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:05:55.331 22:22:19 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:55.331 22:22:19 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:05:55.331 22:22:19 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:55.331 22:22:19 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:55.331 22:22:19 accel.accel_xor -- accel/accel.sh@20 -- # val=2 00:05:55.331 22:22:19 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:55.331 22:22:19 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:55.331 22:22:19 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:55.331 22:22:19 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:55.331 22:22:19 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:55.331 22:22:19 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:55.331 22:22:19 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:55.331 22:22:19 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:55.331 22:22:19 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:55.331 22:22:19 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:55.331 22:22:19 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:55.331 22:22:19 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:05:55.331 22:22:19 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:55.331 22:22:19 accel.accel_xor -- 
accel/accel.sh@22 -- # accel_module=software 00:05:55.331 22:22:19 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:55.331 22:22:19 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:55.331 22:22:19 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:05:55.331 22:22:19 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:55.331 22:22:19 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:55.331 22:22:19 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:55.331 22:22:19 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:05:55.591 22:22:19 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:55.591 22:22:19 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:55.591 22:22:19 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:55.591 22:22:19 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:05:55.591 22:22:19 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:55.591 22:22:19 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:55.591 22:22:19 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:55.591 22:22:19 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:05:55.591 22:22:19 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:55.591 22:22:19 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:55.591 22:22:19 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:55.591 22:22:19 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:05:55.591 22:22:19 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:55.591 22:22:19 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:55.591 22:22:19 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:55.591 22:22:19 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:55.591 22:22:19 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:55.591 22:22:19 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:55.591 22:22:19 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:55.591 22:22:19 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:55.591 22:22:19 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:55.591 22:22:19 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:55.591 22:22:19 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:56.531 22:22:20 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:56.531 22:22:20 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:56.531 22:22:20 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:56.531 22:22:20 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:56.531 22:22:20 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:56.531 22:22:20 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:56.531 22:22:20 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:56.531 22:22:20 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:56.531 22:22:20 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:56.531 22:22:20 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:56.531 22:22:20 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:56.531 22:22:20 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:56.531 22:22:20 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:56.531 22:22:20 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:56.531 22:22:20 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:56.531 22:22:20 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:56.531 22:22:20 accel.accel_xor -- accel/accel.sh@20 -- 
# val= 00:05:56.531 22:22:20 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:56.531 22:22:20 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:56.531 22:22:20 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:56.531 22:22:20 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:56.531 22:22:20 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:56.531 22:22:20 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:56.531 22:22:20 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:56.531 22:22:20 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:56.531 22:22:20 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:05:56.531 22:22:20 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:56.531 00:05:56.531 real 0m1.338s 00:05:56.531 user 0m1.231s 00:05:56.531 sys 0m0.119s 00:05:56.531 22:22:20 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:56.531 22:22:20 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:05:56.531 ************************************ 00:05:56.531 END TEST accel_xor 00:05:56.531 ************************************ 00:05:56.531 22:22:20 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:56.531 22:22:20 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:05:56.531 22:22:20 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:05:56.531 22:22:20 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:56.531 22:22:20 accel -- common/autotest_common.sh@10 -- # set +x 00:05:56.531 ************************************ 00:05:56.531 START TEST accel_xor 00:05:56.531 ************************************ 00:05:56.531 22:22:20 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y -x 3 00:05:56.531 22:22:20 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:05:56.531 22:22:20 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:05:56.531 22:22:20 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:56.531 22:22:20 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:56.531 22:22:20 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:05:56.531 22:22:20 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:05:56.789 22:22:20 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:05:56.789 22:22:20 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:56.789 22:22:20 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:56.789 22:22:20 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:56.789 22:22:20 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:56.790 22:22:20 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:56.790 22:22:20 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:05:56.790 22:22:20 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:05:56.790 [2024-07-15 22:22:20.524291] Starting SPDK v24.09-pre git sha1 f8598a71f / DPDK 24.03.0 initialization... 
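This second accel_xor pass (accel.sh@110) is identical except for -x 3, visible both on the accel_perf command line and as "val=3" in the trace: each operation XORs three 4096-byte source buffers into one destination. A sketch for sweeping the source count the same way (a hypothetical loop, not something accel.sh does):

  for n in 2 3 4; do
    ./build/examples/accel_perf -t 1 -w xor -y -x "$n"
  done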
00:05:56.790 [2024-07-15 22:22:20.524339] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4031764 ] 00:05:56.790 EAL: No free 2048 kB hugepages reported on node 1 00:05:56.790 [2024-07-15 22:22:20.579414] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:56.790 [2024-07-15 22:22:20.654397] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:56.790 22:22:20 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:56.790 22:22:20 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:56.790 22:22:20 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:56.790 22:22:20 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:56.790 22:22:20 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:56.790 22:22:20 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:56.790 22:22:20 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:56.790 22:22:20 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:56.790 22:22:20 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:05:56.790 22:22:20 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:56.790 22:22:20 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:56.790 22:22:20 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:56.790 22:22:20 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:56.790 22:22:20 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:56.790 22:22:20 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:56.790 22:22:20 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:56.790 22:22:20 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:56.790 22:22:20 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:56.790 22:22:20 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:56.790 22:22:20 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:56.790 22:22:20 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:05:56.790 22:22:20 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:56.790 22:22:20 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:05:56.790 22:22:20 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:56.790 22:22:20 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:56.790 22:22:20 accel.accel_xor -- accel/accel.sh@20 -- # val=3 00:05:56.790 22:22:20 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:56.790 22:22:20 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:56.790 22:22:20 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:56.790 22:22:20 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:56.790 22:22:20 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:56.790 22:22:20 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:56.790 22:22:20 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:56.790 22:22:20 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:56.790 22:22:20 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:56.790 22:22:20 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:56.790 22:22:20 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:56.790 22:22:20 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:05:56.790 22:22:20 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:56.790 22:22:20 accel.accel_xor -- 
accel/accel.sh@22 -- # accel_module=software 00:05:56.790 22:22:20 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:56.790 22:22:20 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:56.790 22:22:20 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:05:56.790 22:22:20 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:56.790 22:22:20 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:56.790 22:22:20 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:56.790 22:22:20 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:05:56.790 22:22:20 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:56.790 22:22:20 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:56.790 22:22:20 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:56.790 22:22:20 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:05:56.790 22:22:20 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:56.790 22:22:20 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:56.790 22:22:20 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:56.790 22:22:20 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:05:56.790 22:22:20 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:56.790 22:22:20 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:56.790 22:22:20 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:56.790 22:22:20 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:05:56.790 22:22:20 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:56.790 22:22:20 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:56.790 22:22:20 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:56.790 22:22:20 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:56.790 22:22:20 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:56.790 22:22:20 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:56.790 22:22:20 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:56.790 22:22:20 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:56.790 22:22:20 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:56.790 22:22:20 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:56.790 22:22:20 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:58.169 22:22:21 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:58.169 22:22:21 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:58.169 22:22:21 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:58.169 22:22:21 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:58.169 22:22:21 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:58.169 22:22:21 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:58.169 22:22:21 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:58.169 22:22:21 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:58.169 22:22:21 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:58.169 22:22:21 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:58.169 22:22:21 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:58.169 22:22:21 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:58.169 22:22:21 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:58.169 22:22:21 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:58.169 22:22:21 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:58.169 22:22:21 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:58.169 22:22:21 accel.accel_xor -- accel/accel.sh@20 -- 
# val= 00:05:58.169 22:22:21 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:58.169 22:22:21 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:58.169 22:22:21 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:58.169 22:22:21 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:58.169 22:22:21 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:58.169 22:22:21 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:58.169 22:22:21 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:58.169 22:22:21 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:58.169 22:22:21 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:05:58.169 22:22:21 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:58.169 00:05:58.169 real 0m1.339s 00:05:58.169 user 0m1.234s 00:05:58.169 sys 0m0.118s 00:05:58.169 22:22:21 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:58.169 22:22:21 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:05:58.169 ************************************ 00:05:58.169 END TEST accel_xor 00:05:58.169 ************************************ 00:05:58.169 22:22:21 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:58.169 22:22:21 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:05:58.169 22:22:21 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:05:58.169 22:22:21 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:58.169 22:22:21 accel -- common/autotest_common.sh@10 -- # set +x 00:05:58.169 ************************************ 00:05:58.169 START TEST accel_dif_verify 00:05:58.169 ************************************ 00:05:58.169 22:22:21 accel.accel_dif_verify -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_verify 00:05:58.169 22:22:21 accel.accel_dif_verify -- accel/accel.sh@16 -- # local accel_opc 00:05:58.169 22:22:21 accel.accel_dif_verify -- accel/accel.sh@17 -- # local accel_module 00:05:58.169 22:22:21 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:58.169 22:22:21 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:58.169 22:22:21 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:05:58.169 22:22:21 accel.accel_dif_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:05:58.169 22:22:21 accel.accel_dif_verify -- accel/accel.sh@12 -- # build_accel_config 00:05:58.170 22:22:21 accel.accel_dif_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:58.170 22:22:21 accel.accel_dif_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:58.170 22:22:21 accel.accel_dif_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:58.170 22:22:21 accel.accel_dif_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:58.170 22:22:21 accel.accel_dif_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:58.170 22:22:21 accel.accel_dif_verify -- accel/accel.sh@40 -- # local IFS=, 00:05:58.170 22:22:21 accel.accel_dif_verify -- accel/accel.sh@41 -- # jq -r . 00:05:58.170 [2024-07-15 22:22:21.929746] Starting SPDK v24.09-pre git sha1 f8598a71f / DPDK 24.03.0 initialization... 
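accel_dif_verify (accel.sh@111) adds DIF geometry to the usual configuration: alongside the 4096-byte transfer size, the trace reads back "512 bytes" and "8 bytes", which line up with the standard DIF layout of 8 bytes of protection information per 512-byte block (note "val=No" here, since these cases run without -y). A minimal standalone sketch, assuming the example's defaults supply the same geometry when no config is piped in:

  # verify DIF protection information using the software accel module
  ./build/examples/accel_perf -t 1 -w dif_verify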
00:05:58.170 [2024-07-15 22:22:21.929795] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4032023 ] 00:05:58.170 EAL: No free 2048 kB hugepages reported on node 1 00:05:58.170 [2024-07-15 22:22:21.984986] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:58.170 [2024-07-15 22:22:22.057883] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:58.170 22:22:22 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:58.170 22:22:22 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:58.170 22:22:22 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:58.170 22:22:22 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:58.170 22:22:22 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:58.170 22:22:22 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:58.170 22:22:22 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:58.170 22:22:22 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:58.170 22:22:22 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=0x1 00:05:58.170 22:22:22 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:58.170 22:22:22 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:58.170 22:22:22 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:58.170 22:22:22 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:58.170 22:22:22 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:58.170 22:22:22 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:58.170 22:22:22 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:58.170 22:22:22 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:58.170 22:22:22 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:58.170 22:22:22 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:58.170 22:22:22 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:58.170 22:22:22 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=dif_verify 00:05:58.170 22:22:22 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:58.170 22:22:22 accel.accel_dif_verify -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:05:58.170 22:22:22 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:58.170 22:22:22 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:58.170 22:22:22 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:58.170 22:22:22 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:58.170 22:22:22 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:58.170 22:22:22 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:58.170 22:22:22 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:58.170 22:22:22 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:58.170 22:22:22 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:58.170 22:22:22 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:58.170 22:22:22 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='512 bytes' 00:05:58.170 22:22:22 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:58.170 22:22:22 accel.accel_dif_verify -- accel/accel.sh@19 -- # 
IFS=: 00:05:58.170 22:22:22 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:58.170 22:22:22 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='8 bytes' 00:05:58.170 22:22:22 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:58.170 22:22:22 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:58.170 22:22:22 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:58.170 22:22:22 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:58.170 22:22:22 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:58.170 22:22:22 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:58.170 22:22:22 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:58.170 22:22:22 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=software 00:05:58.170 22:22:22 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:58.170 22:22:22 accel.accel_dif_verify -- accel/accel.sh@22 -- # accel_module=software 00:05:58.170 22:22:22 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:58.170 22:22:22 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:58.170 22:22:22 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:05:58.170 22:22:22 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:58.170 22:22:22 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:58.170 22:22:22 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:58.170 22:22:22 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:05:58.170 22:22:22 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:58.170 22:22:22 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:58.170 22:22:22 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:58.170 22:22:22 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=1 00:05:58.170 22:22:22 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:58.170 22:22:22 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:58.170 22:22:22 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:58.170 22:22:22 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='1 seconds' 00:05:58.170 22:22:22 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:58.170 22:22:22 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:58.170 22:22:22 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:58.170 22:22:22 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=No 00:05:58.170 22:22:22 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:58.170 22:22:22 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:58.170 22:22:22 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:58.170 22:22:22 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:58.170 22:22:22 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:58.170 22:22:22 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:58.170 22:22:22 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:58.170 22:22:22 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:58.170 22:22:22 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:58.170 22:22:22 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:58.170 22:22:22 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:59.546 22:22:23 accel.accel_dif_verify -- accel/accel.sh@20 -- # 
val= 00:05:59.546 22:22:23 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:59.546 22:22:23 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:59.546 22:22:23 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:59.546 22:22:23 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:59.546 22:22:23 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:59.546 22:22:23 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:59.546 22:22:23 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:59.546 22:22:23 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:59.546 22:22:23 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:59.546 22:22:23 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:59.546 22:22:23 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:59.546 22:22:23 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:59.546 22:22:23 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:59.546 22:22:23 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:59.546 22:22:23 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:59.546 22:22:23 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:59.546 22:22:23 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:59.546 22:22:23 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:59.546 22:22:23 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:59.546 22:22:23 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:59.546 22:22:23 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:59.546 22:22:23 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:59.546 22:22:23 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:59.546 22:22:23 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:59.546 22:22:23 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:05:59.546 22:22:23 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:59.546 00:05:59.546 real 0m1.338s 00:05:59.546 user 0m1.239s 00:05:59.546 sys 0m0.113s 00:05:59.546 22:22:23 accel.accel_dif_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:59.546 22:22:23 accel.accel_dif_verify -- common/autotest_common.sh@10 -- # set +x 00:05:59.546 ************************************ 00:05:59.546 END TEST accel_dif_verify 00:05:59.546 ************************************ 00:05:59.546 22:22:23 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:59.546 22:22:23 accel -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:05:59.546 22:22:23 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:05:59.546 22:22:23 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:59.547 22:22:23 accel -- common/autotest_common.sh@10 -- # set +x 00:05:59.547 ************************************ 00:05:59.547 START TEST accel_dif_generate 00:05:59.547 ************************************ 00:05:59.547 22:22:23 accel.accel_dif_generate -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate 00:05:59.547 22:22:23 accel.accel_dif_generate -- accel/accel.sh@16 -- # local accel_opc 00:05:59.547 22:22:23 accel.accel_dif_generate -- accel/accel.sh@17 -- # local accel_module 00:05:59.547 22:22:23 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:59.547 
22:22:23 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:59.547 22:22:23 accel.accel_dif_generate -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:05:59.547 22:22:23 accel.accel_dif_generate -- accel/accel.sh@12 -- # build_accel_config 00:05:59.547 22:22:23 accel.accel_dif_generate -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:05:59.547 22:22:23 accel.accel_dif_generate -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:59.547 22:22:23 accel.accel_dif_generate -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:59.547 22:22:23 accel.accel_dif_generate -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:59.547 22:22:23 accel.accel_dif_generate -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:59.547 22:22:23 accel.accel_dif_generate -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:59.547 22:22:23 accel.accel_dif_generate -- accel/accel.sh@40 -- # local IFS=, 00:05:59.547 22:22:23 accel.accel_dif_generate -- accel/accel.sh@41 -- # jq -r . 00:05:59.547 [2024-07-15 22:22:23.333701] Starting SPDK v24.09-pre git sha1 f8598a71f / DPDK 24.03.0 initialization... 00:05:59.547 [2024-07-15 22:22:23.333771] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4032287 ] 00:05:59.547 EAL: No free 2048 kB hugepages reported on node 1 00:05:59.547 [2024-07-15 22:22:23.390409] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:59.547 [2024-07-15 22:22:23.464388] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:59.547 22:22:23 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:59.547 22:22:23 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:59.547 22:22:23 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:59.547 22:22:23 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:59.547 22:22:23 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:59.547 22:22:23 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:59.547 22:22:23 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:59.547 22:22:23 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:59.547 22:22:23 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=0x1 00:05:59.547 22:22:23 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:59.547 22:22:23 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:59.547 22:22:23 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:59.547 22:22:23 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:59.547 22:22:23 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:59.547 22:22:23 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:59.547 22:22:23 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:59.547 22:22:23 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:59.547 22:22:23 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:59.547 22:22:23 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:59.547 22:22:23 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:59.547 22:22:23 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=dif_generate 00:05:59.547 22:22:23 
accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:59.547 22:22:23 accel.accel_dif_generate -- accel/accel.sh@23 -- # accel_opc=dif_generate 00:05:59.547 22:22:23 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:59.547 22:22:23 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:59.547 22:22:23 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:59.547 22:22:23 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:59.547 22:22:23 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:59.547 22:22:23 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:59.547 22:22:23 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:59.547 22:22:23 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:59.547 22:22:23 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:59.547 22:22:23 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:59.547 22:22:23 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='512 bytes' 00:05:59.547 22:22:23 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:59.547 22:22:23 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:59.547 22:22:23 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:59.547 22:22:23 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='8 bytes' 00:05:59.547 22:22:23 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:59.547 22:22:23 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:59.547 22:22:23 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:59.547 22:22:23 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:59.547 22:22:23 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:59.547 22:22:23 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:59.547 22:22:23 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:59.547 22:22:23 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=software 00:05:59.547 22:22:23 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:59.547 22:22:23 accel.accel_dif_generate -- accel/accel.sh@22 -- # accel_module=software 00:05:59.547 22:22:23 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:59.547 22:22:23 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:59.806 22:22:23 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:05:59.806 22:22:23 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:59.806 22:22:23 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:59.806 22:22:23 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:59.806 22:22:23 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:05:59.806 22:22:23 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:59.806 22:22:23 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:59.806 22:22:23 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:59.806 22:22:23 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=1 00:05:59.806 22:22:23 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:59.806 22:22:23 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:59.806 22:22:23 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:59.806 22:22:23 accel.accel_dif_generate -- 
accel/accel.sh@20 -- # val='1 seconds' 00:05:59.806 22:22:23 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:59.806 22:22:23 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:59.806 22:22:23 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:59.806 22:22:23 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=No 00:05:59.806 22:22:23 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:59.806 22:22:23 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:59.806 22:22:23 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:59.806 22:22:23 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:59.806 22:22:23 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:59.806 22:22:23 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:59.806 22:22:23 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:59.806 22:22:23 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:59.806 22:22:23 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:59.806 22:22:23 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:59.806 22:22:23 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:00.743 22:22:24 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:00.743 22:22:24 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:00.743 22:22:24 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:00.743 22:22:24 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:00.743 22:22:24 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:00.743 22:22:24 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:00.743 22:22:24 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:00.743 22:22:24 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:00.743 22:22:24 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:00.743 22:22:24 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:00.743 22:22:24 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:00.743 22:22:24 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:00.743 22:22:24 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:00.743 22:22:24 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:00.743 22:22:24 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:00.743 22:22:24 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:00.743 22:22:24 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:00.743 22:22:24 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:00.743 22:22:24 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:00.743 22:22:24 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:00.743 22:22:24 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:00.743 22:22:24 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:00.743 22:22:24 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:00.743 22:22:24 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:00.743 22:22:24 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:00.743 22:22:24 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:06:00.743 22:22:24 accel.accel_dif_generate -- 
accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:00.743 00:06:00.743 real 0m1.340s 00:06:00.743 user 0m1.241s 00:06:00.743 sys 0m0.114s 00:06:00.743 22:22:24 accel.accel_dif_generate -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:00.743 22:22:24 accel.accel_dif_generate -- common/autotest_common.sh@10 -- # set +x 00:06:00.743 ************************************ 00:06:00.743 END TEST accel_dif_generate 00:06:00.743 ************************************ 00:06:00.743 22:22:24 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:00.743 22:22:24 accel -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:06:00.743 22:22:24 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:06:00.743 22:22:24 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:00.743 22:22:24 accel -- common/autotest_common.sh@10 -- # set +x 00:06:00.743 ************************************ 00:06:00.743 START TEST accel_dif_generate_copy 00:06:00.743 ************************************ 00:06:00.743 22:22:24 accel.accel_dif_generate_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate_copy 00:06:00.743 22:22:24 accel.accel_dif_generate_copy -- accel/accel.sh@16 -- # local accel_opc 00:06:00.743 22:22:24 accel.accel_dif_generate_copy -- accel/accel.sh@17 -- # local accel_module 00:06:00.743 22:22:24 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:00.743 22:22:24 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:00.743 22:22:24 accel.accel_dif_generate_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:06:01.003 22:22:24 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:06:01.003 22:22:24 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config 00:06:01.003 22:22:24 accel.accel_dif_generate_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:01.003 22:22:24 accel.accel_dif_generate_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:01.003 22:22:24 accel.accel_dif_generate_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:01.003 22:22:24 accel.accel_dif_generate_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:01.003 22:22:24 accel.accel_dif_generate_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:01.003 22:22:24 accel.accel_dif_generate_copy -- accel/accel.sh@40 -- # local IFS=, 00:06:01.003 22:22:24 accel.accel_dif_generate_copy -- accel/accel.sh@41 -- # jq -r . 00:06:01.003 [2024-07-15 22:22:24.737527] Starting SPDK v24.09-pre git sha1 f8598a71f / DPDK 24.03.0 initialization... 
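The dif_generate pass above and the dif_generate_copy pass initializing here both drive the same accel_perf example binary; only the -w workload name changes between tests. A minimal way to rerun one workload by hand — a sketch only, assuming a built SPDK tree at the workspace path shown in this log and assuming the JSON accel config the harness pipes in on /dev/fd/62 can be omitted for the default software module (neither assumption is verified here):

    # Sketch: 1-second software dif_generate_copy run outside the harness.
    # SPDK is shorthand introduced here for the workspace path in this log.
    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    $SPDK/build/examples/accel_perf -t 1 -w dif_generate_copy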
00:06:01.003 [2024-07-15 22:22:24.737605] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4032560 ] 00:06:01.003 EAL: No free 2048 kB hugepages reported on node 1 00:06:01.003 [2024-07-15 22:22:24.794992] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:01.003 [2024-07-15 22:22:24.868636] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:01.003 22:22:24 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:01.003 22:22:24 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:01.003 22:22:24 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:01.003 22:22:24 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:01.003 22:22:24 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:01.003 22:22:24 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:01.003 22:22:24 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:01.003 22:22:24 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:01.003 22:22:24 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=0x1 00:06:01.003 22:22:24 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:01.003 22:22:24 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:01.003 22:22:24 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:01.003 22:22:24 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:01.003 22:22:24 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:01.003 22:22:24 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:01.003 22:22:24 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:01.003 22:22:24 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:01.003 22:22:24 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:01.003 22:22:24 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:01.003 22:22:24 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:01.003 22:22:24 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=dif_generate_copy 00:06:01.003 22:22:24 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:01.003 22:22:24 accel.accel_dif_generate_copy -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:06:01.003 22:22:24 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:01.003 22:22:24 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:01.003 22:22:24 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:01.003 22:22:24 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:01.003 22:22:24 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:01.003 22:22:24 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:01.003 22:22:24 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:01.003 22:22:24 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:01.003 22:22:24 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:01.003 22:22:24 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var 
val 00:06:01.003 22:22:24 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:01.003 22:22:24 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:01.003 22:22:24 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:01.003 22:22:24 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:01.004 22:22:24 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=software 00:06:01.004 22:22:24 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:01.004 22:22:24 accel.accel_dif_generate_copy -- accel/accel.sh@22 -- # accel_module=software 00:06:01.004 22:22:24 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:01.004 22:22:24 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:01.004 22:22:24 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:06:01.004 22:22:24 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:01.004 22:22:24 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:01.004 22:22:24 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:01.004 22:22:24 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:06:01.004 22:22:24 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:01.004 22:22:24 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:01.004 22:22:24 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:01.004 22:22:24 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=1 00:06:01.004 22:22:24 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:01.004 22:22:24 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:01.004 22:22:24 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:01.004 22:22:24 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:06:01.004 22:22:24 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:01.004 22:22:24 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:01.004 22:22:24 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:01.004 22:22:24 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=No 00:06:01.004 22:22:24 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:01.004 22:22:24 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:01.004 22:22:24 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:01.004 22:22:24 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:01.004 22:22:24 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:01.004 22:22:24 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:01.004 22:22:24 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:01.004 22:22:24 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:01.004 22:22:24 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:01.004 22:22:24 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:01.004 22:22:24 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:02.382 22:22:26 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:02.382 22:22:26 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:02.382 22:22:26 accel.accel_dif_generate_copy -- 
accel/accel.sh@19 -- # IFS=: 00:06:02.382 22:22:26 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:02.382 22:22:26 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:02.382 22:22:26 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:02.382 22:22:26 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:02.382 22:22:26 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:02.382 22:22:26 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:02.382 22:22:26 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:02.382 22:22:26 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:02.382 22:22:26 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:02.382 22:22:26 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:02.382 22:22:26 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:02.382 22:22:26 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:02.383 22:22:26 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:02.383 22:22:26 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:02.383 22:22:26 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:02.383 22:22:26 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:02.383 22:22:26 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:02.383 22:22:26 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:02.383 22:22:26 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:02.383 22:22:26 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:02.383 22:22:26 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:02.383 22:22:26 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:02.383 22:22:26 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:06:02.383 22:22:26 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:02.383 00:06:02.383 real 0m1.340s 00:06:02.383 user 0m1.230s 00:06:02.383 sys 0m0.123s 00:06:02.383 22:22:26 accel.accel_dif_generate_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:02.383 22:22:26 accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x 00:06:02.383 ************************************ 00:06:02.383 END TEST accel_dif_generate_copy 00:06:02.383 ************************************ 00:06:02.383 22:22:26 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:02.383 22:22:26 accel -- accel/accel.sh@115 -- # [[ y == y ]] 00:06:02.383 22:22:26 accel -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:02.383 22:22:26 accel -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:06:02.383 22:22:26 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:02.383 22:22:26 accel -- common/autotest_common.sh@10 -- # set +x 00:06:02.383 ************************************ 00:06:02.383 START TEST accel_comp 00:06:02.383 ************************************ 00:06:02.383 22:22:26 accel.accel_comp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:02.383 22:22:26 accel.accel_comp -- 
accel/accel.sh@16 -- # local accel_opc 00:06:02.383 22:22:26 accel.accel_comp -- accel/accel.sh@17 -- # local accel_module 00:06:02.383 22:22:26 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:02.383 22:22:26 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:02.383 22:22:26 accel.accel_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:02.383 22:22:26 accel.accel_comp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:02.383 22:22:26 accel.accel_comp -- accel/accel.sh@12 -- # build_accel_config 00:06:02.383 22:22:26 accel.accel_comp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:02.383 22:22:26 accel.accel_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:02.383 22:22:26 accel.accel_comp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:02.383 22:22:26 accel.accel_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:02.383 22:22:26 accel.accel_comp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:02.383 22:22:26 accel.accel_comp -- accel/accel.sh@40 -- # local IFS=, 00:06:02.383 22:22:26 accel.accel_comp -- accel/accel.sh@41 -- # jq -r . 00:06:02.383 [2024-07-15 22:22:26.137746] Starting SPDK v24.09-pre git sha1 f8598a71f / DPDK 24.03.0 initialization... 00:06:02.383 [2024-07-15 22:22:26.137807] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4032845 ] 00:06:02.383 EAL: No free 2048 kB hugepages reported on node 1 00:06:02.383 [2024-07-15 22:22:26.192867] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:02.383 [2024-07-15 22:22:26.267921] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:02.383 22:22:26 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:02.383 22:22:26 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:02.383 22:22:26 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:02.383 22:22:26 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:02.383 22:22:26 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:02.383 22:22:26 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:02.383 22:22:26 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:02.383 22:22:26 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:02.383 22:22:26 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:02.383 22:22:26 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:02.383 22:22:26 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:02.383 22:22:26 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:02.383 22:22:26 accel.accel_comp -- accel/accel.sh@20 -- # val=0x1 00:06:02.383 22:22:26 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:02.383 22:22:26 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:02.383 22:22:26 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:02.383 22:22:26 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:02.383 22:22:26 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:02.383 22:22:26 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:02.383 22:22:26 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:02.383 22:22:26 accel.accel_comp -- 
accel/accel.sh@20 -- # val= 00:06:02.383 22:22:26 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:02.383 22:22:26 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:02.383 22:22:26 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:02.383 22:22:26 accel.accel_comp -- accel/accel.sh@20 -- # val=compress 00:06:02.383 22:22:26 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:02.383 22:22:26 accel.accel_comp -- accel/accel.sh@23 -- # accel_opc=compress 00:06:02.383 22:22:26 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:02.383 22:22:26 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:02.383 22:22:26 accel.accel_comp -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:02.383 22:22:26 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:02.383 22:22:26 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:02.383 22:22:26 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:02.383 22:22:26 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:02.383 22:22:26 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:02.383 22:22:26 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:02.383 22:22:26 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:02.383 22:22:26 accel.accel_comp -- accel/accel.sh@20 -- # val=software 00:06:02.383 22:22:26 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:02.383 22:22:26 accel.accel_comp -- accel/accel.sh@22 -- # accel_module=software 00:06:02.383 22:22:26 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:02.383 22:22:26 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:02.383 22:22:26 accel.accel_comp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:02.383 22:22:26 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:02.383 22:22:26 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:02.383 22:22:26 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:02.383 22:22:26 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:06:02.383 22:22:26 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:02.383 22:22:26 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:02.383 22:22:26 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:02.383 22:22:26 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:06:02.383 22:22:26 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:02.383 22:22:26 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:02.383 22:22:26 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:02.383 22:22:26 accel.accel_comp -- accel/accel.sh@20 -- # val=1 00:06:02.383 22:22:26 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:02.383 22:22:26 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:02.383 22:22:26 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:02.383 22:22:26 accel.accel_comp -- accel/accel.sh@20 -- # val='1 seconds' 00:06:02.383 22:22:26 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:02.383 22:22:26 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:02.383 22:22:26 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:02.383 22:22:26 accel.accel_comp -- accel/accel.sh@20 -- # val=No 00:06:02.383 22:22:26 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:02.383 22:22:26 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:02.383 22:22:26 accel.accel_comp -- accel/accel.sh@19 -- # read -r 
var val 00:06:02.383 22:22:26 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:02.383 22:22:26 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:02.383 22:22:26 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:02.383 22:22:26 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:02.383 22:22:26 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:02.383 22:22:26 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:02.383 22:22:26 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:02.383 22:22:26 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:03.761 22:22:27 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:03.761 22:22:27 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:03.761 22:22:27 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:03.761 22:22:27 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:03.761 22:22:27 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:03.761 22:22:27 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:03.761 22:22:27 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:03.761 22:22:27 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:03.761 22:22:27 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:03.761 22:22:27 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:03.761 22:22:27 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:03.761 22:22:27 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:03.761 22:22:27 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:03.761 22:22:27 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:03.761 22:22:27 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:03.761 22:22:27 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:03.761 22:22:27 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:03.761 22:22:27 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:03.761 22:22:27 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:03.761 22:22:27 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:03.761 22:22:27 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:03.761 22:22:27 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:03.761 22:22:27 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:03.761 22:22:27 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:03.761 22:22:27 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:03.761 22:22:27 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]] 00:06:03.761 22:22:27 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:03.761 00:06:03.761 real 0m1.345s 00:06:03.761 user 0m1.244s 00:06:03.761 sys 0m0.115s 00:06:03.761 22:22:27 accel.accel_comp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:03.761 22:22:27 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x 00:06:03.761 ************************************ 00:06:03.761 END TEST accel_comp 00:06:03.761 ************************************ 00:06:03.761 22:22:27 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:03.761 22:22:27 accel -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:03.761 22:22:27 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:06:03.761 22:22:27 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:03.761 22:22:27 accel -- 
common/autotest_common.sh@10 -- # set +x 00:06:03.761 ************************************ 00:06:03.761 START TEST accel_decomp 00:06:03.761 ************************************ 00:06:03.761 22:22:27 accel.accel_decomp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:03.761 22:22:27 accel.accel_decomp -- accel/accel.sh@16 -- # local accel_opc 00:06:03.761 22:22:27 accel.accel_decomp -- accel/accel.sh@17 -- # local accel_module 00:06:03.761 22:22:27 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:03.761 22:22:27 accel.accel_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:03.761 22:22:27 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:03.761 22:22:27 accel.accel_decomp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:03.761 22:22:27 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config 00:06:03.761 22:22:27 accel.accel_decomp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:03.761 22:22:27 accel.accel_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:03.761 22:22:27 accel.accel_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:03.761 22:22:27 accel.accel_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:03.761 22:22:27 accel.accel_decomp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:03.761 22:22:27 accel.accel_decomp -- accel/accel.sh@40 -- # local IFS=, 00:06:03.761 22:22:27 accel.accel_decomp -- accel/accel.sh@41 -- # jq -r . 00:06:03.761 [2024-07-15 22:22:27.546426] Starting SPDK v24.09-pre git sha1 f8598a71f / DPDK 24.03.0 initialization... 
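The compress test above and the decompress test initializing here feed accel_perf a real input file, test/accel/bib, via -l, and the decompress variants carry -y; reading these as "input file" and "verify" is inferred from the command lines recorded in this log, not confirmed against accel_perf's help text. The single-core decompress run can be restated directly from the run_test line above:

    # Sketch: flags taken verbatim from the logged invocation.
    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    $SPDK/build/examples/accel_perf -t 1 -w decompress -l $SPDK/test/accel/bib -y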
00:06:03.761 [2024-07-15 22:22:27.546483] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4033110 ] 00:06:03.761 EAL: No free 2048 kB hugepages reported on node 1 00:06:03.761 [2024-07-15 22:22:27.603085] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:03.761 [2024-07-15 22:22:27.678562] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:03.761 22:22:27 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:03.761 22:22:27 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:03.761 22:22:27 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:03.761 22:22:27 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:03.761 22:22:27 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:03.761 22:22:27 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:03.761 22:22:27 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:03.761 22:22:27 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:03.761 22:22:27 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:03.761 22:22:27 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:03.761 22:22:27 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:03.761 22:22:27 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:03.761 22:22:27 accel.accel_decomp -- accel/accel.sh@20 -- # val=0x1 00:06:03.761 22:22:27 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:03.761 22:22:27 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:03.761 22:22:27 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:03.761 22:22:27 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:03.761 22:22:27 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:03.761 22:22:27 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:03.761 22:22:27 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:03.761 22:22:27 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:03.761 22:22:27 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:03.761 22:22:27 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:03.761 22:22:27 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:03.762 22:22:27 accel.accel_decomp -- accel/accel.sh@20 -- # val=decompress 00:06:03.762 22:22:27 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:03.762 22:22:27 accel.accel_decomp -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:03.762 22:22:27 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:03.762 22:22:27 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:03.762 22:22:27 accel.accel_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:03.762 22:22:27 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:03.762 22:22:27 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:03.762 22:22:27 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:03.762 22:22:27 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:03.762 22:22:27 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:03.762 22:22:27 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:03.762 22:22:27 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:03.762 22:22:27 accel.accel_decomp -- accel/accel.sh@20 -- # 
val=software 00:06:03.762 22:22:27 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:03.762 22:22:27 accel.accel_decomp -- accel/accel.sh@22 -- # accel_module=software 00:06:03.762 22:22:27 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:03.762 22:22:27 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:04.068 22:22:27 accel.accel_decomp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:04.068 22:22:27 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:04.068 22:22:27 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:04.068 22:22:27 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:04.068 22:22:27 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:06:04.068 22:22:27 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:04.068 22:22:27 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:04.068 22:22:27 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:04.068 22:22:27 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:06:04.068 22:22:27 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:04.068 22:22:27 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:04.068 22:22:27 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:04.068 22:22:27 accel.accel_decomp -- accel/accel.sh@20 -- # val=1 00:06:04.068 22:22:27 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:04.068 22:22:27 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:04.068 22:22:27 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:04.068 22:22:27 accel.accel_decomp -- accel/accel.sh@20 -- # val='1 seconds' 00:06:04.068 22:22:27 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:04.068 22:22:27 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:04.068 22:22:27 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:04.068 22:22:27 accel.accel_decomp -- accel/accel.sh@20 -- # val=Yes 00:06:04.068 22:22:27 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:04.068 22:22:27 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:04.068 22:22:27 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:04.068 22:22:27 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:04.068 22:22:27 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:04.068 22:22:27 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:04.068 22:22:27 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:04.068 22:22:27 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:04.068 22:22:27 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:04.068 22:22:27 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:04.068 22:22:27 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:05.028 22:22:28 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:05.028 22:22:28 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:05.028 22:22:28 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:05.028 22:22:28 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:05.028 22:22:28 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:05.028 22:22:28 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:05.028 22:22:28 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:05.028 22:22:28 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:05.028 22:22:28 
accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:05.028 22:22:28 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:05.028 22:22:28 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:05.028 22:22:28 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:05.028 22:22:28 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:05.028 22:22:28 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:05.028 22:22:28 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:05.028 22:22:28 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:05.028 22:22:28 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:05.028 22:22:28 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:05.028 22:22:28 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:05.028 22:22:28 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:05.028 22:22:28 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:05.028 22:22:28 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:05.028 22:22:28 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:05.028 22:22:28 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:05.028 22:22:28 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:05.028 22:22:28 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:05.028 22:22:28 accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:05.028 00:06:05.028 real 0m1.341s 00:06:05.028 user 0m1.237s 00:06:05.028 sys 0m0.118s 00:06:05.028 22:22:28 accel.accel_decomp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:05.028 22:22:28 accel.accel_decomp -- common/autotest_common.sh@10 -- # set +x 00:06:05.028 ************************************ 00:06:05.028 END TEST accel_decomp 00:06:05.028 ************************************ 00:06:05.028 22:22:28 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:05.028 22:22:28 accel -- accel/accel.sh@118 -- # run_test accel_decomp_full accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:05.028 22:22:28 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:06:05.028 22:22:28 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:05.028 22:22:28 accel -- common/autotest_common.sh@10 -- # set +x 00:06:05.028 ************************************ 00:06:05.028 START TEST accel_decomp_full 00:06:05.028 ************************************ 00:06:05.028 22:22:28 accel.accel_decomp_full -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:05.028 22:22:28 accel.accel_decomp_full -- accel/accel.sh@16 -- # local accel_opc 00:06:05.028 22:22:28 accel.accel_decomp_full -- accel/accel.sh@17 -- # local accel_module 00:06:05.028 22:22:28 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:05.028 22:22:28 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:05.028 22:22:28 accel.accel_decomp_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:05.028 22:22:28 accel.accel_decomp_full -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:05.028 22:22:28 
accel.accel_decomp_full -- accel/accel.sh@12 -- # build_accel_config 00:06:05.028 22:22:28 accel.accel_decomp_full -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:05.028 22:22:28 accel.accel_decomp_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:05.028 22:22:28 accel.accel_decomp_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:05.028 22:22:28 accel.accel_decomp_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:05.028 22:22:28 accel.accel_decomp_full -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:05.028 22:22:28 accel.accel_decomp_full -- accel/accel.sh@40 -- # local IFS=, 00:06:05.028 22:22:28 accel.accel_decomp_full -- accel/accel.sh@41 -- # jq -r . 00:06:05.028 [2024-07-15 22:22:28.956076] Starting SPDK v24.09-pre git sha1 f8598a71f / DPDK 24.03.0 initialization... 00:06:05.028 [2024-07-15 22:22:28.956125] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4033375 ] 00:06:05.028 EAL: No free 2048 kB hugepages reported on node 1 00:06:05.286 [2024-07-15 22:22:29.012038] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:05.286 [2024-07-15 22:22:29.084614] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:05.286 22:22:29 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:05.286 22:22:29 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:05.286 22:22:29 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:05.286 22:22:29 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:05.287 22:22:29 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:05.287 22:22:29 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:05.287 22:22:29 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:05.287 22:22:29 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:05.287 22:22:29 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:05.287 22:22:29 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:05.287 22:22:29 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:05.287 22:22:29 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:05.287 22:22:29 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=0x1 00:06:05.287 22:22:29 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:05.287 22:22:29 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:05.287 22:22:29 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:05.287 22:22:29 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:05.287 22:22:29 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:05.287 22:22:29 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:05.287 22:22:29 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:05.287 22:22:29 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:05.287 22:22:29 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:05.287 22:22:29 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:05.287 22:22:29 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:05.287 22:22:29 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=decompress 00:06:05.287 22:22:29 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:05.287 22:22:29 
accel.accel_decomp_full -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:05.287 22:22:29 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:05.287 22:22:29 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:05.287 22:22:29 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='111250 bytes' 00:06:05.287 22:22:29 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:05.287 22:22:29 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:05.287 22:22:29 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:05.287 22:22:29 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:05.287 22:22:29 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:05.287 22:22:29 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:05.287 22:22:29 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:05.287 22:22:29 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=software 00:06:05.287 22:22:29 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:05.287 22:22:29 accel.accel_decomp_full -- accel/accel.sh@22 -- # accel_module=software 00:06:05.287 22:22:29 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:05.287 22:22:29 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:05.287 22:22:29 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:05.287 22:22:29 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:05.287 22:22:29 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:05.287 22:22:29 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:05.287 22:22:29 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:06:05.287 22:22:29 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:05.287 22:22:29 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:05.287 22:22:29 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:05.287 22:22:29 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:06:05.287 22:22:29 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:05.287 22:22:29 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:05.287 22:22:29 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:05.287 22:22:29 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=1 00:06:05.287 22:22:29 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:05.287 22:22:29 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:05.287 22:22:29 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:05.287 22:22:29 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='1 seconds' 00:06:05.287 22:22:29 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:05.287 22:22:29 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:05.287 22:22:29 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:05.287 22:22:29 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=Yes 00:06:05.287 22:22:29 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:05.287 22:22:29 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:05.287 22:22:29 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:05.287 22:22:29 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:05.287 22:22:29 accel.accel_decomp_full -- accel/accel.sh@21 -- # 
case "$var" in 00:06:05.287 22:22:29 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:05.287 22:22:29 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:05.287 22:22:29 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:05.287 22:22:29 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:05.287 22:22:29 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:05.287 22:22:29 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:06.662 22:22:30 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:06.662 22:22:30 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:06.662 22:22:30 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:06.662 22:22:30 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:06.662 22:22:30 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:06.662 22:22:30 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:06.662 22:22:30 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:06.662 22:22:30 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:06.662 22:22:30 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:06.662 22:22:30 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:06.662 22:22:30 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:06.662 22:22:30 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:06.662 22:22:30 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:06.662 22:22:30 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:06.662 22:22:30 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:06.662 22:22:30 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:06.662 22:22:30 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:06.662 22:22:30 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:06.662 22:22:30 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:06.662 22:22:30 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:06.662 22:22:30 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:06.662 22:22:30 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:06.662 22:22:30 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:06.662 22:22:30 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:06.662 22:22:30 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:06.662 22:22:30 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:06.662 22:22:30 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:06.662 00:06:06.663 real 0m1.348s 00:06:06.663 user 0m1.251s 00:06:06.663 sys 0m0.111s 00:06:06.663 22:22:30 accel.accel_decomp_full -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:06.663 22:22:30 accel.accel_decomp_full -- common/autotest_common.sh@10 -- # set +x 00:06:06.663 ************************************ 00:06:06.663 END TEST accel_decomp_full 00:06:06.663 ************************************ 00:06:06.663 22:22:30 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:06.663 22:22:30 accel -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:06.663 22:22:30 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 
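accel_decomp_mcore, invoked just above, adds -m 0xf to the same decompress command, and the trace below shows where that lands: the DPDK EAL parameters carry -c 0xf, "Total cores available: 4" is reported, and reactors start on cores 0 through 3, so -m evidently sets the core mask. Restated as a sketch:

    # Sketch: four-core variant of the decompress run; -m 0xf is taken
    # verbatim from the run_test line above and interpreted as a core mask
    # per the four reactor start-up notices that follow in the trace.
    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    $SPDK/build/examples/accel_perf -t 1 -w decompress -l $SPDK/test/accel/bib -y -m 0xf

Consistent with four workers, this test later reports user 0m4.574s against real 0m1.351s.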
00:06:06.663 22:22:30 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:06.663 22:22:30 accel -- common/autotest_common.sh@10 -- # set +x 00:06:06.663 ************************************ 00:06:06.663 START TEST accel_decomp_mcore 00:06:06.663 ************************************ 00:06:06.663 22:22:30 accel.accel_decomp_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:06.663 22:22:30 accel.accel_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc 00:06:06.663 22:22:30 accel.accel_decomp_mcore -- accel/accel.sh@17 -- # local accel_module 00:06:06.663 22:22:30 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:06.663 22:22:30 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:06.663 22:22:30 accel.accel_decomp_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:06.663 22:22:30 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:06.663 22:22:30 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config 00:06:06.663 22:22:30 accel.accel_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:06.663 22:22:30 accel.accel_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:06.663 22:22:30 accel.accel_decomp_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:06.663 22:22:30 accel.accel_decomp_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:06.663 22:22:30 accel.accel_decomp_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:06.663 22:22:30 accel.accel_decomp_mcore -- accel/accel.sh@40 -- # local IFS=, 00:06:06.663 22:22:30 accel.accel_decomp_mcore -- accel/accel.sh@41 -- # jq -r . 00:06:06.663 [2024-07-15 22:22:30.372208] Starting SPDK v24.09-pre git sha1 f8598a71f / DPDK 24.03.0 initialization... 
00:06:06.663 [2024-07-15 22:22:30.372263] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4033646 ] 00:06:06.663 EAL: No free 2048 kB hugepages reported on node 1 00:06:06.663 [2024-07-15 22:22:30.427260] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:06.663 [2024-07-15 22:22:30.502484] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:06.663 [2024-07-15 22:22:30.502593] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:06.663 [2024-07-15 22:22:30.502700] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:06.663 [2024-07-15 22:22:30.502707] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:06.663 22:22:30 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:06.663 22:22:30 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:06.663 22:22:30 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:06.663 22:22:30 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:06.663 22:22:30 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:06.663 22:22:30 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:06.663 22:22:30 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:06.663 22:22:30 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:06.663 22:22:30 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:06.663 22:22:30 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:06.663 22:22:30 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:06.663 22:22:30 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:06.663 22:22:30 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=0xf 00:06:06.663 22:22:30 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:06.663 22:22:30 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:06.663 22:22:30 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:06.663 22:22:30 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:06.663 22:22:30 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:06.663 22:22:30 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:06.663 22:22:30 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:06.663 22:22:30 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:06.663 22:22:30 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:06.663 22:22:30 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:06.663 22:22:30 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:06.663 22:22:30 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=decompress 00:06:06.663 22:22:30 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:06.663 22:22:30 accel.accel_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:06.663 22:22:30 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:06.663 22:22:30 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:06.663 22:22:30 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:06.663 22:22:30 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:06.663 22:22:30 
00:06:06.663 22:22:30 accel.accel_decomp_mcore -- accel/accel.sh@19-22 -- # [option-parse loop condensed; repeated case/read iterations omitted] values read: accel_module=software, input=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib, 32, 32, 1, '1 seconds', Yes
00:06:08.043 22:22:31 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]]
00:06:08.043 22:22:31 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]]
00:06:08.043 22:22:31 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:06:08.043
00:06:08.043 real	0m1.351s
00:06:08.043 user	0m4.574s
00:06:08.043 sys	0m0.124s
00:06:08.043 ************************************
00:06:08.043 END TEST accel_decomp_mcore
00:06:08.043 ************************************
00:06:08.043 22:22:31 accel -- common/autotest_common.sh@1142 -- # return 0
00:06:08.043 22:22:31 accel -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf
00:06:08.043 ************************************
00:06:08.043 START TEST accel_decomp_full_mcore
00:06:08.043 ************************************
00:06:08.043 22:22:31 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf
00:06:08.043 22:22:31 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config [condensed: accel_json_cfg=() left empty, piped through jq -r .]
00:06:08.043 [2024-07-15 22:22:31.788779] Starting SPDK v24.09-pre git sha1 f8598a71f / DPDK 24.03.0 initialization...
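The full-mcore case starting here is accel_perf pinned to four cores. A standalone sketch of an equivalent invocation, with flag meanings inferred from how accel.sh uses them in this log (the runner additionally passes -c /dev/fd/62 for an accel JSON config, which is empty in these runs):

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    # -m 0xf: core mask (four reactors, matching "Total cores available: 4" below)
    # -t 1: run for 1 second; -w decompress: opcode under test
    # -l: compressed input file; -y: verify the decompressed output
    # -o 0: "full" variant -- the parse trace reads '111250 bytes', i.e. the whole file per op
    "$SPDK/build/examples/accel_perf" -m 0xf -t 1 -w decompress \
        -l "$SPDK/test/accel/bib" -o 0 -y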
00:06:08.043 [2024-07-15 22:22:31.788827] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4033910 ]
00:06:08.043 EAL: No free 2048 kB hugepages reported on node 1
00:06:08.043 [2024-07-15 22:22:31.843613] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4
00:06:08.043 [2024-07-15 22:22:31.917840] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:06:08.043 [2024-07-15 22:22:31.917933] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:06:08.043 [2024-07-15 22:22:31.918013] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3
00:06:08.043 [2024-07-15 22:22:31.918015] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:06:08.043 22:22:31 accel.accel_decomp_full_mcore -- accel/accel.sh@19-23 -- # [option-parse loop condensed; repeated case/read iterations omitted] values read: core mask 0xf, accel_opc=decompress, '111250 bytes', accel_module=software, input=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib, 32, 32, 1, '1 seconds', Yes
00:06:09.424 22:22:33 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n software ]]
00:06:09.424 22:22:33 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]]
00:06:09.424 22:22:33 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:06:09.424
00:06:09.424 real	0m1.358s
00:06:09.424 user	0m4.611s
00:06:09.424 sys	0m0.116s
00:06:09.424 ************************************
00:06:09.424 END TEST accel_decomp_full_mcore
00:06:09.424 ************************************
00:06:09.424 22:22:33 accel -- common/autotest_common.sh@1142 -- # return 0
00:06:09.424 22:22:33 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2
00:06:09.424 ************************************
00:06:09.424 START TEST accel_decomp_mthread
00:06:09.424 ************************************
00:06:09.424 22:22:33 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2
00:06:09.424 22:22:33 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config [condensed: accel_json_cfg=() left empty, piped through jq -r .]
00:06:09.424 [2024-07-15 22:22:33.213933] Starting SPDK v24.09-pre git sha1 f8598a71f / DPDK 24.03.0 initialization...
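Each accel_test run hands accel_perf its JSON accel config on file descriptor 62 (the -c /dev/fd/62 above) rather than writing a temp file; with accel_json_cfg empty, jq emits an empty document. One way to reproduce that plumbing from bash, sketched under those assumptions:

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    cfg='{}'   # empty accel config, as in these runs where accel_json_cfg=() adds nothing
    # expose the config on fd 62 via process substitution so accel_perf can open /dev/fd/62
    "$SPDK/build/examples/accel_perf" -c /dev/fd/62 -t 1 -w decompress \
        -l "$SPDK/test/accel/bib" -y -T 2 \
        62< <(printf '%s\n' "$cfg")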
00:06:09.424 [2024-07-15 22:22:33.213981] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4034165 ]
00:06:09.424 EAL: No free 2048 kB hugepages reported on node 1
00:06:09.424 [2024-07-15 22:22:33.268620] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:09.424 [2024-07-15 22:22:33.340719] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:06:09.424 22:22:33 accel.accel_decomp_mthread -- accel/accel.sh@19-23 -- # [option-parse loop condensed; repeated case/read iterations omitted] values read: core mask 0x1, accel_opc=decompress, '4096 bytes', accel_module=software, input=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib, 32, 32, 2, '1 seconds', Yes
00:06:10.636 22:22:34 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]]
00:06:10.636 22:22:34 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]]
00:06:10.636 22:22:34 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:06:10.636
00:06:10.636 real	0m1.342s
00:06:10.636 user	0m1.241s
00:06:10.636 sys	0m0.114s
00:06:10.636 ************************************
00:06:10.636 END TEST accel_decomp_mthread
00:06:10.636 ************************************
00:06:10.636 22:22:34 accel -- common/autotest_common.sh@1142 -- # return 0
00:06:10.636 22:22:34 accel -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2
00:06:10.636 ************************************
00:06:10.636 START TEST accel_decomp_full_mthread
00:06:10.636 ************************************
00:06:10.637 22:22:34 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2
00:06:10.637 22:22:34 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config [condensed: accel_json_cfg=() left empty, piped through jq -r .]
00:06:10.896 [2024-07-15 22:22:34.617768] Starting SPDK v24.09-pre git sha1 f8598a71f / DPDK 24.03.0 initialization...
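The two mthread variants differ only in -o: the plain run above decompressed in 4096-byte chunks, while this full variant (-o 0) submits the entire 111250-byte input per operation. A quick side-by-side on one core, assuming 4096 is the default io size as the parse traces suggest:

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    for osize in 4096 0; do   # 0 = whole input buffer per operation
        echo "== decompress, -o $osize, -T 2 threads per core =="
        "$SPDK/build/examples/accel_perf" -t 1 -w decompress \
            -l "$SPDK/test/accel/bib" -y -T 2 -o "$osize"
    done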
00:06:10.896 [2024-07-15 22:22:34.617827] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4034414 ]
00:06:10.896 EAL: No free 2048 kB hugepages reported on node 1
00:06:10.896 [2024-07-15 22:22:34.672975] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:10.896 [2024-07-15 22:22:34.744370] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:06:10.896 22:22:34 accel.accel_decomp_full_mthread -- accel/accel.sh@19-23 -- # [option-parse loop condensed; repeated case/read iterations omitted] values read: core mask 0x1, accel_opc=decompress, '111250 bytes', accel_module=software, input=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib, 32, 32, 2, '1 seconds', Yes
00:06:12.274 22:22:35 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]]
00:06:12.274 22:22:35 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]]
00:06:12.274 22:22:35 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:06:12.274
00:06:12.274 real	0m1.358s
00:06:12.274 user	0m1.257s
00:06:12.274 sys	0m0.114s
00:06:12.274 22:22:35 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable
00:06:12.275 22:22:35 accel.accel_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x
00:06:12.275 ************************************ END
TEST accel_decomp_full_mthread 00:06:12.274 ************************************ 00:06:12.274 22:22:35 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:12.274 22:22:35 accel -- accel/accel.sh@124 -- # [[ n == y ]] 00:06:12.275 22:22:35 accel -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:06:12.275 22:22:35 accel -- accel/accel.sh@137 -- # build_accel_config 00:06:12.275 22:22:35 accel -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:12.275 22:22:35 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:12.275 22:22:35 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:12.275 22:22:35 accel -- common/autotest_common.sh@10 -- # set +x 00:06:12.275 22:22:35 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:12.275 22:22:35 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:12.275 22:22:35 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:12.275 22:22:35 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:12.275 22:22:35 accel -- accel/accel.sh@40 -- # local IFS=, 00:06:12.275 22:22:35 accel -- accel/accel.sh@41 -- # jq -r . 00:06:12.275 ************************************ 00:06:12.275 START TEST accel_dif_functional_tests 00:06:12.275 ************************************ 00:06:12.275 22:22:36 accel.accel_dif_functional_tests -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:06:12.275 [2024-07-15 22:22:36.058150] Starting SPDK v24.09-pre git sha1 f8598a71f / DPDK 24.03.0 initialization... 00:06:12.275 [2024-07-15 22:22:36.058182] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4034666 ] 00:06:12.275 EAL: No free 2048 kB hugepages reported on node 1 00:06:12.275 [2024-07-15 22:22:36.109420] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:12.275 [2024-07-15 22:22:36.183034] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:12.275 [2024-07-15 22:22:36.183131] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.275 [2024-07-15 22:22:36.183121] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:12.534 00:06:12.534 00:06:12.534 CUnit - A unit testing framework for C - Version 2.1-3 00:06:12.534 http://cunit.sourceforge.net/ 00:06:12.534 00:06:12.534 00:06:12.534 Suite: accel_dif 00:06:12.534 Test: verify: DIF generated, GUARD check ...passed 00:06:12.534 Test: verify: DIF generated, APPTAG check ...passed 00:06:12.534 Test: verify: DIF generated, REFTAG check ...passed 00:06:12.534 Test: verify: DIF not generated, GUARD check ...[2024-07-15 22:22:36.251787] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:06:12.534 passed 00:06:12.534 Test: verify: DIF not generated, APPTAG check ...[2024-07-15 22:22:36.251833] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:06:12.534 passed 00:06:12.534 Test: verify: DIF not generated, REFTAG check ...[2024-07-15 22:22:36.251868] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:06:12.534 passed 00:06:12.534 Test: verify: APPTAG correct, APPTAG check ...passed 00:06:12.534 Test: verify: APPTAG incorrect, APPTAG check ...[2024-07-15 
22:22:36.251910] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:06:12.534 passed 00:06:12.534 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:06:12.534 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:06:12.534 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:06:12.534 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-07-15 22:22:36.252010] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:06:12.534 passed 00:06:12.534 Test: verify copy: DIF generated, GUARD check ...passed 00:06:12.534 Test: verify copy: DIF generated, APPTAG check ...passed 00:06:12.534 Test: verify copy: DIF generated, REFTAG check ...passed 00:06:12.534 Test: verify copy: DIF not generated, GUARD check ...[2024-07-15 22:22:36.252114] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:06:12.534 passed 00:06:12.534 Test: verify copy: DIF not generated, APPTAG check ...[2024-07-15 22:22:36.252137] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:06:12.534 passed 00:06:12.534 Test: verify copy: DIF not generated, REFTAG check ...[2024-07-15 22:22:36.252156] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:06:12.534 passed 00:06:12.534 Test: generate copy: DIF generated, GUARD check ...passed 00:06:12.534 Test: generate copy: DIF generated, APTTAG check ...passed 00:06:12.534 Test: generate copy: DIF generated, REFTAG check ...passed 00:06:12.534 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:06:12.534 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:06:12.534 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:06:12.534 Test: generate copy: iovecs-len validate ...[2024-07-15 22:22:36.252329] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
00:06:12.534 passed 00:06:12.534 Test: generate copy: buffer alignment validate ...passed 00:06:12.534 00:06:12.534 Run Summary: Type Total Ran Passed Failed Inactive 00:06:12.534 suites 1 1 n/a 0 0 00:06:12.534 tests 26 26 26 0 0 00:06:12.534 asserts 115 115 115 0 n/a 00:06:12.534 00:06:12.534 Elapsed time = 0.002 seconds 00:06:12.534 00:06:12.534 real 0m0.401s 00:06:12.534 user 0m0.626s 00:06:12.534 sys 0m0.134s 00:06:12.534 22:22:36 accel.accel_dif_functional_tests -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:12.534 22:22:36 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x 00:06:12.534 ************************************ 00:06:12.534 END TEST accel_dif_functional_tests 00:06:12.534 ************************************ 00:06:12.534 22:22:36 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:12.534 00:06:12.534 real 0m31.097s 00:06:12.534 user 0m34.907s 00:06:12.534 sys 0m4.257s 00:06:12.534 22:22:36 accel -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:12.534 22:22:36 accel -- common/autotest_common.sh@10 -- # set +x 00:06:12.534 ************************************ 00:06:12.534 END TEST accel 00:06:12.534 ************************************ 00:06:12.534 22:22:36 -- common/autotest_common.sh@1142 -- # return 0 00:06:12.534 22:22:36 -- spdk/autotest.sh@184 -- # run_test accel_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:06:12.534 22:22:36 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:12.534 22:22:36 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:12.534 22:22:36 -- common/autotest_common.sh@10 -- # set +x 00:06:12.794 ************************************ 00:06:12.794 START TEST accel_rpc 00:06:12.794 ************************************ 00:06:12.794 22:22:36 accel_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:06:12.794 * Looking for test storage... 00:06:12.794 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:06:12.794 22:22:36 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:12.794 22:22:36 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=4034735 00:06:12.794 22:22:36 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 4034735 00:06:12.794 22:22:36 accel_rpc -- common/autotest_common.sh@829 -- # '[' -z 4034735 ']' 00:06:12.794 22:22:36 accel_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:12.794 22:22:36 accel_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:12.794 22:22:36 accel_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:12.794 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:12.794 22:22:36 accel_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:12.794 22:22:36 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:12.794 22:22:36 accel_rpc -- accel/accel_rpc.sh@13 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc 00:06:12.794 [2024-07-15 22:22:36.660341] Starting SPDK v24.09-pre git sha1 f8598a71f / DPDK 24.03.0 initialization... 
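The accel_rpc suite below drives a bare spdk_tgt started with --wait-for-rpc: opcode-to-module assignments are only accepted before framework_start_init, which is exactly the window the test exercises. Reconstructed as a shell sketch from the trace that follows (the real test first polls the RPC socket with waitforlisten):

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    "$SPDK/build/bin/spdk_tgt" --wait-for-rpc &   # RPC server up, subsystems not yet initialized
    # a bogus module name is accepted at this stage (see the NOTICE below);
    # the second call overrides it with the software module before init
    "$SPDK/scripts/rpc.py" accel_assign_opc -o copy -m incorrect
    "$SPDK/scripts/rpc.py" accel_assign_opc -o copy -m software
    "$SPDK/scripts/rpc.py" framework_start_init
    "$SPDK/scripts/rpc.py" accel_get_opc_assignments | jq -r .copy   # prints: software
    kill %1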
00:06:12.794 [2024-07-15 22:22:36.660387] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4034735 ] 00:06:12.794 EAL: No free 2048 kB hugepages reported on node 1 00:06:12.794 [2024-07-15 22:22:36.715558] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:13.053 [2024-07-15 22:22:36.795555] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:13.622 22:22:37 accel_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:13.622 22:22:37 accel_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:13.622 22:22:37 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:06:13.622 22:22:37 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:06:13.622 22:22:37 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:06:13.622 22:22:37 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:06:13.622 22:22:37 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:06:13.622 22:22:37 accel_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:13.622 22:22:37 accel_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:13.622 22:22:37 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:13.622 ************************************ 00:06:13.622 START TEST accel_assign_opcode 00:06:13.622 ************************************ 00:06:13.622 22:22:37 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1123 -- # accel_assign_opcode_test_suite 00:06:13.622 22:22:37 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:06:13.622 22:22:37 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:13.622 22:22:37 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:13.622 [2024-07-15 22:22:37.489622] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:06:13.622 22:22:37 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:13.622 22:22:37 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:06:13.622 22:22:37 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:13.622 22:22:37 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:13.622 [2024-07-15 22:22:37.497636] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:06:13.622 22:22:37 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:13.622 22:22:37 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:06:13.622 22:22:37 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:13.622 22:22:37 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:13.882 22:22:37 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:13.882 22:22:37 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:06:13.882 22:22:37 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:06:13.882 22:22:37 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 
00:06:13.882 22:22:37 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software 00:06:13.882 22:22:37 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:13.882 22:22:37 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:13.882 software 00:06:13.882 00:06:13.882 real 0m0.236s 00:06:13.882 user 0m0.045s 00:06:13.882 sys 0m0.009s 00:06:13.882 22:22:37 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:13.882 22:22:37 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:13.882 ************************************ 00:06:13.882 END TEST accel_assign_opcode 00:06:13.882 ************************************ 00:06:13.882 22:22:37 accel_rpc -- common/autotest_common.sh@1142 -- # return 0 00:06:13.882 22:22:37 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 4034735 00:06:13.882 22:22:37 accel_rpc -- common/autotest_common.sh@948 -- # '[' -z 4034735 ']' 00:06:13.882 22:22:37 accel_rpc -- common/autotest_common.sh@952 -- # kill -0 4034735 00:06:13.882 22:22:37 accel_rpc -- common/autotest_common.sh@953 -- # uname 00:06:13.882 22:22:37 accel_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:13.882 22:22:37 accel_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4034735 00:06:13.882 22:22:37 accel_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:13.882 22:22:37 accel_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:13.882 22:22:37 accel_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4034735' 00:06:13.882 killing process with pid 4034735 00:06:13.882 22:22:37 accel_rpc -- common/autotest_common.sh@967 -- # kill 4034735 00:06:13.882 22:22:37 accel_rpc -- common/autotest_common.sh@972 -- # wait 4034735 00:06:14.141 00:06:14.141 real 0m1.582s 00:06:14.141 user 0m1.667s 00:06:14.141 sys 0m0.406s 00:06:14.141 22:22:38 accel_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:14.141 22:22:38 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:14.141 ************************************ 00:06:14.141 END TEST accel_rpc 00:06:14.141 ************************************ 00:06:14.405 22:22:38 -- common/autotest_common.sh@1142 -- # return 0 00:06:14.405 22:22:38 -- spdk/autotest.sh@185 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:06:14.405 22:22:38 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:14.405 22:22:38 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:14.405 22:22:38 -- common/autotest_common.sh@10 -- # set +x 00:06:14.405 ************************************ 00:06:14.405 START TEST app_cmdline 00:06:14.405 ************************************ 00:06:14.405 22:22:38 app_cmdline -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:06:14.405 * Looking for test storage... 
00:06:14.405 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:14.405 22:22:38 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:14.405 22:22:38 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=4035041 00:06:14.405 22:22:38 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 4035041 00:06:14.405 22:22:38 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:14.405 22:22:38 app_cmdline -- common/autotest_common.sh@829 -- # '[' -z 4035041 ']' 00:06:14.405 22:22:38 app_cmdline -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:14.405 22:22:38 app_cmdline -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:14.405 22:22:38 app_cmdline -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:14.405 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:14.405 22:22:38 app_cmdline -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:14.405 22:22:38 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:14.405 [2024-07-15 22:22:38.315528] Starting SPDK v24.09-pre git sha1 f8598a71f / DPDK 24.03.0 initialization... 00:06:14.405 [2024-07-15 22:22:38.315571] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4035041 ] 00:06:14.405 EAL: No free 2048 kB hugepages reported on node 1 00:06:14.405 [2024-07-15 22:22:38.369486] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:14.666 [2024-07-15 22:22:38.449575] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.235 22:22:39 app_cmdline -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:15.235 22:22:39 app_cmdline -- common/autotest_common.sh@862 -- # return 0 00:06:15.235 22:22:39 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:06:15.494 { 00:06:15.494 "version": "SPDK v24.09-pre git sha1 f8598a71f", 00:06:15.494 "fields": { 00:06:15.494 "major": 24, 00:06:15.494 "minor": 9, 00:06:15.494 "patch": 0, 00:06:15.494 "suffix": "-pre", 00:06:15.494 "commit": "f8598a71f" 00:06:15.494 } 00:06:15.494 } 00:06:15.494 22:22:39 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:06:15.494 22:22:39 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:15.494 22:22:39 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:15.494 22:22:39 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:15.494 22:22:39 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:15.494 22:22:39 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:15.494 22:22:39 app_cmdline -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:15.494 22:22:39 app_cmdline -- app/cmdline.sh@26 -- # sort 00:06:15.494 22:22:39 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:15.494 22:22:39 app_cmdline -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:15.494 22:22:39 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:15.494 22:22:39 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods 
spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:15.494 22:22:39 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:15.494 22:22:39 app_cmdline -- common/autotest_common.sh@648 -- # local es=0 00:06:15.494 22:22:39 app_cmdline -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:15.494 22:22:39 app_cmdline -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:15.494 22:22:39 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:15.495 22:22:39 app_cmdline -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:15.495 22:22:39 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:15.495 22:22:39 app_cmdline -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:15.495 22:22:39 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:15.495 22:22:39 app_cmdline -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:15.495 22:22:39 app_cmdline -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:06:15.495 22:22:39 app_cmdline -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:15.754 request: 00:06:15.754 { 00:06:15.754 "method": "env_dpdk_get_mem_stats", 00:06:15.754 "req_id": 1 00:06:15.754 } 00:06:15.754 Got JSON-RPC error response 00:06:15.754 response: 00:06:15.754 { 00:06:15.754 "code": -32601, 00:06:15.754 "message": "Method not found" 00:06:15.754 } 00:06:15.754 22:22:39 app_cmdline -- common/autotest_common.sh@651 -- # es=1 00:06:15.754 22:22:39 app_cmdline -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:15.754 22:22:39 app_cmdline -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:15.754 22:22:39 app_cmdline -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:15.754 22:22:39 app_cmdline -- app/cmdline.sh@1 -- # killprocess 4035041 00:06:15.754 22:22:39 app_cmdline -- common/autotest_common.sh@948 -- # '[' -z 4035041 ']' 00:06:15.754 22:22:39 app_cmdline -- common/autotest_common.sh@952 -- # kill -0 4035041 00:06:15.754 22:22:39 app_cmdline -- common/autotest_common.sh@953 -- # uname 00:06:15.754 22:22:39 app_cmdline -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:15.754 22:22:39 app_cmdline -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4035041 00:06:15.754 22:22:39 app_cmdline -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:15.754 22:22:39 app_cmdline -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:15.754 22:22:39 app_cmdline -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4035041' 00:06:15.754 killing process with pid 4035041 00:06:15.754 22:22:39 app_cmdline -- common/autotest_common.sh@967 -- # kill 4035041 00:06:15.754 22:22:39 app_cmdline -- common/autotest_common.sh@972 -- # wait 4035041 00:06:16.013 00:06:16.013 real 0m1.680s 00:06:16.013 user 0m2.003s 00:06:16.013 sys 0m0.436s 00:06:16.013 22:22:39 app_cmdline -- common/autotest_common.sh@1124 -- # xtrace_disable 
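What the cmdline test above is verifying: spdk_tgt was started with --rpcs-allowed spdk_get_version,rpc_get_methods, so exactly those two methods answer and every other method is rejected with JSON-RPC error -32601, as the request/response pair in the trace shows. A condensed sketch under the same workspace paths:

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    $SPDK/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods &
    sleep 2   # crude stand-in for the test's waitforlisten

    # On the allow-list: returns the version object logged above.
    $SPDK/scripts/rpc.py spdk_get_version | jq -r .version

    # Off the allow-list: fails with -32601 "Method not found" even though the
    # target implements the method, matching the error response in the log.
    $SPDK/scripts/rpc.py env_dpdk_get_mem_stats || echo 'rejected as expected'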
00:06:16.013 22:22:39 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:16.013 ************************************ 00:06:16.013 END TEST app_cmdline 00:06:16.013 ************************************ 00:06:16.013 22:22:39 -- common/autotest_common.sh@1142 -- # return 0 00:06:16.013 22:22:39 -- spdk/autotest.sh@186 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:06:16.013 22:22:39 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:16.013 22:22:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:16.013 22:22:39 -- common/autotest_common.sh@10 -- # set +x 00:06:16.013 ************************************ 00:06:16.013 START TEST version 00:06:16.013 ************************************ 00:06:16.013 22:22:39 version -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:06:16.273 * Looking for test storage... 00:06:16.273 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:16.273 22:22:40 version -- app/version.sh@17 -- # get_header_version major 00:06:16.273 22:22:40 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:16.273 22:22:40 version -- app/version.sh@14 -- # tr -d '"' 00:06:16.273 22:22:40 version -- app/version.sh@14 -- # cut -f2 00:06:16.273 22:22:40 version -- app/version.sh@17 -- # major=24 00:06:16.273 22:22:40 version -- app/version.sh@18 -- # get_header_version minor 00:06:16.273 22:22:40 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:16.273 22:22:40 version -- app/version.sh@14 -- # cut -f2 00:06:16.273 22:22:40 version -- app/version.sh@14 -- # tr -d '"' 00:06:16.273 22:22:40 version -- app/version.sh@18 -- # minor=9 00:06:16.273 22:22:40 version -- app/version.sh@19 -- # get_header_version patch 00:06:16.273 22:22:40 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:16.273 22:22:40 version -- app/version.sh@14 -- # cut -f2 00:06:16.273 22:22:40 version -- app/version.sh@14 -- # tr -d '"' 00:06:16.273 22:22:40 version -- app/version.sh@19 -- # patch=0 00:06:16.273 22:22:40 version -- app/version.sh@20 -- # get_header_version suffix 00:06:16.273 22:22:40 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:16.273 22:22:40 version -- app/version.sh@14 -- # cut -f2 00:06:16.273 22:22:40 version -- app/version.sh@14 -- # tr -d '"' 00:06:16.273 22:22:40 version -- app/version.sh@20 -- # suffix=-pre 00:06:16.273 22:22:40 version -- app/version.sh@22 -- # version=24.9 00:06:16.274 22:22:40 version -- app/version.sh@25 -- # (( patch != 0 )) 00:06:16.274 22:22:40 version -- app/version.sh@28 -- # version=24.9rc0 00:06:16.274 22:22:40 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:06:16.274 22:22:40 version -- app/version.sh@30 -- # python3 -c 'import spdk; 
print(spdk.__version__)' 00:06:16.274 22:22:40 version -- app/version.sh@30 -- # py_version=24.9rc0 00:06:16.274 22:22:40 version -- app/version.sh@31 -- # [[ 24.9rc0 == \2\4\.\9\r\c\0 ]] 00:06:16.274 00:06:16.274 real 0m0.158s 00:06:16.274 user 0m0.076s 00:06:16.274 sys 0m0.118s 00:06:16.274 22:22:40 version -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:16.274 22:22:40 version -- common/autotest_common.sh@10 -- # set +x 00:06:16.274 ************************************ 00:06:16.274 END TEST version 00:06:16.274 ************************************ 00:06:16.274 22:22:40 -- common/autotest_common.sh@1142 -- # return 0 00:06:16.274 22:22:40 -- spdk/autotest.sh@188 -- # '[' 0 -eq 1 ']' 00:06:16.274 22:22:40 -- spdk/autotest.sh@198 -- # uname -s 00:06:16.274 22:22:40 -- spdk/autotest.sh@198 -- # [[ Linux == Linux ]] 00:06:16.274 22:22:40 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:06:16.274 22:22:40 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:06:16.274 22:22:40 -- spdk/autotest.sh@211 -- # '[' 0 -eq 1 ']' 00:06:16.274 22:22:40 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:06:16.274 22:22:40 -- spdk/autotest.sh@260 -- # timing_exit lib 00:06:16.274 22:22:40 -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:16.274 22:22:40 -- common/autotest_common.sh@10 -- # set +x 00:06:16.274 22:22:40 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:06:16.274 22:22:40 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:06:16.274 22:22:40 -- spdk/autotest.sh@279 -- # '[' 1 -eq 1 ']' 00:06:16.274 22:22:40 -- spdk/autotest.sh@280 -- # export NET_TYPE 00:06:16.274 22:22:40 -- spdk/autotest.sh@283 -- # '[' tcp = rdma ']' 00:06:16.274 22:22:40 -- spdk/autotest.sh@286 -- # '[' tcp = tcp ']' 00:06:16.274 22:22:40 -- spdk/autotest.sh@287 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:16.274 22:22:40 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:16.274 22:22:40 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:16.274 22:22:40 -- common/autotest_common.sh@10 -- # set +x 00:06:16.274 ************************************ 00:06:16.274 START TEST nvmf_tcp 00:06:16.274 ************************************ 00:06:16.274 22:22:40 nvmf_tcp -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:16.534 * Looking for test storage... 00:06:16.534 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:06:16.534 22:22:40 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:06:16.534 22:22:40 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:06:16.534 22:22:40 nvmf_tcp -- nvmf/nvmf.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:16.534 22:22:40 nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:06:16.534 22:22:40 nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:16.534 22:22:40 nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:16.534 22:22:40 nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:16.534 22:22:40 nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:16.534 22:22:40 nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:16.534 22:22:40 nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:16.534 22:22:40 nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:16.534 22:22:40 nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:16.534 22:22:40 nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:16.534 22:22:40 nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:16.534 22:22:40 nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:06:16.534 22:22:40 nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:06:16.534 22:22:40 nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:16.534 22:22:40 nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:16.534 22:22:40 nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:16.534 22:22:40 nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:16.534 22:22:40 nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:16.534 22:22:40 nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:16.534 22:22:40 nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:16.534 22:22:40 nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:16.534 22:22:40 nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:16.534 22:22:40 nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:16.534 22:22:40 nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:16.534 22:22:40 nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:06:16.534 22:22:40 nvmf_tcp -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:16.534 22:22:40 nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:06:16.534 22:22:40 nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:16.534 22:22:40 nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:16.534 22:22:40 nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:16.534 22:22:40 nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:16.534 22:22:40 nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:16.534 22:22:40 nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:16.534 22:22:40 nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:16.534 22:22:40 nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:16.534 22:22:40 nvmf_tcp -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:06:16.534 22:22:40 nvmf_tcp -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:06:16.534 22:22:40 nvmf_tcp -- nvmf/nvmf.sh@20 -- # timing_enter target 00:06:16.534 22:22:40 nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:16.534 22:22:40 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:16.534 22:22:40 nvmf_tcp -- nvmf/nvmf.sh@22 -- # [[ 0 -eq 0 ]] 00:06:16.534 22:22:40 nvmf_tcp -- nvmf/nvmf.sh@23 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:06:16.534 22:22:40 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:16.534 22:22:40 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:16.534 22:22:40 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:16.534 ************************************ 00:06:16.534 START TEST nvmf_example 00:06:16.534 ************************************ 00:06:16.534 22:22:40 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:06:16.534 * Looking for test storage... 
00:06:16.534 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:16.534 22:22:40 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:16.534 22:22:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:06:16.534 22:22:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:16.534 22:22:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:16.534 22:22:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:16.534 22:22:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:16.534 22:22:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:16.534 22:22:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:16.534 22:22:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:16.534 22:22:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:16.534 22:22:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:16.534 22:22:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:16.534 22:22:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:06:16.534 22:22:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:06:16.534 22:22:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:16.534 22:22:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:16.534 22:22:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:16.534 22:22:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:16.534 22:22:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:16.534 22:22:40 nvmf_tcp.nvmf_example -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:16.534 22:22:40 nvmf_tcp.nvmf_example -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:16.534 22:22:40 nvmf_tcp.nvmf_example -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:16.534 22:22:40 nvmf_tcp.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:16.534 22:22:40 nvmf_tcp.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:16.534 22:22:40 nvmf_tcp.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:16.534 22:22:40 nvmf_tcp.nvmf_example -- paths/export.sh@5 -- # export PATH 00:06:16.534 22:22:40 nvmf_tcp.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:16.534 22:22:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@47 -- # : 0 00:06:16.534 22:22:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:16.534 22:22:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:16.534 22:22:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:16.534 22:22:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:16.534 22:22:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:16.534 22:22:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:16.534 22:22:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:16.534 22:22:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:16.534 22:22:40 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:06:16.534 22:22:40 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:06:16.534 22:22:40 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:06:16.534 22:22:40 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:06:16.534 22:22:40 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:06:16.534 22:22:40 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:06:16.534 22:22:40 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:06:16.534 22:22:40 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:06:16.534 22:22:40 nvmf_tcp.nvmf_example -- 
common/autotest_common.sh@722 -- # xtrace_disable 00:06:16.534 22:22:40 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:16.534 22:22:40 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:06:16.534 22:22:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:06:16.535 22:22:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:16.535 22:22:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@448 -- # prepare_net_devs 00:06:16.535 22:22:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@410 -- # local -g is_hw=no 00:06:16.535 22:22:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@412 -- # remove_spdk_ns 00:06:16.535 22:22:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:16.535 22:22:40 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:16.535 22:22:40 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:16.535 22:22:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:06:16.535 22:22:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:06:16.535 22:22:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@285 -- # xtrace_disable 00:06:16.535 22:22:40 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:21.806 22:22:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:21.806 22:22:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@291 -- # pci_devs=() 00:06:21.806 22:22:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@291 -- # local -a pci_devs 00:06:21.806 22:22:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@292 -- # pci_net_devs=() 00:06:21.806 22:22:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:06:21.806 22:22:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@293 -- # pci_drivers=() 00:06:21.806 22:22:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@293 -- # local -A pci_drivers 00:06:21.806 22:22:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@295 -- # net_devs=() 00:06:21.806 22:22:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@295 -- # local -ga net_devs 00:06:21.806 22:22:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@296 -- # e810=() 00:06:21.806 22:22:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@296 -- # local -ga e810 00:06:21.806 22:22:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@297 -- # x722=() 00:06:21.806 22:22:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@297 -- # local -ga x722 00:06:21.806 22:22:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@298 -- # mlx=() 00:06:21.806 22:22:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@298 -- # local -ga mlx 00:06:21.806 22:22:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:21.806 22:22:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:21.806 22:22:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:21.806 22:22:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:21.806 22:22:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:21.806 22:22:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:21.806 22:22:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:21.806 22:22:45 nvmf_tcp.nvmf_example -- 
nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:21.806 22:22:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:21.806 22:22:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:21.806 22:22:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:21.806 22:22:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:06:21.806 22:22:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:06:21.806 22:22:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:06:21.806 22:22:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:06:21.806 22:22:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:06:21.806 22:22:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:06:21.806 22:22:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:21.806 22:22:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:06:21.806 Found 0000:86:00.0 (0x8086 - 0x159b) 00:06:21.806 22:22:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:21.806 22:22:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:21.806 22:22:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:21.806 22:22:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:21.806 22:22:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:21.806 22:22:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:21.806 22:22:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:06:21.806 Found 0000:86:00.1 (0x8086 - 0x159b) 00:06:21.806 22:22:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:21.806 22:22:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:21.806 22:22:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:21.806 22:22:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:21.806 22:22:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:21.806 22:22:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:06:21.806 22:22:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:06:21.806 22:22:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:06:21.806 22:22:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:21.806 22:22:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:21.806 22:22:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:21.806 22:22:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:21.806 22:22:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:21.806 22:22:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:21.806 22:22:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:21.806 22:22:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:06:21.806 Found net devices under 
0000:86:00.0: cvl_0_0 00:06:21.806 22:22:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:21.806 22:22:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:21.806 22:22:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:21.806 22:22:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:21.806 22:22:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:21.806 22:22:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:21.806 22:22:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:21.806 22:22:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:21.806 22:22:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:06:21.806 Found net devices under 0000:86:00.1: cvl_0_1 00:06:21.806 22:22:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:21.807 22:22:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:06:21.807 22:22:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # is_hw=yes 00:06:21.807 22:22:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:06:21.807 22:22:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:06:21.807 22:22:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:06:21.807 22:22:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:21.807 22:22:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:21.807 22:22:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:21.807 22:22:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:06:21.807 22:22:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:21.807 22:22:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:21.807 22:22:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:06:21.807 22:22:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:21.807 22:22:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:21.807 22:22:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:06:21.807 22:22:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:06:21.807 22:22:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:06:21.807 22:22:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:21.807 22:22:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:21.807 22:22:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:21.807 22:22:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:06:21.807 22:22:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:21.807 22:22:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:21.807 22:22:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 
-p tcp --dport 4420 -j ACCEPT 00:06:21.807 22:22:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:06:21.807 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:21.807 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.302 ms 00:06:21.807 00:06:21.807 --- 10.0.0.2 ping statistics --- 00:06:21.807 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:21.807 rtt min/avg/max/mdev = 0.302/0.302/0.302/0.000 ms 00:06:21.807 22:22:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:21.807 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:21.807 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.171 ms 00:06:21.807 00:06:21.807 --- 10.0.0.1 ping statistics --- 00:06:21.807 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:21.807 rtt min/avg/max/mdev = 0.171/0.171/0.171/0.000 ms 00:06:21.807 22:22:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:21.807 22:22:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@422 -- # return 0 00:06:21.807 22:22:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:06:21.807 22:22:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:21.807 22:22:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:06:21.807 22:22:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:06:21.807 22:22:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:21.807 22:22:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:06:21.807 22:22:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:06:21.807 22:22:45 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:06:21.807 22:22:45 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:06:21.807 22:22:45 nvmf_tcp.nvmf_example -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:21.807 22:22:45 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:21.807 22:22:45 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:06:21.807 22:22:45 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:06:21.807 22:22:45 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=4038653 00:06:21.807 22:22:45 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:06:21.807 22:22:45 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:06:21.807 22:22:45 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 4038653 00:06:21.807 22:22:45 nvmf_tcp.nvmf_example -- common/autotest_common.sh@829 -- # '[' -z 4038653 ']' 00:06:21.807 22:22:45 nvmf_tcp.nvmf_example -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:21.807 22:22:45 nvmf_tcp.nvmf_example -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:21.807 22:22:45 nvmf_tcp.nvmf_example -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:21.807 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
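The nvmf_tcp_init sequence traced above splits the host's two e810 ports into an initiator/target pair: the target port is moved into a fresh network namespace while the initiator port stays in the root namespace, and a firewall rule opens the NVMe/TCP port between them. Condensed, with the device names and addresses from this run:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk             # target port into the netns
    ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator side, root netns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # NVMe/TCP listener port
    ping -c 1 10.0.0.2    # initiator -> target; answered in ~0.3 ms in the log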
00:06:21.807 22:22:45 nvmf_tcp.nvmf_example -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:21.807 22:22:45 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:22.067 EAL: No free 2048 kB hugepages reported on node 1 00:06:22.638 22:22:46 nvmf_tcp.nvmf_example -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:22.638 22:22:46 nvmf_tcp.nvmf_example -- common/autotest_common.sh@862 -- # return 0 00:06:22.638 22:22:46 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:06:22.638 22:22:46 nvmf_tcp.nvmf_example -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:22.638 22:22:46 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:22.923 22:22:46 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:22.923 22:22:46 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:22.923 22:22:46 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:22.923 22:22:46 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:22.923 22:22:46 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:06:22.923 22:22:46 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:22.923 22:22:46 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:22.923 22:22:46 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:22.923 22:22:46 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:06:22.923 22:22:46 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:22.923 22:22:46 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:22.923 22:22:46 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:22.923 22:22:46 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:22.923 22:22:46 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:06:22.923 22:22:46 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:06:22.923 22:22:46 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:22.923 22:22:46 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:22.923 22:22:46 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:22.923 22:22:46 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:22.923 22:22:46 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:22.923 22:22:46 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:22.923 22:22:46 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:22.923 22:22:46 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:06:22.923 22:22:46 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:06:22.923 EAL: No free 2048 kB hugepages reported on node 1 
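With the example target app up, the rpc_cmd calls traced above amount to the standard bring-up of an NVMe-oF/TCP subsystem backed by a RAM disk, after which spdk_nvme_perf generates the load summarized in the latency table below. Spelled out as a sketch (rpc.py reaches the app over /var/tmp/spdk.sock; every option is copied from the trace):

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $RPC nvmf_create_transport -t tcp -o -u 8192     # TCP transport, options as traced
    $RPC bdev_malloc_create 64 512                   # 64 MiB, 512 B blocks -> "Malloc0"
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    # Initiator side: queue depth 64, 4 KiB mixed random I/O for 10 seconds.
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf \
        -q 64 -o 4096 -w randrw -M 30 -t 10 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'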
00:06:32.897 Initializing NVMe Controllers 00:06:32.897 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:06:32.897 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:06:32.897 Initialization complete. Launching workers. 00:06:32.897 ======================================================== 00:06:32.897 Latency(us) 00:06:32.897 Device Information : IOPS MiB/s Average min max 00:06:32.897 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 17854.60 69.74 3584.37 713.02 15843.92 00:06:32.897 ======================================================== 00:06:32.897 Total : 17854.60 69.74 3584.37 713.02 15843.92 00:06:32.897 00:06:32.897 22:22:56 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:06:32.897 22:22:56 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:06:32.897 22:22:56 nvmf_tcp.nvmf_example -- nvmf/common.sh@488 -- # nvmfcleanup 00:06:32.897 22:22:56 nvmf_tcp.nvmf_example -- nvmf/common.sh@117 -- # sync 00:06:32.897 22:22:56 nvmf_tcp.nvmf_example -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:06:32.897 22:22:56 nvmf_tcp.nvmf_example -- nvmf/common.sh@120 -- # set +e 00:06:32.897 22:22:56 nvmf_tcp.nvmf_example -- nvmf/common.sh@121 -- # for i in {1..20} 00:06:32.897 22:22:56 nvmf_tcp.nvmf_example -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:06:32.897 rmmod nvme_tcp 00:06:33.157 rmmod nvme_fabrics 00:06:33.157 rmmod nvme_keyring 00:06:33.157 22:22:56 nvmf_tcp.nvmf_example -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:06:33.157 22:22:56 nvmf_tcp.nvmf_example -- nvmf/common.sh@124 -- # set -e 00:06:33.157 22:22:56 nvmf_tcp.nvmf_example -- nvmf/common.sh@125 -- # return 0 00:06:33.157 22:22:56 nvmf_tcp.nvmf_example -- nvmf/common.sh@489 -- # '[' -n 4038653 ']' 00:06:33.157 22:22:56 nvmf_tcp.nvmf_example -- nvmf/common.sh@490 -- # killprocess 4038653 00:06:33.157 22:22:56 nvmf_tcp.nvmf_example -- common/autotest_common.sh@948 -- # '[' -z 4038653 ']' 00:06:33.157 22:22:56 nvmf_tcp.nvmf_example -- common/autotest_common.sh@952 -- # kill -0 4038653 00:06:33.157 22:22:56 nvmf_tcp.nvmf_example -- common/autotest_common.sh@953 -- # uname 00:06:33.157 22:22:56 nvmf_tcp.nvmf_example -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:33.157 22:22:56 nvmf_tcp.nvmf_example -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4038653 00:06:33.157 22:22:56 nvmf_tcp.nvmf_example -- common/autotest_common.sh@954 -- # process_name=nvmf 00:06:33.157 22:22:56 nvmf_tcp.nvmf_example -- common/autotest_common.sh@958 -- # '[' nvmf = sudo ']' 00:06:33.157 22:22:56 nvmf_tcp.nvmf_example -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4038653' 00:06:33.157 killing process with pid 4038653 00:06:33.157 22:22:56 nvmf_tcp.nvmf_example -- common/autotest_common.sh@967 -- # kill 4038653 00:06:33.157 22:22:56 nvmf_tcp.nvmf_example -- common/autotest_common.sh@972 -- # wait 4038653 00:06:33.157 nvmf threads initialize successfully 00:06:33.157 bdev subsystem init successfully 00:06:33.157 created a nvmf target service 00:06:33.157 create targets's poll groups done 00:06:33.157 all subsystems of target started 00:06:33.157 nvmf target is running 00:06:33.157 all subsystems of target stopped 00:06:33.157 destroy targets's poll groups done 00:06:33.157 destroyed the nvmf target service 00:06:33.157 bdev subsystem finish successfully 00:06:33.157 nvmf threads destroy successfully 00:06:33.417 22:22:57 
nvmf_tcp.nvmf_example -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:06:33.417 22:22:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:06:33.417 22:22:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:06:33.417 22:22:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:06:33.417 22:22:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@278 -- # remove_spdk_ns 00:06:33.417 22:22:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:33.417 22:22:57 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:33.417 22:22:57 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:35.323 22:22:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:06:35.323 22:22:59 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:06:35.323 22:22:59 nvmf_tcp.nvmf_example -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:35.323 22:22:59 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:35.323 00:06:35.323 real 0m18.901s 00:06:35.323 user 0m45.371s 00:06:35.323 sys 0m5.365s 00:06:35.323 22:22:59 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:35.323 22:22:59 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:35.323 ************************************ 00:06:35.323 END TEST nvmf_example 00:06:35.323 ************************************ 00:06:35.323 22:22:59 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:06:35.323 22:22:59 nvmf_tcp -- nvmf/nvmf.sh@24 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:06:35.323 22:22:59 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:35.323 22:22:59 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:35.323 22:22:59 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:35.585 ************************************ 00:06:35.585 START TEST nvmf_filesystem 00:06:35.585 ************************************ 00:06:35.585 22:22:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:06:35.585 * Looking for test storage... 
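The nvmf_example teardown traced just above unwinds the setup in reverse: the kernel initiator modules are unloaded, the target process is killed, and the namespace plumbing is removed before the next test's storage probe completes below. A sketch of the same steps (the netns removal is an assumption about what _remove_spdk_ns does on this host):

    modprobe -v -r nvme-tcp          # triggers the rmmod nvme_tcp/nvme_fabrics/nvme_keyring lines above
    modprobe -v -r nvme-fabrics
    ip netns delete cvl_0_0_ns_spdk  # assumed equivalent of _remove_spdk_ns here
    ip -4 addr flush cvl_0_1         # leave the initiator port unconfigured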
00:06:35.585 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:35.585 22:22:59 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:06:35.585 22:22:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:06:35.585 22:22:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:06:35.585 22:22:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:06:35.585 22:22:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:06:35.585 22:22:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:06:35.585 22:22:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:06:35.585 22:22:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:06:35.585 22:22:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:06:35.585 22:22:59 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:06:35.585 22:22:59 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:06:35.585 22:22:59 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:06:35.585 22:22:59 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:06:35.585 22:22:59 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:06:35.585 22:22:59 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:06:35.585 22:22:59 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:06:35.585 22:22:59 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:06:35.585 22:22:59 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:06:35.585 22:22:59 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:06:35.585 22:22:59 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:06:35.585 22:22:59 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:06:35.585 22:22:59 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:06:35.585 22:22:59 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:06:35.585 22:22:59 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:06:35.585 22:22:59 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:06:35.585 22:22:59 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:06:35.585 22:22:59 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:06:35.585 22:22:59 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:06:35.585 22:22:59 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:06:35.585 22:22:59 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:06:35.585 22:22:59 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_CET=n 00:06:35.585 22:22:59 
nvmf_tcp.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:06:35.585 22:22:59 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:06:35.585 22:22:59 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:06:35.585 22:22:59 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:06:35.585 22:22:59 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:06:35.585 22:22:59 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:06:35.585 22:22:59 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:06:35.585 22:22:59 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:06:35.585 22:22:59 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:06:35.585 22:22:59 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:06:35.585 22:22:59 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:06:35.585 22:22:59 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:06:35.585 22:22:59 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:06:35.585 22:22:59 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:06:35.585 22:22:59 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:06:35.585 22:22:59 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:06:35.585 22:22:59 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:06:35.585 22:22:59 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:06:35.585 22:22:59 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:06:35.585 22:22:59 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:06:35.585 22:22:59 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:06:35.585 22:22:59 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:06:35.585 22:22:59 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:06:35.585 22:22:59 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:06:35.585 22:22:59 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:06:35.585 22:22:59 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:06:35.585 22:22:59 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:06:35.585 22:22:59 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:06:35.585 22:22:59 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:06:35.585 22:22:59 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=y 00:06:35.585 22:22:59 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:06:35.585 22:22:59 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:06:35.585 22:22:59 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:06:35.585 22:22:59 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:06:35.585 22:22:59 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 
00:06:35.585 22:22:59 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:06:35.585 22:22:59 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:06:35.585 22:22:59 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:06:35.585 22:22:59 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:06:35.585 22:22:59 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:06:35.585 22:22:59 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR= 00:06:35.585 22:22:59 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:06:35.585 22:22:59 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:06:35.585 22:22:59 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_SHARED=y 00:06:35.585 22:22:59 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:06:35.585 22:22:59 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:06:35.585 22:22:59 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:06:35.585 22:22:59 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_FC=n 00:06:35.585 22:22:59 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:06:35.585 22:22:59 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:06:35.585 22:22:59 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:06:35.585 22:22:59 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:06:35.585 22:22:59 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:06:35.585 22:22:59 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:06:35.585 22:22:59 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES=128 00:06:35.585 22:22:59 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:06:35.585 22:22:59 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:06:35.585 22:22:59 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:06:35.585 22:22:59 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:06:35.585 22:22:59 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:06:35.585 22:22:59 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_URING=n 00:06:35.585 22:22:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:06:35.585 22:22:59 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:06:35.585 22:22:59 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:06:35.585 22:22:59 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:06:35.585 22:22:59 nvmf_tcp.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:06:35.585 22:22:59 nvmf_tcp.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:06:35.585 22:22:59 nvmf_tcp.nvmf_filesystem -- 
common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:35.585 22:22:59 nvmf_tcp.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:06:35.585 22:22:59 nvmf_tcp.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:06:35.585 22:22:59 nvmf_tcp.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:06:35.585 22:22:59 nvmf_tcp.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:06:35.585 22:22:59 nvmf_tcp.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:06:35.585 22:22:59 nvmf_tcp.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:06:35.585 22:22:59 nvmf_tcp.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:06:35.585 22:22:59 nvmf_tcp.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:06:35.585 22:22:59 nvmf_tcp.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:06:35.585 #define SPDK_CONFIG_H 00:06:35.585 #define SPDK_CONFIG_APPS 1 00:06:35.585 #define SPDK_CONFIG_ARCH native 00:06:35.585 #undef SPDK_CONFIG_ASAN 00:06:35.585 #undef SPDK_CONFIG_AVAHI 00:06:35.585 #undef SPDK_CONFIG_CET 00:06:35.585 #define SPDK_CONFIG_COVERAGE 1 00:06:35.585 #define SPDK_CONFIG_CROSS_PREFIX 00:06:35.585 #undef SPDK_CONFIG_CRYPTO 00:06:35.585 #undef SPDK_CONFIG_CRYPTO_MLX5 00:06:35.585 #undef SPDK_CONFIG_CUSTOMOCF 00:06:35.585 #undef SPDK_CONFIG_DAOS 00:06:35.585 #define SPDK_CONFIG_DAOS_DIR 00:06:35.585 #define SPDK_CONFIG_DEBUG 1 00:06:35.585 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:06:35.585 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:06:35.586 #define SPDK_CONFIG_DPDK_INC_DIR 00:06:35.586 #define SPDK_CONFIG_DPDK_LIB_DIR 00:06:35.586 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:06:35.586 #undef SPDK_CONFIG_DPDK_UADK 00:06:35.586 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:06:35.586 #define SPDK_CONFIG_EXAMPLES 1 00:06:35.586 #undef SPDK_CONFIG_FC 00:06:35.586 #define SPDK_CONFIG_FC_PATH 00:06:35.586 #define SPDK_CONFIG_FIO_PLUGIN 1 00:06:35.586 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:06:35.586 #undef SPDK_CONFIG_FUSE 00:06:35.586 #undef SPDK_CONFIG_FUZZER 00:06:35.586 #define SPDK_CONFIG_FUZZER_LIB 00:06:35.586 #undef SPDK_CONFIG_GOLANG 00:06:35.586 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:06:35.586 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:06:35.586 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:06:35.586 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:06:35.586 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:06:35.586 #undef SPDK_CONFIG_HAVE_LIBBSD 00:06:35.586 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:06:35.586 #define SPDK_CONFIG_IDXD 1 00:06:35.586 #define SPDK_CONFIG_IDXD_KERNEL 1 00:06:35.586 #undef SPDK_CONFIG_IPSEC_MB 00:06:35.586 #define SPDK_CONFIG_IPSEC_MB_DIR 00:06:35.586 #define SPDK_CONFIG_ISAL 1 00:06:35.586 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:06:35.586 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:06:35.586 #define SPDK_CONFIG_LIBDIR 00:06:35.586 #undef SPDK_CONFIG_LTO 00:06:35.586 #define SPDK_CONFIG_MAX_LCORES 128 00:06:35.586 #define SPDK_CONFIG_NVME_CUSE 1 00:06:35.586 #undef SPDK_CONFIG_OCF 00:06:35.586 #define SPDK_CONFIG_OCF_PATH 00:06:35.586 #define 
SPDK_CONFIG_OPENSSL_PATH 00:06:35.586 #undef SPDK_CONFIG_PGO_CAPTURE 00:06:35.586 #define SPDK_CONFIG_PGO_DIR 00:06:35.586 #undef SPDK_CONFIG_PGO_USE 00:06:35.586 #define SPDK_CONFIG_PREFIX /usr/local 00:06:35.586 #undef SPDK_CONFIG_RAID5F 00:06:35.586 #undef SPDK_CONFIG_RBD 00:06:35.586 #define SPDK_CONFIG_RDMA 1 00:06:35.586 #define SPDK_CONFIG_RDMA_PROV verbs 00:06:35.586 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:06:35.586 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:06:35.586 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:06:35.586 #define SPDK_CONFIG_SHARED 1 00:06:35.586 #undef SPDK_CONFIG_SMA 00:06:35.586 #define SPDK_CONFIG_TESTS 1 00:06:35.586 #undef SPDK_CONFIG_TSAN 00:06:35.586 #define SPDK_CONFIG_UBLK 1 00:06:35.586 #define SPDK_CONFIG_UBSAN 1 00:06:35.586 #undef SPDK_CONFIG_UNIT_TESTS 00:06:35.586 #undef SPDK_CONFIG_URING 00:06:35.586 #define SPDK_CONFIG_URING_PATH 00:06:35.586 #undef SPDK_CONFIG_URING_ZNS 00:06:35.586 #undef SPDK_CONFIG_USDT 00:06:35.586 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:06:35.586 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:06:35.586 #define SPDK_CONFIG_VFIO_USER 1 00:06:35.586 #define SPDK_CONFIG_VFIO_USER_DIR 00:06:35.586 #define SPDK_CONFIG_VHOST 1 00:06:35.586 #define SPDK_CONFIG_VIRTIO 1 00:06:35.586 #undef SPDK_CONFIG_VTUNE 00:06:35.586 #define SPDK_CONFIG_VTUNE_DIR 00:06:35.586 #define SPDK_CONFIG_WERROR 1 00:06:35.586 #define SPDK_CONFIG_WPDK_DIR 00:06:35.586 #undef SPDK_CONFIG_XNVME 00:06:35.586 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:06:35.586 22:22:59 nvmf_tcp.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:06:35.586 22:22:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:35.586 22:22:59 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:35.586 22:22:59 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:35.586 22:22:59 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:35.586 22:22:59 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:35.586 22:22:59 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:35.586 22:22:59 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:35.586 22:22:59 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:06:35.586 22:22:59 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:35.586 22:22:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:06:35.586 22:22:59 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:06:35.586 22:22:59 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:06:35.586 22:22:59 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:06:35.586 22:22:59 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:06:35.586 22:22:59 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:06:35.586 22:22:59 nvmf_tcp.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:06:35.586 22:22:59 nvmf_tcp.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:06:35.586 22:22:59 nvmf_tcp.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:06:35.586 22:22:59 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # uname -s 00:06:35.586 22:22:59 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:06:35.586 22:22:59 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:06:35.586 22:22:59 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:06:35.586 22:22:59 nvmf_tcp.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:06:35.586 22:22:59 nvmf_tcp.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:06:35.586 22:22:59 nvmf_tcp.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:06:35.586 22:22:59 nvmf_tcp.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:06:35.586 22:22:59 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:06:35.586 22:22:59 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:06:35.586 22:22:59 nvmf_tcp.nvmf_filesystem -- pm/common@78 -- # 
MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:06:35.586 22:22:59 nvmf_tcp.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:06:35.586 22:22:59 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:06:35.586 22:22:59 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:06:35.586 22:22:59 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:06:35.586 22:22:59 nvmf_tcp.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:06:35.586 22:22:59 nvmf_tcp.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:06:35.586 22:22:59 nvmf_tcp.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:06:35.586 22:22:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:06:35.586 22:22:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:06:35.586 22:22:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:06:35.586 22:22:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:06:35.586 22:22:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:06:35.586 22:22:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:06:35.586 22:22:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:06:35.586 22:22:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:06:35.586 22:22:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:06:35.586 22:22:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:06:35.586 22:22:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:06:35.586 22:22:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:06:35.586 22:22:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:06:35.586 22:22:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:06:35.586 22:22:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:06:35.586 22:22:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:06:35.586 22:22:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:06:35.586 22:22:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:06:35.586 22:22:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:06:35.586 22:22:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:06:35.586 22:22:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:06:35.586 22:22:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:06:35.586 22:22:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:06:35.586 22:22:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:06:35.586 22:22:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:06:35.586 22:22:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:06:35.586 22:22:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:06:35.586 22:22:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@87 -- # export 
SPDK_TEST_NVME_CLI 00:06:35.586 22:22:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:06:35.586 22:22:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:06:35.586 22:22:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:06:35.586 22:22:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:06:35.586 22:22:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:06:35.586 22:22:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:06:35.586 22:22:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:06:35.586 22:22:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:06:35.586 22:22:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:06:35.587 22:22:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:06:35.587 22:22:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:06:35.587 22:22:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:06:35.587 22:22:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:06:35.587 22:22:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:06:35.587 22:22:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:06:35.587 22:22:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:06:35.587 22:22:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:06:35.587 22:22:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:06:35.587 22:22:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:06:35.587 22:22:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:06:35.587 22:22:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:06:35.587 22:22:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:06:35.587 22:22:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:06:35.587 22:22:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_IOAT 00:06:35.587 22:22:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:06:35.587 22:22:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_BLOBFS 00:06:35.587 22:22:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:06:35.587 22:22:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_VHOST_INIT 00:06:35.587 22:22:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:06:35.587 22:22:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_LVOL 00:06:35.587 22:22:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:06:35.587 22:22:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_VBDEV_COMPRESS 00:06:35.587 22:22:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:06:35.587 22:22:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_RUN_ASAN 00:06:35.587 22:22:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 1 00:06:35.587 22:22:59 nvmf_tcp.nvmf_filesystem -- 
common/autotest_common.sh@123 -- # export SPDK_RUN_UBSAN 00:06:35.587 22:22:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 00:06:35.587 22:22:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_EXTERNAL_DPDK 00:06:35.587 22:22:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 0 00:06:35.587 22:22:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_NON_ROOT 00:06:35.587 22:22:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:06:35.587 22:22:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_TEST_CRYPTO 00:06:35.587 22:22:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:06:35.587 22:22:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_FTL 00:06:35.587 22:22:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:06:35.587 22:22:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_OCF 00:06:35.587 22:22:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:06:35.587 22:22:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_VMD 00:06:35.587 22:22:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:06:35.587 22:22:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_OPAL 00:06:35.587 22:22:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 00:06:35.587 22:22:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_NATIVE_DPDK 00:06:35.587 22:22:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@140 -- # : true 00:06:35.587 22:22:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_AUTOTEST_X 00:06:35.587 22:22:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@142 -- # : 0 00:06:35.587 22:22:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_TEST_RAID5 00:06:35.587 22:22:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:06:35.587 22:22:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:06:35.587 22:22:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:06:35.587 22:22:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:06:35.587 22:22:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:06:35.587 22:22:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:06:35.587 22:22:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:06:35.587 22:22:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:06:35.587 22:22:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:06:35.587 22:22:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:06:35.587 22:22:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:06:35.587 22:22:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:06:35.587 22:22:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:06:35.587 22:22:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:06:35.587 22:22:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:06:35.587 22:22:59 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:06:35.587 22:22:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:06:35.587 22:22:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:06:35.587 22:22:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:06:35.587 22:22:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL_DSA 00:06:35.587 22:22:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:06:35.587 22:22:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_IAA 00:06:35.587 22:22:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@167 -- # : 00:06:35.587 22:22:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@168 -- # export SPDK_TEST_FUZZER_TARGET 00:06:35.587 22:22:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 0 00:06:35.587 22:22:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_NVMF_MDNS 00:06:35.587 22:22:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:06:35.587 22:22:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_JSONRPC_GO_CLIENT 00:06:35.587 22:22:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:06:35.587 22:22:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:06:35.587 22:22:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:06:35.587 22:22:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:06:35.587 22:22:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:06:35.587 22:22:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:06:35.587 22:22:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@178 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:06:35.587 22:22:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@178 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:06:35.587 22:22:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@181 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:06:35.587 22:22:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@181 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:06:35.587 22:22:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@185 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:06:35.587 22:22:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@185 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:06:35.587 22:22:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@189 -- # export PYTHONDONTWRITEBYTECODE=1 00:06:35.587 22:22:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@189 -- # PYTHONDONTWRITEBYTECODE=1 00:06:35.587 22:22:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:06:35.587 22:22:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:06:35.587 22:22:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@194 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:06:35.587 22:22:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@194 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:06:35.587 22:22:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@198 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:06:35.587 22:22:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@199 -- # rm -rf /var/tmp/asan_suppression_file 00:06:35.587 22:22:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@200 -- # cat 00:06:35.587 22:22:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@236 -- # echo 
leak:libfuse3.so 00:06:35.587 22:22:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@238 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:06:35.587 22:22:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@238 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:06:35.587 22:22:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@240 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:06:35.587 22:22:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@240 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:06:35.587 22:22:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@242 -- # '[' -z /var/spdk/dependencies ']' 00:06:35.587 22:22:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@245 -- # export DEPENDENCY_DIR 00:06:35.587 22:22:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:06:35.587 22:22:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:06:35.588 22:22:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@250 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:06:35.588 22:22:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@250 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:06:35.588 22:22:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:06:35.588 22:22:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:06:35.588 22:22:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@254 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:06:35.588 22:22:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@254 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:06:35.588 22:22:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@256 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:06:35.588 22:22:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@256 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:06:35.588 22:22:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@259 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:06:35.588 22:22:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@259 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:06:35.588 22:22:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@262 -- # '[' 0 -eq 0 ']' 00:06:35.588 22:22:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@263 -- # export valgrind= 00:06:35.588 22:22:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@263 -- # valgrind= 00:06:35.588 22:22:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@269 -- # uname -s 00:06:35.588 22:22:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@269 -- # '[' Linux = Linux ']' 00:06:35.588 22:22:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@270 -- # HUGEMEM=4096 00:06:35.588 22:22:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # export CLEAR_HUGE=yes 00:06:35.588 22:22:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # CLEAR_HUGE=yes 00:06:35.588 22:22:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 
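The run of ': 0' / 'export SPDK_TEST_*' pairs traced above is the usual bash default-then-export idiom: each flag is assigned a default only if the caller left it unset, then exported, so values preset earlier in autorun-spdk.conf show up in the trace as ': 1' or ': tcp' instead. A minimal sketch of that idiom, with a made-up flag name (the real list lives in autotest_common.sh):

  : "${SPDK_TEST_EXAMPLE:=0}"   # hypothetical flag; assigns 0 only when unset, traces as ': 0'
  export SPDK_TEST_EXAMPLE      # traces as 'export SPDK_TEST_EXAMPLE', as in the entries above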
00:06:35.588 22:22:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:06:35.588 22:22:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@279 -- # MAKE=make 00:06:35.588 22:22:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@280 -- # MAKEFLAGS=-j96 00:06:35.588 22:22:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@296 -- # export HUGEMEM=4096 00:06:35.588 22:22:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@296 -- # HUGEMEM=4096 00:06:35.588 22:22:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@298 -- # NO_HUGE=() 00:06:35.588 22:22:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@299 -- # TEST_MODE= 00:06:35.588 22:22:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@300 -- # for i in "$@" 00:06:35.588 22:22:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@301 -- # case "$i" in 00:06:35.588 22:22:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@306 -- # TEST_TRANSPORT=tcp 00:06:35.588 22:22:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@318 -- # [[ -z 4041067 ]] 00:06:35.588 22:22:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@318 -- # kill -0 4041067 00:06:35.588 22:22:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1680 -- # set_test_storage 2147483648 00:06:35.588 22:22:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@328 -- # [[ -v testdir ]] 00:06:35.588 22:22:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@330 -- # local requested_size=2147483648 00:06:35.588 22:22:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@331 -- # local mount target_dir 00:06:35.588 22:22:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@333 -- # local -A mounts fss sizes avails uses 00:06:35.588 22:22:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@334 -- # local source fs size avail mount use 00:06:35.588 22:22:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@336 -- # local storage_fallback storage_candidates 00:06:35.588 22:22:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@338 -- # mktemp -udt spdk.XXXXXX 00:06:35.588 22:22:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@338 -- # storage_fallback=/tmp/spdk.7ZbUC3 00:06:35.588 22:22:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@343 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:06:35.588 22:22:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@345 -- # [[ -n '' ]] 00:06:35.588 22:22:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@350 -- # [[ -n '' ]] 00:06:35.588 22:22:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@355 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.7ZbUC3/tests/target /tmp/spdk.7ZbUC3 00:06:35.588 22:22:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@358 -- # requested_size=2214592512 00:06:35.588 22:22:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:35.588 22:22:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@327 -- # df -T 00:06:35.588 22:22:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@327 -- # grep -v Filesystem 00:06:35.588 22:22:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_devtmpfs 00:06:35.588 22:22:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=devtmpfs 00:06:35.588 22:22:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # 
avails["$mount"]=67108864 00:06:35.588 22:22:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=67108864 00:06:35.588 22:22:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=0 00:06:35.588 22:22:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:35.588 22:22:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/pmem0 00:06:35.588 22:22:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=ext2 00:06:35.588 22:22:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=950202368 00:06:35.588 22:22:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=5284429824 00:06:35.588 22:22:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4334227456 00:06:35.588 22:22:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:35.588 22:22:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_root 00:06:35.588 22:22:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=overlay 00:06:35.588 22:22:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=189604016128 00:06:35.588 22:22:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=195974299648 00:06:35.588 22:22:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=6370283520 00:06:35.588 22:22:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:35.588 22:22:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:06:35.588 22:22:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:06:35.588 22:22:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=97983774720 00:06:35.588 22:22:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=97987149824 00:06:35.588 22:22:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=3375104 00:06:35.588 22:22:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:35.588 22:22:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:06:35.588 22:22:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:06:35.588 22:22:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=39185485824 00:06:35.588 22:22:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=39194861568 00:06:35.588 22:22:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=9375744 00:06:35.588 22:22:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:35.588 22:22:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:06:35.588 22:22:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:06:35.588 22:22:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=97986277376 00:06:35.588 22:22:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=97987149824 00:06:35.588 22:22:59 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=872448 00:06:35.588 22:22:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:35.588 22:22:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:06:35.588 22:22:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:06:35.588 22:22:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=19597422592 00:06:35.588 22:22:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=19597426688 00:06:35.588 22:22:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4096 00:06:35.588 22:22:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:35.588 22:22:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@366 -- # printf '* Looking for test storage...\n' 00:06:35.588 * Looking for test storage... 00:06:35.588 22:22:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@368 -- # local target_space new_size 00:06:35.588 22:22:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@369 -- # for target_dir in "${storage_candidates[@]}" 00:06:35.588 22:22:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:35.588 22:22:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # awk '$1 !~ /Filesystem/{print $6}' 00:06:35.588 22:22:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # mount=/ 00:06:35.588 22:22:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@374 -- # target_space=189604016128 00:06:35.588 22:22:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@375 -- # (( target_space == 0 || target_space < requested_size )) 00:06:35.588 22:22:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@378 -- # (( target_space >= requested_size )) 00:06:35.588 22:22:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ overlay == tmpfs ]] 00:06:35.588 22:22:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ overlay == ramfs ]] 00:06:35.588 22:22:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ / == / ]] 00:06:35.588 22:22:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@381 -- # new_size=8584876032 00:06:35.588 22:22:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@382 -- # (( new_size * 100 / sizes[/] > 95 )) 00:06:35.588 22:22:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@387 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:35.588 22:22:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@387 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:35.588 22:22:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@388 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:35.588 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:35.588 22:22:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@389 -- # return 0 00:06:35.588 22:22:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1682 -- # set -o errtrace 00:06:35.588 22:22:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1683 -- # shopt -s extdebug 00:06:35.588 22:22:59 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1684 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:06:35.588 22:22:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1686 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:06:35.588 22:22:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1687 -- # true 00:06:35.588 22:22:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1689 -- # xtrace_fd 00:06:35.588 22:22:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:06:35.588 22:22:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:06:35.588 22:22:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:06:35.588 22:22:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:06:35.589 22:22:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:06:35.589 22:22:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:06:35.589 22:22:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:06:35.589 22:22:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:06:35.589 22:22:59 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:35.589 22:22:59 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:06:35.589 22:22:59 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:35.589 22:22:59 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:35.589 22:22:59 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:35.589 22:22:59 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:35.589 22:22:59 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:35.589 22:22:59 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:35.589 22:22:59 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:35.589 22:22:59 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:35.589 22:22:59 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:35.589 22:22:59 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:35.589 22:22:59 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:06:35.589 22:22:59 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:06:35.589 22:22:59 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:35.589 22:22:59 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:35.589 22:22:59 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:35.848 22:22:59 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:35.848 22:22:59 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:35.848 22:22:59 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:35.848 22:22:59 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 
00:06:35.848 22:22:59 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:35.848 22:22:59 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:35.848 22:22:59 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:35.848 22:22:59 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:35.848 22:22:59 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:06:35.848 22:22:59 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:35.848 22:22:59 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@47 -- # : 0 00:06:35.848 22:22:59 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:35.848 22:22:59 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:35.849 22:22:59 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:35.849 22:22:59 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:35.849 22:22:59 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:35.849 22:22:59 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:35.849 22:22:59 
nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:35.849 22:22:59 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:35.849 22:22:59 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:06:35.849 22:22:59 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:06:35.849 22:22:59 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:06:35.849 22:22:59 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:06:35.849 22:22:59 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:35.849 22:22:59 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:06:35.849 22:22:59 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:06:35.849 22:22:59 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:06:35.849 22:22:59 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:35.849 22:22:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:35.849 22:22:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:35.849 22:22:59 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:06:35.849 22:22:59 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:06:35.849 22:22:59 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@285 -- # xtrace_disable 00:06:35.849 22:22:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:06:41.125 22:23:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:41.125 22:23:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@291 -- # pci_devs=() 00:06:41.125 22:23:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:06:41.125 22:23:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:06:41.125 22:23:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:06:41.125 22:23:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:06:41.125 22:23:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:06:41.125 22:23:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@295 -- # net_devs=() 00:06:41.125 22:23:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:06:41.125 22:23:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@296 -- # e810=() 00:06:41.125 22:23:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@296 -- # local -ga e810 00:06:41.125 22:23:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@297 -- # x722=() 00:06:41.125 22:23:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@297 -- # local -ga x722 00:06:41.125 22:23:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@298 -- # mlx=() 00:06:41.125 22:23:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@298 -- # local -ga mlx 00:06:41.125 22:23:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:41.126 22:23:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:41.126 22:23:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:41.126 22:23:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:41.126 22:23:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@308 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:41.126 22:23:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:41.126 22:23:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:41.126 22:23:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:41.126 22:23:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:41.126 22:23:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:41.126 22:23:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:41.126 22:23:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:06:41.126 22:23:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:06:41.126 22:23:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:06:41.126 22:23:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:06:41.126 22:23:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:06:41.126 22:23:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:06:41.126 22:23:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:41.126 22:23:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:06:41.126 Found 0000:86:00.0 (0x8086 - 0x159b) 00:06:41.126 22:23:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:41.126 22:23:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:41.126 22:23:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:41.126 22:23:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:41.126 22:23:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:41.126 22:23:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:41.126 22:23:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:06:41.126 Found 0000:86:00.1 (0x8086 - 0x159b) 00:06:41.126 22:23:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:41.126 22:23:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:41.126 22:23:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:41.126 22:23:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:41.126 22:23:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:41.126 22:23:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:06:41.126 22:23:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:06:41.126 22:23:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:06:41.126 22:23:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:41.126 22:23:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:41.126 22:23:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:41.126 22:23:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:41.126 22:23:04 
nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:41.126 22:23:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:41.126 22:23:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:41.126 22:23:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:06:41.126 Found net devices under 0000:86:00.0: cvl_0_0 00:06:41.126 22:23:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:41.126 22:23:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:41.126 22:23:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:41.126 22:23:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:41.126 22:23:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:41.126 22:23:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:41.126 22:23:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:41.126 22:23:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:41.126 22:23:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:06:41.126 Found net devices under 0000:86:00.1: cvl_0_1 00:06:41.126 22:23:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:41.126 22:23:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:06:41.126 22:23:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # is_hw=yes 00:06:41.126 22:23:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:06:41.126 22:23:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:06:41.126 22:23:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:06:41.126 22:23:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:41.126 22:23:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:41.126 22:23:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:41.126 22:23:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:06:41.126 22:23:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:41.126 22:23:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:41.126 22:23:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:06:41.126 22:23:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:41.126 22:23:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:41.126 22:23:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:06:41.126 22:23:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:06:41.126 22:23:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:06:41.126 22:23:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:41.126 22:23:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:41.126 22:23:04 nvmf_tcp.nvmf_filesystem -- 
nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:41.126 22:23:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:06:41.126 22:23:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:41.126 22:23:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:41.126 22:23:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:41.126 22:23:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:06:41.126 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:41.126 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.197 ms 00:06:41.126 00:06:41.126 --- 10.0.0.2 ping statistics --- 00:06:41.126 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:41.126 rtt min/avg/max/mdev = 0.197/0.197/0.197/0.000 ms 00:06:41.126 22:23:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:41.126 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:41.126 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.150 ms 00:06:41.126 00:06:41.126 --- 10.0.0.1 ping statistics --- 00:06:41.126 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:41.126 rtt min/avg/max/mdev = 0.150/0.150/0.150/0.000 ms 00:06:41.126 22:23:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:41.126 22:23:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@422 -- # return 0 00:06:41.126 22:23:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:06:41.126 22:23:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:41.126 22:23:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:06:41.126 22:23:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:06:41.126 22:23:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:41.126 22:23:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:06:41.126 22:23:04 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:06:41.126 22:23:04 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:06:41.126 22:23:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:41.126 22:23:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:41.126 22:23:04 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:06:41.126 ************************************ 00:06:41.126 START TEST nvmf_filesystem_no_in_capsule 00:06:41.126 ************************************ 00:06:41.126 22:23:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1123 -- # nvmf_filesystem_part 0 00:06:41.126 22:23:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:06:41.126 22:23:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:06:41.126 22:23:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:06:41.126 22:23:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@722 -- # 
xtrace_disable 00:06:41.126 22:23:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:41.126 22:23:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=4044031 00:06:41.126 22:23:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 4044031 00:06:41.126 22:23:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@829 -- # '[' -z 4044031 ']' 00:06:41.126 22:23:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:41.126 22:23:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:41.126 22:23:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:41.126 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:41.126 22:23:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:41.126 22:23:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:06:41.126 22:23:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:41.127 [2024-07-15 22:23:04.859191] Starting SPDK v24.09-pre git sha1 f8598a71f / DPDK 24.03.0 initialization... 00:06:41.127 [2024-07-15 22:23:04.859244] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:41.127 EAL: No free 2048 kB hugepages reported on node 1 00:06:41.127 [2024-07-15 22:23:04.918137] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:41.127 [2024-07-15 22:23:05.002563] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:41.127 [2024-07-15 22:23:05.002598] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:41.127 [2024-07-15 22:23:05.002605] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:41.127 [2024-07-15 22:23:05.002611] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:41.127 [2024-07-15 22:23:05.002617] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:06:41.127 [2024-07-15 22:23:05.002690] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:41.127 [2024-07-15 22:23:05.002712] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:41.127 [2024-07-15 22:23:05.002735] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:41.127 [2024-07-15 22:23:05.002736] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:41.694 22:23:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:41.694 22:23:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@862 -- # return 0 00:06:41.694 22:23:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:06:41.694 22:23:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:41.694 22:23:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:41.953 22:23:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:41.953 22:23:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:06:41.953 22:23:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:06:41.953 22:23:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:41.953 22:23:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:41.953 [2024-07-15 22:23:05.697206] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:41.953 22:23:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:41.953 22:23:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:06:41.953 22:23:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:41.953 22:23:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:41.953 Malloc1 00:06:41.953 22:23:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:41.954 22:23:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:06:41.954 22:23:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:41.954 22:23:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:41.954 22:23:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:41.954 22:23:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:06:41.954 22:23:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:41.954 22:23:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@10 -- # set +x 00:06:41.954 22:23:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:41.954 22:23:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:41.954 22:23:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:41.954 22:23:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:41.954 [2024-07-15 22:23:05.844158] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:41.954 22:23:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:41.954 22:23:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:06:41.954 22:23:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:06:41.954 22:23:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:06:41.954 22:23:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:06:41.954 22:23:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:06:41.954 22:23:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:06:41.954 22:23:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:41.954 22:23:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:41.954 22:23:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:41.954 22:23:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:06:41.954 { 00:06:41.954 "name": "Malloc1", 00:06:41.954 "aliases": [ 00:06:41.954 "04761ebe-2193-4b8b-b153-7cff4bf45844" 00:06:41.954 ], 00:06:41.954 "product_name": "Malloc disk", 00:06:41.954 "block_size": 512, 00:06:41.954 "num_blocks": 1048576, 00:06:41.954 "uuid": "04761ebe-2193-4b8b-b153-7cff4bf45844", 00:06:41.954 "assigned_rate_limits": { 00:06:41.954 "rw_ios_per_sec": 0, 00:06:41.954 "rw_mbytes_per_sec": 0, 00:06:41.954 "r_mbytes_per_sec": 0, 00:06:41.954 "w_mbytes_per_sec": 0 00:06:41.954 }, 00:06:41.954 "claimed": true, 00:06:41.954 "claim_type": "exclusive_write", 00:06:41.954 "zoned": false, 00:06:41.954 "supported_io_types": { 00:06:41.954 "read": true, 00:06:41.954 "write": true, 00:06:41.954 "unmap": true, 00:06:41.954 "flush": true, 00:06:41.954 "reset": true, 00:06:41.954 "nvme_admin": false, 00:06:41.954 "nvme_io": false, 00:06:41.954 "nvme_io_md": false, 00:06:41.954 "write_zeroes": true, 00:06:41.954 "zcopy": true, 00:06:41.954 "get_zone_info": false, 00:06:41.954 "zone_management": false, 00:06:41.954 "zone_append": false, 00:06:41.954 "compare": false, 00:06:41.954 "compare_and_write": false, 00:06:41.954 "abort": true, 00:06:41.954 "seek_hole": false, 00:06:41.954 "seek_data": false, 00:06:41.954 "copy": true, 00:06:41.954 "nvme_iov_md": false 00:06:41.954 }, 00:06:41.954 "memory_domains": [ 00:06:41.954 { 
00:06:41.954 "dma_device_id": "system", 00:06:41.954 "dma_device_type": 1 00:06:41.954 }, 00:06:41.954 { 00:06:41.954 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:41.954 "dma_device_type": 2 00:06:41.954 } 00:06:41.954 ], 00:06:41.954 "driver_specific": {} 00:06:41.954 } 00:06:41.954 ]' 00:06:41.954 22:23:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:06:41.954 22:23:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:06:41.954 22:23:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:06:42.212 22:23:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:06:42.212 22:23:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:06:42.212 22:23:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:06:42.212 22:23:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:06:42.212 22:23:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:06:43.149 22:23:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:06:43.149 22:23:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:06:43.149 22:23:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:06:43.149 22:23:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:06:43.149 22:23:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:06:45.682 22:23:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:06:45.682 22:23:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:06:45.682 22:23:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:06:45.682 22:23:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:06:45.682 22:23:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:06:45.682 22:23:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:06:45.682 22:23:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:06:45.682 22:23:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:06:45.682 22:23:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:06:45.682 22:23:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # 
sec_size_to_bytes nvme0n1 00:06:45.682 22:23:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:06:45.682 22:23:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:06:45.682 22:23:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:06:45.683 22:23:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:06:45.683 22:23:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:06:45.683 22:23:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:06:45.683 22:23:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:06:45.683 22:23:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:06:45.940 22:23:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:06:46.873 22:23:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:06:46.873 22:23:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:06:46.873 22:23:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:46.873 22:23:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:46.873 22:23:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:47.131 ************************************ 00:06:47.131 START TEST filesystem_ext4 00:06:47.131 ************************************ 00:06:47.131 22:23:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create ext4 nvme0n1 00:06:47.131 22:23:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:06:47.131 22:23:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:06:47.131 22:23:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:06:47.131 22:23:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@924 -- # local fstype=ext4 00:06:47.131 22:23:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:06:47.131 22:23:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@926 -- # local i=0 00:06:47.131 22:23:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@927 -- # local force 00:06:47.131 22:23:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@929 -- # '[' ext4 = ext4 ']' 00:06:47.131 22:23:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # force=-F 00:06:47.131 22:23:10 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:06:47.131 mke2fs 1.46.5 (30-Dec-2021) 00:06:47.131 Discarding device blocks: 0/522240 done 00:06:47.131 Creating filesystem with 522240 1k blocks and 130560 inodes 00:06:47.131 Filesystem UUID: 4cc25777-63b4-45e2-aaa0-da98753900cd 00:06:47.131 Superblock backups stored on blocks: 00:06:47.131 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:06:47.131 00:06:47.131 Allocating group tables: 0/64 done 00:06:47.131 Writing inode tables: 0/64 done 00:06:47.389 Creating journal (8192 blocks): done 00:06:48.324 Writing superblocks and filesystem accounting information: 0/64 done 00:06:48.324 00:06:48.324 22:23:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@943 -- # return 0 00:06:48.324 22:23:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:06:48.324 22:23:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:06:48.324 22:23:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:06:48.324 22:23:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:06:48.324 22:23:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:06:48.324 22:23:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:06:48.324 22:23:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:06:48.324 22:23:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 4044031 00:06:48.324 22:23:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:06:48.324 22:23:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:06:48.324 22:23:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:06:48.324 22:23:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:06:48.324 00:06:48.324 real 0m1.409s 00:06:48.324 user 0m0.032s 00:06:48.324 sys 0m0.059s 00:06:48.324 22:23:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:48.324 22:23:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:06:48.324 ************************************ 00:06:48.324 END TEST filesystem_ext4 00:06:48.324 ************************************ 00:06:48.324 22:23:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:06:48.324 22:23:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:06:48.618 22:23:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:48.618 22:23:12 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:48.618 22:23:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:48.618 ************************************ 00:06:48.618 START TEST filesystem_btrfs 00:06:48.618 ************************************ 00:06:48.618 22:23:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create btrfs nvme0n1 00:06:48.618 22:23:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:06:48.618 22:23:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:06:48.618 22:23:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:06:48.618 22:23:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@924 -- # local fstype=btrfs 00:06:48.618 22:23:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:06:48.618 22:23:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@926 -- # local i=0 00:06:48.618 22:23:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@927 -- # local force 00:06:48.618 22:23:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@929 -- # '[' btrfs = ext4 ']' 00:06:48.618 22:23:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@932 -- # force=-f 00:06:48.618 22:23:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@935 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:06:48.618 btrfs-progs v6.6.2 00:06:48.618 See https://btrfs.readthedocs.io for more information. 00:06:48.618 00:06:48.618 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:06:48.618 NOTE: several default settings have changed in version 5.15, please make sure 00:06:48.618 this does not affect your deployments: 00:06:48.618 - DUP for metadata (-m dup) 00:06:48.618 - enabled no-holes (-O no-holes) 00:06:48.618 - enabled free-space-tree (-R free-space-tree) 00:06:48.618 00:06:48.618 Label: (null) 00:06:48.618 UUID: 62b7ec82-c39b-4458-b429-90ed664a8ad2 00:06:48.618 Node size: 16384 00:06:48.618 Sector size: 4096 00:06:48.618 Filesystem size: 510.00MiB 00:06:48.618 Block group profiles: 00:06:48.618 Data: single 8.00MiB 00:06:48.618 Metadata: DUP 32.00MiB 00:06:48.618 System: DUP 8.00MiB 00:06:48.618 SSD detected: yes 00:06:48.618 Zoned device: no 00:06:48.618 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:06:48.618 Runtime features: free-space-tree 00:06:48.618 Checksum: crc32c 00:06:48.618 Number of devices: 1 00:06:48.618 Devices: 00:06:48.618 ID SIZE PATH 00:06:48.618 1 510.00MiB /dev/nvme0n1p1 00:06:48.618 00:06:48.618 22:23:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@943 -- # return 0 00:06:48.618 22:23:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:06:48.877 22:23:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:06:48.877 22:23:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:06:48.877 22:23:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:06:48.877 22:23:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:06:49.135 22:23:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:06:49.135 22:23:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:06:49.135 22:23:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 4044031 00:06:49.135 22:23:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:06:49.135 22:23:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:06:49.135 22:23:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:06:49.135 22:23:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:06:49.135 00:06:49.135 real 0m0.565s 00:06:49.135 user 0m0.027s 00:06:49.135 sys 0m0.126s 00:06:49.135 22:23:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:49.135 22:23:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:06:49.135 ************************************ 00:06:49.135 END TEST filesystem_btrfs 00:06:49.135 ************************************ 00:06:49.135 22:23:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:06:49.135 22:23:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:06:49.135 22:23:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:49.135 22:23:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:49.135 22:23:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:49.135 ************************************ 00:06:49.135 START TEST filesystem_xfs 00:06:49.135 ************************************ 00:06:49.135 22:23:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create xfs nvme0n1 00:06:49.135 22:23:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:06:49.135 22:23:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:06:49.135 22:23:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:06:49.135 22:23:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@924 -- # local fstype=xfs 00:06:49.135 22:23:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:06:49.135 22:23:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@926 -- # local i=0 00:06:49.135 22:23:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@927 -- # local force 00:06:49.135 22:23:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@929 -- # '[' xfs = ext4 ']' 00:06:49.135 22:23:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@932 -- # force=-f 00:06:49.135 22:23:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@935 -- # mkfs.xfs -f /dev/nvme0n1p1 00:06:49.135 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:06:49.135 = sectsz=512 attr=2, projid32bit=1 00:06:49.135 = crc=1 finobt=1, sparse=1, rmapbt=0 00:06:49.135 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:06:49.135 data = bsize=4096 blocks=130560, imaxpct=25 00:06:49.135 = sunit=0 swidth=0 blks 00:06:49.135 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:06:49.135 log =internal log bsize=4096 blocks=16384, version=2 00:06:49.135 = sectsz=512 sunit=0 blks, lazy-count=1 00:06:49.135 realtime =none extsz=4096 blocks=0, rtextents=0 00:06:50.074 Discarding blocks...Done. 
00:06:50.074 22:23:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@943 -- # return 0 00:06:50.074 22:23:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:06:52.608 22:23:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:06:52.608 22:23:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:06:52.608 22:23:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:06:52.608 22:23:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:06:52.608 22:23:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:06:52.608 22:23:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:06:52.608 22:23:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 4044031 00:06:52.608 22:23:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:06:52.608 22:23:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:06:52.608 22:23:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:06:52.608 22:23:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:06:52.608 00:06:52.608 real 0m3.073s 00:06:52.608 user 0m0.023s 00:06:52.608 sys 0m0.074s 00:06:52.608 22:23:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:52.608 22:23:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:06:52.608 ************************************ 00:06:52.608 END TEST filesystem_xfs 00:06:52.608 ************************************ 00:06:52.608 22:23:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:06:52.608 22:23:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:06:52.608 22:23:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:06:52.608 22:23:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:06:52.608 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:06:52.608 22:23:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:06:52.608 22:23:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:06:52.608 22:23:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:06:52.608 22:23:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:06:52.608 22:23:16 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:06:52.608 22:23:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:06:52.608 22:23:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:06:52.608 22:23:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:06:52.608 22:23:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:52.608 22:23:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:52.608 22:23:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:52.608 22:23:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:06:52.608 22:23:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 4044031 00:06:52.608 22:23:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@948 -- # '[' -z 4044031 ']' 00:06:52.608 22:23:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@952 -- # kill -0 4044031 00:06:52.608 22:23:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@953 -- # uname 00:06:52.608 22:23:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:52.608 22:23:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4044031 00:06:52.608 22:23:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:52.608 22:23:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:52.608 22:23:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4044031' 00:06:52.608 killing process with pid 4044031 00:06:52.608 22:23:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@967 -- # kill 4044031 00:06:52.608 22:23:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@972 -- # wait 4044031 00:06:53.174 22:23:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:06:53.174 00:06:53.174 real 0m12.095s 00:06:53.174 user 0m47.475s 00:06:53.174 sys 0m1.213s 00:06:53.174 22:23:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:53.174 22:23:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:53.174 ************************************ 00:06:53.174 END TEST nvmf_filesystem_no_in_capsule 00:06:53.174 ************************************ 00:06:53.174 22:23:16 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1142 -- # return 0 00:06:53.174 22:23:16 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:06:53.174 22:23:16 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1099 -- # '[' 3 
-le 1 ']' 00:06:53.174 22:23:16 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:53.174 22:23:16 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:06:53.174 ************************************ 00:06:53.174 START TEST nvmf_filesystem_in_capsule 00:06:53.174 ************************************ 00:06:53.174 22:23:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1123 -- # nvmf_filesystem_part 4096 00:06:53.174 22:23:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:06:53.174 22:23:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:06:53.174 22:23:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:06:53.174 22:23:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:53.174 22:23:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:53.174 22:23:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=4046179 00:06:53.174 22:23:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 4046179 00:06:53.174 22:23:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:06:53.174 22:23:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@829 -- # '[' -z 4046179 ']' 00:06:53.174 22:23:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:53.174 22:23:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:53.174 22:23:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:53.174 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:53.174 22:23:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:53.174 22:23:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:53.174 [2024-07-15 22:23:17.022660] Starting SPDK v24.09-pre git sha1 f8598a71f / DPDK 24.03.0 initialization... 00:06:53.174 [2024-07-15 22:23:17.022698] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:53.174 EAL: No free 2048 kB hugepages reported on node 1 00:06:53.174 [2024-07-15 22:23:17.078894] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:53.432 [2024-07-15 22:23:17.159541] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:53.432 [2024-07-15 22:23:17.159578] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:06:53.432 [2024-07-15 22:23:17.159585] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:53.432 [2024-07-15 22:23:17.159591] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:53.432 [2024-07-15 22:23:17.159597] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:53.433 [2024-07-15 22:23:17.159644] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:53.433 [2024-07-15 22:23:17.159668] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:53.433 [2024-07-15 22:23:17.159757] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:53.433 [2024-07-15 22:23:17.159759] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:54.000 22:23:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:54.000 22:23:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@862 -- # return 0 00:06:54.000 22:23:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:06:54.000 22:23:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:54.000 22:23:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:54.000 22:23:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:54.000 22:23:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:06:54.000 22:23:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:06:54.000 22:23:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:54.000 22:23:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:54.000 [2024-07-15 22:23:17.873272] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:54.001 22:23:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:54.001 22:23:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:06:54.001 22:23:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:54.001 22:23:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:54.259 Malloc1 00:06:54.259 22:23:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:54.259 22:23:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:06:54.259 22:23:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:54.259 22:23:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:54.259 22:23:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:54.259 22:23:18 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:06:54.259 22:23:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:54.259 22:23:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:54.259 22:23:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:54.259 22:23:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:54.259 22:23:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:54.259 22:23:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:54.259 [2024-07-15 22:23:18.022816] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:54.259 22:23:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:54.259 22:23:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:06:54.259 22:23:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:06:54.259 22:23:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:06:54.259 22:23:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:06:54.259 22:23:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:06:54.260 22:23:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:06:54.260 22:23:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:54.260 22:23:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:54.260 22:23:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:54.260 22:23:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:06:54.260 { 00:06:54.260 "name": "Malloc1", 00:06:54.260 "aliases": [ 00:06:54.260 "96260631-06a2-4850-8ba3-8a2ef6093161" 00:06:54.260 ], 00:06:54.260 "product_name": "Malloc disk", 00:06:54.260 "block_size": 512, 00:06:54.260 "num_blocks": 1048576, 00:06:54.260 "uuid": "96260631-06a2-4850-8ba3-8a2ef6093161", 00:06:54.260 "assigned_rate_limits": { 00:06:54.260 "rw_ios_per_sec": 0, 00:06:54.260 "rw_mbytes_per_sec": 0, 00:06:54.260 "r_mbytes_per_sec": 0, 00:06:54.260 "w_mbytes_per_sec": 0 00:06:54.260 }, 00:06:54.260 "claimed": true, 00:06:54.260 "claim_type": "exclusive_write", 00:06:54.260 "zoned": false, 00:06:54.260 "supported_io_types": { 00:06:54.260 "read": true, 00:06:54.260 "write": true, 00:06:54.260 "unmap": true, 00:06:54.260 "flush": true, 00:06:54.260 "reset": true, 00:06:54.260 "nvme_admin": false, 00:06:54.260 "nvme_io": false, 00:06:54.260 "nvme_io_md": false, 00:06:54.260 "write_zeroes": true, 00:06:54.260 "zcopy": true, 00:06:54.260 "get_zone_info": false, 00:06:54.260 "zone_management": false, 00:06:54.260 
"zone_append": false, 00:06:54.260 "compare": false, 00:06:54.260 "compare_and_write": false, 00:06:54.260 "abort": true, 00:06:54.260 "seek_hole": false, 00:06:54.260 "seek_data": false, 00:06:54.260 "copy": true, 00:06:54.260 "nvme_iov_md": false 00:06:54.260 }, 00:06:54.260 "memory_domains": [ 00:06:54.260 { 00:06:54.260 "dma_device_id": "system", 00:06:54.260 "dma_device_type": 1 00:06:54.260 }, 00:06:54.260 { 00:06:54.260 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:54.260 "dma_device_type": 2 00:06:54.260 } 00:06:54.260 ], 00:06:54.260 "driver_specific": {} 00:06:54.260 } 00:06:54.260 ]' 00:06:54.260 22:23:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:06:54.260 22:23:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:06:54.260 22:23:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:06:54.260 22:23:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:06:54.260 22:23:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:06:54.260 22:23:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:06:54.260 22:23:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:06:54.260 22:23:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:06:55.636 22:23:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:06:55.636 22:23:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:06:55.636 22:23:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:06:55.636 22:23:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:06:55.636 22:23:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:06:57.541 22:23:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:06:57.541 22:23:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:06:57.541 22:23:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:06:57.541 22:23:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:06:57.541 22:23:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:06:57.541 22:23:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:06:57.541 22:23:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:06:57.541 22:23:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 
00:06:57.541 22:23:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:06:57.541 22:23:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:06:57.541 22:23:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:06:57.541 22:23:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:06:57.541 22:23:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:06:57.541 22:23:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:06:57.541 22:23:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:06:57.541 22:23:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:06:57.541 22:23:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:06:57.800 22:23:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:06:57.800 22:23:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:06:59.179 22:23:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:06:59.179 22:23:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:06:59.179 22:23:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:59.179 22:23:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:59.179 22:23:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:59.179 ************************************ 00:06:59.179 START TEST filesystem_in_capsule_ext4 00:06:59.179 ************************************ 00:06:59.179 22:23:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create ext4 nvme0n1 00:06:59.179 22:23:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:06:59.179 22:23:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:06:59.179 22:23:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:06:59.179 22:23:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@924 -- # local fstype=ext4 00:06:59.179 22:23:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:06:59.179 22:23:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@926 -- # local i=0 00:06:59.179 22:23:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@927 -- # local force 00:06:59.179 22:23:22 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@929 -- # '[' ext4 = ext4 ']' 00:06:59.179 22:23:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # force=-F 00:06:59.179 22:23:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:06:59.179 mke2fs 1.46.5 (30-Dec-2021) 00:06:59.179 Discarding device blocks: 0/522240 done 00:06:59.179 Creating filesystem with 522240 1k blocks and 130560 inodes 00:06:59.179 Filesystem UUID: 037de505-8c14-455d-9e51-99c370ec8603 00:06:59.179 Superblock backups stored on blocks: 00:06:59.179 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:06:59.179 00:06:59.179 Allocating group tables: 0/64 done 00:06:59.179 Writing inode tables: 0/64 done 00:06:59.179 Creating journal (8192 blocks): done 00:06:59.179 Writing superblocks and filesystem accounting information: 0/64 done 00:06:59.179 00:06:59.179 22:23:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@943 -- # return 0 00:06:59.179 22:23:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:00.114 22:23:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:00.114 22:23:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:07:00.114 22:23:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:00.114 22:23:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:07:00.114 22:23:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:07:00.114 22:23:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:00.114 22:23:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 4046179 00:07:00.114 22:23:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:00.114 22:23:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:00.114 22:23:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:00.114 22:23:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:00.114 00:07:00.114 real 0m1.071s 00:07:00.114 user 0m0.023s 00:07:00.114 sys 0m0.065s 00:07:00.114 22:23:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:00.114 22:23:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:07:00.114 ************************************ 00:07:00.114 END TEST filesystem_in_capsule_ext4 00:07:00.114 ************************************ 00:07:00.114 
22:23:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:07:00.114 22:23:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:07:00.114 22:23:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:07:00.114 22:23:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:00.114 22:23:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:00.114 ************************************ 00:07:00.114 START TEST filesystem_in_capsule_btrfs 00:07:00.114 ************************************ 00:07:00.114 22:23:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create btrfs nvme0n1 00:07:00.114 22:23:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:07:00.114 22:23:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:00.114 22:23:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:07:00.114 22:23:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@924 -- # local fstype=btrfs 00:07:00.114 22:23:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:07:00.114 22:23:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@926 -- # local i=0 00:07:00.114 22:23:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@927 -- # local force 00:07:00.114 22:23:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@929 -- # '[' btrfs = ext4 ']' 00:07:00.114 22:23:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@932 -- # force=-f 00:07:00.114 22:23:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@935 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:07:00.373 btrfs-progs v6.6.2 00:07:00.373 See https://btrfs.readthedocs.io for more information. 00:07:00.373 00:07:00.373 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:07:00.373 NOTE: several default settings have changed in version 5.15, please make sure 00:07:00.373 this does not affect your deployments: 00:07:00.373 - DUP for metadata (-m dup) 00:07:00.373 - enabled no-holes (-O no-holes) 00:07:00.373 - enabled free-space-tree (-R free-space-tree) 00:07:00.373 00:07:00.373 Label: (null) 00:07:00.373 UUID: 84c69c24-9118-468d-b1a1-1758d677209a 00:07:00.373 Node size: 16384 00:07:00.373 Sector size: 4096 00:07:00.373 Filesystem size: 510.00MiB 00:07:00.373 Block group profiles: 00:07:00.373 Data: single 8.00MiB 00:07:00.373 Metadata: DUP 32.00MiB 00:07:00.373 System: DUP 8.00MiB 00:07:00.373 SSD detected: yes 00:07:00.373 Zoned device: no 00:07:00.373 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:07:00.373 Runtime features: free-space-tree 00:07:00.373 Checksum: crc32c 00:07:00.373 Number of devices: 1 00:07:00.373 Devices: 00:07:00.373 ID SIZE PATH 00:07:00.373 1 510.00MiB /dev/nvme0n1p1 00:07:00.373 00:07:00.373 22:23:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@943 -- # return 0 00:07:00.373 22:23:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:00.633 22:23:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:00.633 22:23:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:07:00.633 22:23:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:00.633 22:23:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:07:00.633 22:23:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:07:00.633 22:23:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:00.633 22:23:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 4046179 00:07:00.633 22:23:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:00.633 22:23:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:00.633 22:23:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:00.633 22:23:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:00.633 00:07:00.633 real 0m0.596s 00:07:00.633 user 0m0.030s 00:07:00.633 sys 0m0.120s 00:07:00.633 22:23:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:00.633 22:23:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:07:00.633 ************************************ 00:07:00.633 END TEST filesystem_in_capsule_btrfs 00:07:00.633 ************************************ 00:07:00.633 22:23:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule 
-- common/autotest_common.sh@1142 -- # return 0 00:07:00.633 22:23:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:07:00.633 22:23:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:07:00.633 22:23:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:00.633 22:23:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:00.893 ************************************ 00:07:00.893 START TEST filesystem_in_capsule_xfs 00:07:00.893 ************************************ 00:07:00.893 22:23:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create xfs nvme0n1 00:07:00.893 22:23:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:07:00.893 22:23:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:00.893 22:23:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:07:00.893 22:23:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@924 -- # local fstype=xfs 00:07:00.893 22:23:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:07:00.893 22:23:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@926 -- # local i=0 00:07:00.893 22:23:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@927 -- # local force 00:07:00.893 22:23:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@929 -- # '[' xfs = ext4 ']' 00:07:00.893 22:23:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@932 -- # force=-f 00:07:00.893 22:23:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@935 -- # mkfs.xfs -f /dev/nvme0n1p1 00:07:00.893 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:07:00.893 = sectsz=512 attr=2, projid32bit=1 00:07:00.893 = crc=1 finobt=1, sparse=1, rmapbt=0 00:07:00.893 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:07:00.893 data = bsize=4096 blocks=130560, imaxpct=25 00:07:00.893 = sunit=0 swidth=0 blks 00:07:00.893 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:07:00.893 log =internal log bsize=4096 blocks=16384, version=2 00:07:00.893 = sectsz=512 sunit=0 blks, lazy-count=1 00:07:00.893 realtime =none extsz=4096 blocks=0, rtextents=0 00:07:01.831 Discarding blocks...Done. 
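All three filesystem_in_capsule_* subtests above drive the same make_filesystem smoke cycle: pick the force flag (-F for ext4, -f for btrfs and xfs), run mkfs on the partition, then mount, write, sync, delete, and unmount. A condensed sketch of that cycle follows; make_and_check is a hypothetical name (not a helper from the suite), and /dev/nvme0n1p1 is the partition created by the parted step earlier in the log.

    # Generic mkfs-and-smoke-test cycle used by the filesystem subtests.
    make_and_check() {
        local fstype=$1 dev=/dev/nvme0n1p1 force
        case "$fstype" in
            ext4) force=-F ;;   # ext4 forces with -F
            *)    force=-f ;;   # btrfs and xfs force with -f
        esac
        "mkfs.$fstype" "$force" "$dev"
        mount "$dev" /mnt/device
        touch /mnt/device/aaa && sync    # prove the filesystem accepts writes
        rm /mnt/device/aaa && sync
        umount /mnt/device
    }

    make_and_check ext4
    make_and_check btrfs
    make_and_check xfs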
00:07:01.831 22:23:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@943 -- # return 0 00:07:01.831 22:23:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:03.736 22:23:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:03.736 22:23:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:07:03.736 22:23:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:03.736 22:23:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:07:03.736 22:23:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:07:03.736 22:23:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:03.736 22:23:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 4046179 00:07:03.736 22:23:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:03.736 22:23:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:03.736 22:23:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:03.736 22:23:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:03.736 00:07:03.736 real 0m2.811s 00:07:03.736 user 0m0.023s 00:07:03.736 sys 0m0.071s 00:07:03.736 22:23:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:03.736 22:23:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:07:03.736 ************************************ 00:07:03.736 END TEST filesystem_in_capsule_xfs 00:07:03.736 ************************************ 00:07:03.736 22:23:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:07:03.736 22:23:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:07:03.996 22:23:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:07:03.996 22:23:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:03.996 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:03.996 22:23:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:03.996 22:23:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:07:03.996 22:23:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:07:03.996 22:23:27 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:03.996 22:23:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:07:03.996 22:23:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:03.996 22:23:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:07:03.996 22:23:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:03.996 22:23:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:03.996 22:23:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:03.996 22:23:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:03.996 22:23:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:07:03.996 22:23:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 4046179 00:07:03.996 22:23:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@948 -- # '[' -z 4046179 ']' 00:07:03.996 22:23:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@952 -- # kill -0 4046179 00:07:03.996 22:23:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@953 -- # uname 00:07:03.996 22:23:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:03.996 22:23:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4046179 00:07:03.996 22:23:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:03.996 22:23:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:03.996 22:23:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4046179' 00:07:03.996 killing process with pid 4046179 00:07:03.996 22:23:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@967 -- # kill 4046179 00:07:03.996 22:23:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@972 -- # wait 4046179 00:07:04.564 22:23:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:07:04.564 00:07:04.564 real 0m11.328s 00:07:04.564 user 0m44.441s 00:07:04.564 sys 0m1.202s 00:07:04.564 22:23:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:04.564 22:23:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:04.564 ************************************ 00:07:04.564 END TEST nvmf_filesystem_in_capsule 00:07:04.564 ************************************ 00:07:04.564 22:23:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1142 -- # return 0 00:07:04.564 22:23:28 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:07:04.564 22:23:28 nvmf_tcp.nvmf_filesystem -- 
nvmf/common.sh@488 -- # nvmfcleanup 00:07:04.564 22:23:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@117 -- # sync 00:07:04.564 22:23:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:04.564 22:23:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@120 -- # set +e 00:07:04.564 22:23:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:04.564 22:23:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:04.564 rmmod nvme_tcp 00:07:04.564 rmmod nvme_fabrics 00:07:04.564 rmmod nvme_keyring 00:07:04.564 22:23:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:04.564 22:23:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@124 -- # set -e 00:07:04.564 22:23:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@125 -- # return 0 00:07:04.565 22:23:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:07:04.565 22:23:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:04.565 22:23:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:04.565 22:23:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:04.565 22:23:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:04.565 22:23:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:04.565 22:23:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:04.565 22:23:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:04.565 22:23:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:07.137 22:23:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:07.137 00:07:07.137 real 0m31.149s 00:07:07.137 user 1m33.515s 00:07:07.137 sys 0m6.536s 00:07:07.137 22:23:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:07.137 22:23:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:07:07.137 ************************************ 00:07:07.137 END TEST nvmf_filesystem 00:07:07.137 ************************************ 00:07:07.137 22:23:30 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:07:07.137 22:23:30 nvmf_tcp -- nvmf/nvmf.sh@25 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:07:07.137 22:23:30 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:07.137 22:23:30 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:07.137 22:23:30 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:07.137 ************************************ 00:07:07.137 START TEST nvmf_target_discovery 00:07:07.137 ************************************ 00:07:07.137 22:23:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:07:07.137 * Looking for test storage... 
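The nvmftestfini teardown logged just above (before nvmf_target_discovery begins) follows a tolerant unload-then-flush pattern. A condensed sketch; the break-on-success in the modprobe loop is assumed from the single iteration shown, and ip netns delete approximates what the _remove_spdk_ns helper does.

    # Unload the kernel NVMe-oF modules, tolerating transient failures,
    # then tear down the target namespace and flush the initiator address.
    sync
    set +e
    for i in {1..20}; do
      modprobe -v -r nvme-tcp && break
      sleep 1                        # assumed back-off; the log shows a first-try success
    done
    modprobe -v -r nvme-fabrics
    set -e
    ip netns delete cvl_0_0_ns_spdk  # roughly what _remove_spdk_ns does
    ip -4 addr flush cvl_0_1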
00:07:07.137 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:07.137 22:23:30 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:07.137 22:23:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:07:07.137 22:23:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:07.137 22:23:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:07.137 22:23:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:07.137 22:23:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:07.137 22:23:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:07.137 22:23:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:07.137 22:23:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:07.137 22:23:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:07.137 22:23:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:07.137 22:23:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:07.137 22:23:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:07:07.137 22:23:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:07:07.137 22:23:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:07.137 22:23:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:07.137 22:23:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:07.137 22:23:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:07.137 22:23:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:07.137 22:23:30 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:07.137 22:23:30 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:07.137 22:23:30 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:07.137 22:23:30 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:07.137 22:23:30 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:07.137 22:23:30 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:07.137 22:23:30 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:07:07.137 22:23:30 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:07.137 22:23:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@47 -- # : 0 00:07:07.137 22:23:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:07.137 22:23:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:07.137 22:23:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:07.137 22:23:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:07.137 22:23:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:07.137 22:23:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:07.137 22:23:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:07.137 22:23:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:07.137 22:23:30 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:07:07.137 22:23:30 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:07:07.137 22:23:30 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:07:07.137 22:23:30 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:07:07.138 22:23:30 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:07:07.138 22:23:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:07.138 22:23:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:07.138 22:23:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@448 -- # 
prepare_net_devs 00:07:07.138 22:23:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:07.138 22:23:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:07.138 22:23:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:07.138 22:23:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:07.138 22:23:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:07.138 22:23:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:07.138 22:23:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:07.138 22:23:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:07:07.138 22:23:30 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:12.436 22:23:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:12.437 22:23:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:07:12.437 22:23:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:12.437 22:23:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:12.437 22:23:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:12.437 22:23:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:12.437 22:23:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:12.437 22:23:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:07:12.437 22:23:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:12.437 22:23:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@296 -- # e810=() 00:07:12.437 22:23:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:07:12.437 22:23:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@297 -- # x722=() 00:07:12.437 22:23:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:07:12.437 22:23:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@298 -- # mlx=() 00:07:12.437 22:23:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:07:12.437 22:23:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:12.437 22:23:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:12.437 22:23:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:12.437 22:23:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:12.437 22:23:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:12.437 22:23:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:12.437 22:23:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:12.437 22:23:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:12.437 22:23:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:12.437 22:23:35 
nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:12.437 22:23:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:12.437 22:23:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:12.437 22:23:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:12.437 22:23:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:12.437 22:23:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:12.437 22:23:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:12.437 22:23:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:12.437 22:23:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:12.437 22:23:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:07:12.437 Found 0000:86:00.0 (0x8086 - 0x159b) 00:07:12.437 22:23:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:12.437 22:23:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:12.437 22:23:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:12.437 22:23:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:12.437 22:23:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:12.437 22:23:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:12.437 22:23:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:07:12.437 Found 0000:86:00.1 (0x8086 - 0x159b) 00:07:12.437 22:23:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:12.437 22:23:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:12.437 22:23:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:12.437 22:23:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:12.437 22:23:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:12.437 22:23:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:12.437 22:23:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:12.437 22:23:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:12.437 22:23:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:12.437 22:23:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:12.437 22:23:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:12.437 22:23:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:12.437 22:23:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:12.437 22:23:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:12.437 22:23:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:12.437 22:23:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@400 -- # 
echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:07:12.437 Found net devices under 0000:86:00.0: cvl_0_0 00:07:12.437 22:23:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:12.437 22:23:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:12.437 22:23:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:12.437 22:23:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:12.437 22:23:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:12.437 22:23:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:12.437 22:23:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:12.437 22:23:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:12.437 22:23:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:07:12.437 Found net devices under 0000:86:00.1: cvl_0_1 00:07:12.437 22:23:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:12.437 22:23:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:12.437 22:23:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:07:12.437 22:23:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:12.437 22:23:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:12.437 22:23:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:12.437 22:23:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:12.437 22:23:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:12.437 22:23:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:12.437 22:23:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:12.437 22:23:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:12.437 22:23:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:12.437 22:23:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:12.437 22:23:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:12.437 22:23:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:12.437 22:23:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:12.437 22:23:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:12.437 22:23:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:12.437 22:23:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:12.437 22:23:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:12.437 22:23:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:12.437 22:23:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@258 -- # 
ip link set cvl_0_1 up 00:07:12.437 22:23:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:12.437 22:23:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:12.437 22:23:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:12.437 22:23:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:12.437 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:12.437 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.162 ms 00:07:12.437 00:07:12.437 --- 10.0.0.2 ping statistics --- 00:07:12.437 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:12.437 rtt min/avg/max/mdev = 0.162/0.162/0.162/0.000 ms 00:07:12.437 22:23:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:12.437 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:12.437 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.089 ms 00:07:12.437 00:07:12.438 --- 10.0.0.1 ping statistics --- 00:07:12.438 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:12.438 rtt min/avg/max/mdev = 0.089/0.089/0.089/0.000 ms 00:07:12.438 22:23:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:12.438 22:23:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@422 -- # return 0 00:07:12.438 22:23:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:12.438 22:23:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:12.438 22:23:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:12.438 22:23:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:12.438 22:23:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:12.438 22:23:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:12.438 22:23:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:12.438 22:23:35 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:07:12.438 22:23:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:12.438 22:23:35 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:12.438 22:23:35 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:12.438 22:23:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@481 -- # nvmfpid=4051741 00:07:12.438 22:23:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:12.438 22:23:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@482 -- # waitforlisten 4051741 00:07:12.438 22:23:35 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@829 -- # '[' -z 4051741 ']' 00:07:12.438 22:23:35 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:12.438 22:23:35 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:12.438 22:23:35 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock...' 00:07:12.438 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:12.438 22:23:35 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:12.438 22:23:35 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:12.438 [2024-07-15 22:23:35.881241] Starting SPDK v24.09-pre git sha1 f8598a71f / DPDK 24.03.0 initialization... 00:07:12.438 [2024-07-15 22:23:35.881283] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:12.438 EAL: No free 2048 kB hugepages reported on node 1 00:07:12.438 [2024-07-15 22:23:35.938021] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:12.438 [2024-07-15 22:23:36.010453] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:12.438 [2024-07-15 22:23:36.010507] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:12.438 [2024-07-15 22:23:36.010514] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:12.438 [2024-07-15 22:23:36.010519] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:12.438 [2024-07-15 22:23:36.010524] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:12.438 [2024-07-15 22:23:36.010635] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:12.438 [2024-07-15 22:23:36.010658] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:12.438 [2024-07-15 22:23:36.010729] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:12.438 [2024-07-15 22:23:36.010730] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:13.006 22:23:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:13.006 22:23:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@862 -- # return 0 00:07:13.006 22:23:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:13.006 22:23:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:13.006 22:23:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:13.006 22:23:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:13.006 22:23:36 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:13.006 22:23:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:13.006 22:23:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:13.006 [2024-07-15 22:23:36.725401] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:13.006 22:23:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:13.006 22:23:36 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:07:13.006 22:23:36 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:13.006 22:23:36 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 
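The provisioning that follows in the log (a TCP transport, then four null bdevs each behind its own subsystem with a 4420 listener, plus a discovery listener and a 4430 referral) condenses to the sketch below. A minimal rendering in which scripts/rpc.py from the SPDK tree stands in for rpc_cmd; in the actual run, rpc_cmd issues these calls against the target inside the cvl_0_0_ns_spdk namespace.

    rpc=./scripts/rpc.py   # rpc_cmd in the log wraps this
    $rpc nvmf_create_transport -t tcp -o -u 8192
    for i in $(seq 1 4); do
      $rpc bdev_null_create "Null$i" 102400 512   # NULL_BDEV_SIZE / NULL_BLOCK_SIZE from discovery.sh
      $rpc nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDK0000000000000$i"
      $rpc nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Null$i"
      $rpc nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" -t tcp -a 10.0.0.2 -s 4420
    done
    # Expose the discovery service itself and add a referral on port 4430.
    $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430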
00:07:13.006 22:23:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:13.006 22:23:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:13.006 Null1 00:07:13.006 22:23:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:13.006 22:23:36 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:13.006 22:23:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:13.006 22:23:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:13.006 22:23:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:13.006 22:23:36 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:07:13.006 22:23:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:13.006 22:23:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:13.006 22:23:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:13.006 22:23:36 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:13.007 22:23:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:13.007 22:23:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:13.007 [2024-07-15 22:23:36.770926] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:13.007 22:23:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:13.007 22:23:36 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:13.007 22:23:36 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:07:13.007 22:23:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:13.007 22:23:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:13.007 Null2 00:07:13.007 22:23:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:13.007 22:23:36 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:07:13.007 22:23:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:13.007 22:23:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:13.007 22:23:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:13.007 22:23:36 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:07:13.007 22:23:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:13.007 22:23:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:13.007 22:23:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:13.007 22:23:36 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:07:13.007 22:23:36 
nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:13.007 22:23:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:13.007 22:23:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:13.007 22:23:36 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:13.007 22:23:36 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:07:13.007 22:23:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:13.007 22:23:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:13.007 Null3 00:07:13.007 22:23:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:13.007 22:23:36 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:07:13.007 22:23:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:13.007 22:23:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:13.007 22:23:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:13.007 22:23:36 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:07:13.007 22:23:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:13.007 22:23:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:13.007 22:23:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:13.007 22:23:36 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:07:13.007 22:23:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:13.007 22:23:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:13.007 22:23:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:13.007 22:23:36 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:13.007 22:23:36 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:07:13.007 22:23:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:13.007 22:23:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:13.007 Null4 00:07:13.007 22:23:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:13.007 22:23:36 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:07:13.007 22:23:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:13.007 22:23:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:13.007 22:23:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:13.007 22:23:36 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:07:13.007 22:23:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:13.007 22:23:36 
nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:13.007 22:23:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:13.007 22:23:36 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:07:13.007 22:23:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:13.007 22:23:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:13.007 22:23:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:13.007 22:23:36 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:13.007 22:23:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:13.007 22:23:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:13.007 22:23:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:13.007 22:23:36 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:07:13.007 22:23:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:13.007 22:23:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:13.007 22:23:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:13.007 22:23:36 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 4420 00:07:13.267 00:07:13.267 Discovery Log Number of Records 6, Generation counter 6 00:07:13.267 =====Discovery Log Entry 0====== 00:07:13.267 trtype: tcp 00:07:13.267 adrfam: ipv4 00:07:13.267 subtype: current discovery subsystem 00:07:13.267 treq: not required 00:07:13.267 portid: 0 00:07:13.267 trsvcid: 4420 00:07:13.267 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:07:13.267 traddr: 10.0.0.2 00:07:13.267 eflags: explicit discovery connections, duplicate discovery information 00:07:13.267 sectype: none 00:07:13.267 =====Discovery Log Entry 1====== 00:07:13.267 trtype: tcp 00:07:13.267 adrfam: ipv4 00:07:13.267 subtype: nvme subsystem 00:07:13.267 treq: not required 00:07:13.267 portid: 0 00:07:13.267 trsvcid: 4420 00:07:13.267 subnqn: nqn.2016-06.io.spdk:cnode1 00:07:13.267 traddr: 10.0.0.2 00:07:13.267 eflags: none 00:07:13.267 sectype: none 00:07:13.267 =====Discovery Log Entry 2====== 00:07:13.267 trtype: tcp 00:07:13.267 adrfam: ipv4 00:07:13.267 subtype: nvme subsystem 00:07:13.267 treq: not required 00:07:13.267 portid: 0 00:07:13.267 trsvcid: 4420 00:07:13.267 subnqn: nqn.2016-06.io.spdk:cnode2 00:07:13.267 traddr: 10.0.0.2 00:07:13.267 eflags: none 00:07:13.267 sectype: none 00:07:13.267 =====Discovery Log Entry 3====== 00:07:13.267 trtype: tcp 00:07:13.267 adrfam: ipv4 00:07:13.267 subtype: nvme subsystem 00:07:13.267 treq: not required 00:07:13.267 portid: 0 00:07:13.267 trsvcid: 4420 00:07:13.267 subnqn: nqn.2016-06.io.spdk:cnode3 00:07:13.267 traddr: 10.0.0.2 00:07:13.267 eflags: none 00:07:13.267 sectype: none 00:07:13.267 =====Discovery Log Entry 4====== 00:07:13.267 trtype: tcp 00:07:13.267 adrfam: ipv4 00:07:13.267 subtype: nvme subsystem 00:07:13.267 treq: not required 
00:07:13.267 portid: 0 00:07:13.267 trsvcid: 4420 00:07:13.267 subnqn: nqn.2016-06.io.spdk:cnode4 00:07:13.267 traddr: 10.0.0.2 00:07:13.267 eflags: none 00:07:13.267 sectype: none 00:07:13.267 =====Discovery Log Entry 5====== 00:07:13.267 trtype: tcp 00:07:13.267 adrfam: ipv4 00:07:13.267 subtype: discovery subsystem referral 00:07:13.267 treq: not required 00:07:13.267 portid: 0 00:07:13.267 trsvcid: 4430 00:07:13.267 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:07:13.267 traddr: 10.0.0.2 00:07:13.267 eflags: none 00:07:13.267 sectype: none 00:07:13.267 22:23:37 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:07:13.267 Perform nvmf subsystem discovery via RPC 00:07:13.267 22:23:37 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:07:13.267 22:23:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:13.267 22:23:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:13.268 [ 00:07:13.268 { 00:07:13.268 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:07:13.268 "subtype": "Discovery", 00:07:13.268 "listen_addresses": [ 00:07:13.268 { 00:07:13.268 "trtype": "TCP", 00:07:13.268 "adrfam": "IPv4", 00:07:13.268 "traddr": "10.0.0.2", 00:07:13.268 "trsvcid": "4420" 00:07:13.268 } 00:07:13.268 ], 00:07:13.268 "allow_any_host": true, 00:07:13.268 "hosts": [] 00:07:13.268 }, 00:07:13.268 { 00:07:13.268 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:07:13.268 "subtype": "NVMe", 00:07:13.268 "listen_addresses": [ 00:07:13.268 { 00:07:13.268 "trtype": "TCP", 00:07:13.268 "adrfam": "IPv4", 00:07:13.268 "traddr": "10.0.0.2", 00:07:13.268 "trsvcid": "4420" 00:07:13.268 } 00:07:13.268 ], 00:07:13.268 "allow_any_host": true, 00:07:13.268 "hosts": [], 00:07:13.268 "serial_number": "SPDK00000000000001", 00:07:13.268 "model_number": "SPDK bdev Controller", 00:07:13.268 "max_namespaces": 32, 00:07:13.268 "min_cntlid": 1, 00:07:13.268 "max_cntlid": 65519, 00:07:13.268 "namespaces": [ 00:07:13.268 { 00:07:13.268 "nsid": 1, 00:07:13.268 "bdev_name": "Null1", 00:07:13.268 "name": "Null1", 00:07:13.268 "nguid": "9F805ACC90FE4977BB05273BC1ABE661", 00:07:13.268 "uuid": "9f805acc-90fe-4977-bb05-273bc1abe661" 00:07:13.268 } 00:07:13.268 ] 00:07:13.268 }, 00:07:13.268 { 00:07:13.268 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:07:13.268 "subtype": "NVMe", 00:07:13.268 "listen_addresses": [ 00:07:13.268 { 00:07:13.268 "trtype": "TCP", 00:07:13.268 "adrfam": "IPv4", 00:07:13.268 "traddr": "10.0.0.2", 00:07:13.268 "trsvcid": "4420" 00:07:13.268 } 00:07:13.268 ], 00:07:13.268 "allow_any_host": true, 00:07:13.268 "hosts": [], 00:07:13.268 "serial_number": "SPDK00000000000002", 00:07:13.268 "model_number": "SPDK bdev Controller", 00:07:13.268 "max_namespaces": 32, 00:07:13.268 "min_cntlid": 1, 00:07:13.268 "max_cntlid": 65519, 00:07:13.268 "namespaces": [ 00:07:13.268 { 00:07:13.268 "nsid": 1, 00:07:13.268 "bdev_name": "Null2", 00:07:13.268 "name": "Null2", 00:07:13.268 "nguid": "F15937DE03D54B16B8AD90CE45A4506F", 00:07:13.268 "uuid": "f15937de-03d5-4b16-b8ad-90ce45a4506f" 00:07:13.268 } 00:07:13.268 ] 00:07:13.268 }, 00:07:13.268 { 00:07:13.268 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:07:13.268 "subtype": "NVMe", 00:07:13.268 "listen_addresses": [ 00:07:13.268 { 00:07:13.268 "trtype": "TCP", 00:07:13.268 "adrfam": "IPv4", 00:07:13.268 "traddr": "10.0.0.2", 00:07:13.268 "trsvcid": "4420" 00:07:13.268 } 00:07:13.268 ], 00:07:13.268 "allow_any_host": true, 
00:07:13.268 "hosts": [], 00:07:13.268 "serial_number": "SPDK00000000000003", 00:07:13.268 "model_number": "SPDK bdev Controller", 00:07:13.268 "max_namespaces": 32, 00:07:13.268 "min_cntlid": 1, 00:07:13.268 "max_cntlid": 65519, 00:07:13.268 "namespaces": [ 00:07:13.268 { 00:07:13.268 "nsid": 1, 00:07:13.268 "bdev_name": "Null3", 00:07:13.268 "name": "Null3", 00:07:13.268 "nguid": "29C90B78EDB340FFBFF4B8E9BE0586CB", 00:07:13.268 "uuid": "29c90b78-edb3-40ff-bff4-b8e9be0586cb" 00:07:13.268 } 00:07:13.268 ] 00:07:13.268 }, 00:07:13.268 { 00:07:13.268 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:07:13.268 "subtype": "NVMe", 00:07:13.268 "listen_addresses": [ 00:07:13.268 { 00:07:13.268 "trtype": "TCP", 00:07:13.268 "adrfam": "IPv4", 00:07:13.268 "traddr": "10.0.0.2", 00:07:13.268 "trsvcid": "4420" 00:07:13.268 } 00:07:13.268 ], 00:07:13.268 "allow_any_host": true, 00:07:13.268 "hosts": [], 00:07:13.268 "serial_number": "SPDK00000000000004", 00:07:13.268 "model_number": "SPDK bdev Controller", 00:07:13.268 "max_namespaces": 32, 00:07:13.268 "min_cntlid": 1, 00:07:13.268 "max_cntlid": 65519, 00:07:13.268 "namespaces": [ 00:07:13.268 { 00:07:13.268 "nsid": 1, 00:07:13.268 "bdev_name": "Null4", 00:07:13.268 "name": "Null4", 00:07:13.268 "nguid": "A84BC383C7224ACEB84A36F33378D13B", 00:07:13.268 "uuid": "a84bc383-c722-4ace-b84a-36f33378d13b" 00:07:13.268 } 00:07:13.268 ] 00:07:13.268 } 00:07:13.268 ] 00:07:13.268 22:23:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:13.268 22:23:37 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:07:13.268 22:23:37 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:07:13.268 22:23:37 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:13.268 22:23:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:13.268 22:23:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:13.268 22:23:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:13.268 22:23:37 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:07:13.268 22:23:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:13.268 22:23:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:13.268 22:23:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:13.268 22:23:37 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:07:13.268 22:23:37 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:07:13.268 22:23:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:13.268 22:23:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:13.268 22:23:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:13.268 22:23:37 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:07:13.268 22:23:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:13.268 22:23:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:13.268 22:23:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:07:13.268 22:23:37 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:07:13.268 22:23:37 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:07:13.268 22:23:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:13.268 22:23:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:13.268 22:23:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:13.268 22:23:37 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:07:13.268 22:23:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:13.268 22:23:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:13.268 22:23:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:13.268 22:23:37 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:07:13.268 22:23:37 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:07:13.268 22:23:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:13.268 22:23:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:13.268 22:23:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:13.268 22:23:37 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:07:13.268 22:23:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:13.268 22:23:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:13.268 22:23:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:13.268 22:23:37 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:07:13.268 22:23:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:13.268 22:23:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:13.268 22:23:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:13.268 22:23:37 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:07:13.268 22:23:37 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:07:13.269 22:23:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:13.269 22:23:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:13.269 22:23:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:13.528 22:23:37 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:07:13.528 22:23:37 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:07:13.528 22:23:37 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:07:13.528 22:23:37 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:07:13.528 22:23:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:13.528 22:23:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@117 -- # sync 00:07:13.528 22:23:37 nvmf_tcp.nvmf_target_discovery 
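Teardown above is the mirror image of setup: each subsystem is deleted before its backing null bdev, the port-4430 referral is dropped, and bdev_get_bdevs piped through jq -r '.[].name' is expected to come back empty (hence check_bdevs= above). Condensed:

  # mirror-image teardown; the final check should print nothing
  for i in 1 2 3 4; do
      scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode$i
      scripts/rpc.py bdev_null_delete Null$i
  done
  scripts/rpc.py nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430
  scripts/rpc.py bdev_get_bdevs | jq -r '.[].name'    # expect empty output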
-- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:13.528 22:23:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@120 -- # set +e 00:07:13.528 22:23:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:13.528 22:23:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:13.528 rmmod nvme_tcp 00:07:13.528 rmmod nvme_fabrics 00:07:13.528 rmmod nvme_keyring 00:07:13.528 22:23:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:13.528 22:23:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@124 -- # set -e 00:07:13.528 22:23:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@125 -- # return 0 00:07:13.528 22:23:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@489 -- # '[' -n 4051741 ']' 00:07:13.528 22:23:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@490 -- # killprocess 4051741 00:07:13.528 22:23:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@948 -- # '[' -z 4051741 ']' 00:07:13.528 22:23:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@952 -- # kill -0 4051741 00:07:13.528 22:23:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@953 -- # uname 00:07:13.528 22:23:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:13.528 22:23:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4051741 00:07:13.528 22:23:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:13.528 22:23:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:13.528 22:23:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4051741' 00:07:13.528 killing process with pid 4051741 00:07:13.528 22:23:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@967 -- # kill 4051741 00:07:13.528 22:23:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@972 -- # wait 4051741 00:07:13.787 22:23:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:13.787 22:23:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:13.787 22:23:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:13.787 22:23:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:13.787 22:23:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:13.787 22:23:37 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:13.787 22:23:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:13.787 22:23:37 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:15.797 22:23:39 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:15.797 00:07:15.797 real 0m9.079s 00:07:15.797 user 0m7.718s 00:07:15.797 sys 0m4.211s 00:07:15.797 22:23:39 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:15.797 22:23:39 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:15.797 ************************************ 00:07:15.797 END TEST nvmf_target_discovery 00:07:15.797 ************************************ 00:07:15.797 22:23:39 nvmf_tcp -- common/autotest_common.sh@1142 
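The killprocess helper traced above guards every step: it rejects an empty PID, probes liveness with kill -0, reads the process's comm so it never signals a sudo wrapper, then kills and waits for the reactor to exit. The pattern, condensed (the PID is this run's example value):

  # liveness-check-then-kill, as killprocess does above
  pid=4051741
  [ -n "$pid" ] || exit 1
  if kill -0 "$pid" 2>/dev/null; then              # process still alive?
      comm=$(ps --no-headers -o comm= "$pid")      # e.g. reactor_0
      [ "$comm" != sudo ] && kill "$pid"
      wait "$pid" 2>/dev/null || true              # reap if it is our child
  fi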
-- # return 0 00:07:15.797 22:23:39 nvmf_tcp -- nvmf/nvmf.sh@26 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:07:15.797 22:23:39 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:15.797 22:23:39 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:15.797 22:23:39 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:15.797 ************************************ 00:07:15.797 START TEST nvmf_referrals 00:07:15.797 ************************************ 00:07:15.797 22:23:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:07:15.797 * Looking for test storage... 00:07:15.798 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:16.058 22:23:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:16.058 22:23:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:07:16.058 22:23:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:16.058 22:23:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:16.058 22:23:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:16.058 22:23:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:16.058 22:23:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:16.058 22:23:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:16.058 22:23:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:16.058 22:23:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:16.058 22:23:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:16.058 22:23:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:16.058 22:23:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:07:16.058 22:23:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:07:16.058 22:23:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:16.058 22:23:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:16.058 22:23:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:16.058 22:23:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:16.058 22:23:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:16.058 22:23:39 nvmf_tcp.nvmf_referrals -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:16.058 22:23:39 nvmf_tcp.nvmf_referrals -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:16.058 22:23:39 nvmf_tcp.nvmf_referrals -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:16.058 22:23:39 nvmf_tcp.nvmf_referrals -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:16.058 22:23:39 nvmf_tcp.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:16.058 22:23:39 nvmf_tcp.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:16.058 22:23:39 nvmf_tcp.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:07:16.058 22:23:39 nvmf_tcp.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:16.058 22:23:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@47 -- # : 0 00:07:16.058 22:23:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:16.058 22:23:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:16.058 22:23:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:16.058 22:23:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:16.058 22:23:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:16.058 22:23:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:16.058 22:23:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:16.058 22:23:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:16.058 22:23:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:07:16.058 22:23:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:07:16.058 22:23:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 
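referrals.sh pins its fixtures up front: three referral endpoints on 127.0.0.2 through 127.0.0.4, service 4430, the well-known discovery NQN, and cnode1. The referred addresses never have to answer; only the referral records themselves are under test. The add side, as driven further down, looks roughly like:

  # register three discovery referrals; the addresses need not be reachable
  for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
      scripts/rpc.py nvmf_discovery_add_referral -t tcp -a "$ip" -s 4430
  done
  scripts/rpc.py nvmf_discovery_get_referrals | jq length    # expect 3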
00:07:16.058 22:23:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:07:16.058 22:23:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:07:16.058 22:23:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:07:16.058 22:23:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:07:16.058 22:23:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:16.058 22:23:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:16.058 22:23:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:16.058 22:23:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:16.058 22:23:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:16.058 22:23:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:16.058 22:23:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:16.058 22:23:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:16.058 22:23:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:16.058 22:23:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:16.058 22:23:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@285 -- # xtrace_disable 00:07:16.058 22:23:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:21.339 22:23:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:21.339 22:23:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@291 -- # pci_devs=() 00:07:21.339 22:23:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:21.339 22:23:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:21.339 22:23:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:21.339 22:23:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:21.339 22:23:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:21.339 22:23:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@295 -- # net_devs=() 00:07:21.339 22:23:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:21.339 22:23:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@296 -- # e810=() 00:07:21.339 22:23:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@296 -- # local -ga e810 00:07:21.339 22:23:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@297 -- # x722=() 00:07:21.339 22:23:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@297 -- # local -ga x722 00:07:21.339 22:23:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@298 -- # mlx=() 00:07:21.339 22:23:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@298 -- # local -ga mlx 00:07:21.339 22:23:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:21.339 22:23:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:21.339 22:23:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:21.339 22:23:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:21.339 22:23:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:21.339 22:23:44 
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:21.339 22:23:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:21.339 22:23:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:21.339 22:23:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:21.339 22:23:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:21.339 22:23:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:21.339 22:23:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:21.339 22:23:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:21.339 22:23:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:21.339 22:23:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:21.339 22:23:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:21.339 22:23:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:21.339 22:23:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:21.339 22:23:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:07:21.339 Found 0000:86:00.0 (0x8086 - 0x159b) 00:07:21.339 22:23:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:21.339 22:23:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:21.339 22:23:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:21.339 22:23:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:21.339 22:23:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:21.339 22:23:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:21.339 22:23:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:07:21.339 Found 0000:86:00.1 (0x8086 - 0x159b) 00:07:21.339 22:23:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:21.339 22:23:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:21.339 22:23:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:21.339 22:23:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:21.339 22:23:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:21.339 22:23:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:21.339 22:23:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:21.339 22:23:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:21.339 22:23:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:21.339 22:23:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:21.339 22:23:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:21.339 22:23:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:21.339 22:23:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:21.339 22:23:44 
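gather_supported_nvmf_pci_devs buckets NICs by PCI vendor:device pair (Intel 0x1592/0x159b for E810, 0x37d2 for X722, the Mellanox IDs for the mlx list); both 0000:86:00.x functions here match the E810 0x159b ID, and each function's kernel netdev is then read out of sysfs. The same lookup by hand, assuming standard sysfs paths:

  # map one PCI function to its vendor/device IDs and kernel net devices
  pci=0000:86:00.0
  cat /sys/bus/pci/devices/$pci/vendor /sys/bus/pci/devices/$pci/device
                                        # 0x8086 / 0x159b on this machine
  ls /sys/bus/pci/devices/$pci/net/     # prints cvl_0_0 here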
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:21.339 22:23:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:21.339 22:23:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:07:21.339 Found net devices under 0000:86:00.0: cvl_0_0 00:07:21.339 22:23:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:21.339 22:23:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:21.339 22:23:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:21.339 22:23:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:21.339 22:23:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:21.339 22:23:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:21.339 22:23:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:21.339 22:23:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:21.339 22:23:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:07:21.339 Found net devices under 0000:86:00.1: cvl_0_1 00:07:21.339 22:23:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:21.339 22:23:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:21.339 22:23:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # is_hw=yes 00:07:21.339 22:23:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:21.339 22:23:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:21.339 22:23:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:21.339 22:23:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:21.339 22:23:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:21.339 22:23:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:21.339 22:23:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:21.339 22:23:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:21.339 22:23:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:21.339 22:23:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:21.339 22:23:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:21.339 22:23:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:21.339 22:23:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:21.339 22:23:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:21.339 22:23:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:21.339 22:23:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:21.339 22:23:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:21.339 22:23:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:21.339 22:23:44 
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:21.339 22:23:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:21.339 22:23:44 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:21.339 22:23:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:21.339 22:23:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:21.339 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:21.339 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.194 ms 00:07:21.339 00:07:21.339 --- 10.0.0.2 ping statistics --- 00:07:21.339 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:21.339 rtt min/avg/max/mdev = 0.194/0.194/0.194/0.000 ms 00:07:21.339 22:23:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:21.339 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:21.339 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.277 ms 00:07:21.339 00:07:21.339 --- 10.0.0.1 ping statistics --- 00:07:21.339 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:21.339 rtt min/avg/max/mdev = 0.277/0.277/0.277/0.000 ms 00:07:21.339 22:23:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:21.339 22:23:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@422 -- # return 0 00:07:21.339 22:23:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:21.339 22:23:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:21.339 22:23:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:21.339 22:23:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:21.339 22:23:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:21.339 22:23:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:21.339 22:23:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:21.339 22:23:45 nvmf_tcp.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:07:21.339 22:23:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:21.339 22:23:45 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:21.339 22:23:45 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:21.339 22:23:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@481 -- # nvmfpid=4055366 00:07:21.339 22:23:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@482 -- # waitforlisten 4055366 00:07:21.339 22:23:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:21.339 22:23:45 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@829 -- # '[' -z 4055366 ']' 00:07:21.339 22:23:45 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:21.340 22:23:45 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:21.340 22:23:45 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
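nvmf_tcp_init, traced above, turns the two E810 ports into a point-to-point rig: cvl_0_0 moves into namespace cvl_0_0_ns_spdk as the target side (10.0.0.2), cvl_0_1 stays in the root namespace as the initiator (10.0.0.1), an iptables rule admits TCP/4420 from the initiator interface, and one ping in each direction proves the path before nvmf_tgt is launched inside the namespace. Reconstructed with the same names:

  # two-port loopback rig: target NIC in a netns, initiator NIC in the root ns
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                         # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                          # initiator to target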
00:07:21.340 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:21.340 22:23:45 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:21.340 22:23:45 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:21.340 [2024-07-15 22:23:45.111817] Starting SPDK v24.09-pre git sha1 f8598a71f / DPDK 24.03.0 initialization... 00:07:21.340 [2024-07-15 22:23:45.111861] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:21.340 EAL: No free 2048 kB hugepages reported on node 1 00:07:21.340 [2024-07-15 22:23:45.171110] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:21.340 [2024-07-15 22:23:45.251885] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:21.340 [2024-07-15 22:23:45.251921] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:21.340 [2024-07-15 22:23:45.251928] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:21.340 [2024-07-15 22:23:45.251934] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:21.340 [2024-07-15 22:23:45.251939] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:21.340 [2024-07-15 22:23:45.251983] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:21.340 [2024-07-15 22:23:45.252014] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:21.340 [2024-07-15 22:23:45.252102] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:21.340 [2024-07-15 22:23:45.252103] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:22.276 22:23:45 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:22.276 22:23:45 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@862 -- # return 0 00:07:22.276 22:23:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:22.276 22:23:45 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:22.276 22:23:45 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:22.277 22:23:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:22.277 22:23:45 nvmf_tcp.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:22.277 22:23:45 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:22.277 22:23:45 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:22.277 [2024-07-15 22:23:45.968259] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:22.277 22:23:45 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:22.277 22:23:45 nvmf_tcp.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:07:22.277 22:23:45 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:22.277 22:23:45 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:22.277 [2024-07-15 22:23:45.981639] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 8009 *** 00:07:22.277 22:23:45 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:22.277 22:23:45 nvmf_tcp.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:07:22.277 22:23:45 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:22.277 22:23:45 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:22.277 22:23:45 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:22.277 22:23:45 nvmf_tcp.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:07:22.277 22:23:45 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:22.277 22:23:45 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:22.277 22:23:46 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:22.277 22:23:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:07:22.277 22:23:46 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:22.277 22:23:46 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:22.277 22:23:46 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:22.277 22:23:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:22.277 22:23:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:07:22.277 22:23:46 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:22.277 22:23:46 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:22.277 22:23:46 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:22.277 22:23:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:07:22.277 22:23:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:07:22.277 22:23:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:07:22.277 22:23:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:22.277 22:23:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:07:22.277 22:23:46 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:22.277 22:23:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:07:22.277 22:23:46 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:22.277 22:23:46 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:22.277 22:23:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:07:22.277 22:23:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:07:22.277 22:23:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:07:22.277 22:23:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:22.277 22:23:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:22.277 22:23:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 
--hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:22.277 22:23:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:22.277 22:23:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:07:22.556 22:23:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:07:22.557 22:23:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:07:22.557 22:23:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:07:22.557 22:23:46 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:22.557 22:23:46 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:22.557 22:23:46 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:22.557 22:23:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:07:22.557 22:23:46 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:22.557 22:23:46 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:22.557 22:23:46 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:22.557 22:23:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:07:22.557 22:23:46 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:22.557 22:23:46 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:22.557 22:23:46 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:22.557 22:23:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:22.557 22:23:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:07:22.557 22:23:46 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:22.557 22:23:46 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:22.557 22:23:46 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:22.557 22:23:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:07:22.557 22:23:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:07:22.557 22:23:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:22.557 22:23:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:22.557 22:23:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:22.557 22:23:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:22.557 22:23:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:07:22.557 22:23:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:07:22.557 22:23:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:07:22.557 22:23:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 
127.0.0.2 -s 4430 -n discovery 00:07:22.557 22:23:46 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:22.557 22:23:46 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:22.557 22:23:46 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:22.557 22:23:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:07:22.557 22:23:46 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:22.557 22:23:46 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:22.557 22:23:46 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:22.557 22:23:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:07:22.557 22:23:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:07:22.557 22:23:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:22.557 22:23:46 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:22.557 22:23:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:07:22.557 22:23:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:07:22.557 22:23:46 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:22.557 22:23:46 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:22.815 22:23:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:07:22.815 22:23:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:07:22.815 22:23:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:07:22.815 22:23:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:22.815 22:23:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:22.815 22:23:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:22.815 22:23:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:22.815 22:23:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:07:22.815 22:23:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:07:22.815 22:23:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:07:22.815 22:23:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:07:22.815 22:23:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:07:22.815 22:23:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:07:22.815 22:23:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:22.815 22:23:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:07:23.073 22:23:46 
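A referral can optionally name the subsystem it points at: -n discovery is treated by the target as the well-known nqn.2014-08.org.nvmexpress.discovery (the later subnqn check matches exactly that), while -n with a subsystem NQN advertises that specific subsystem at the referred address. That is why two referrals to 127.0.0.2 coexist above and get_referral_ips prints the address twice. Roughly:

  # same address twice, distinguished by the referred subsystem NQN
  scripts/rpc.py nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery
  scripts/rpc.py nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 \
      -n nqn.2016-06.io.spdk:cnode1
  scripts/rpc.py nvmf_discovery_get_referrals \
    | jq -r '.[].address.traddr' | sort            # 127.0.0.2, 127.0.0.2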
nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:07:23.073 22:23:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:07:23.073 22:23:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:07:23.073 22:23:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:07:23.073 22:23:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:23.073 22:23:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:07:23.073 22:23:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:07:23.073 22:23:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:07:23.073 22:23:46 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:23.073 22:23:46 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:23.073 22:23:46 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:23.073 22:23:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:07:23.073 22:23:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:07:23.073 22:23:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:23.073 22:23:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:07:23.073 22:23:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:07:23.073 22:23:46 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:23.073 22:23:46 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:23.073 22:23:46 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:23.073 22:23:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:07:23.073 22:23:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:07:23.073 22:23:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:07:23.073 22:23:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:23.073 22:23:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:23.073 22:23:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:23.073 22:23:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:23.073 22:23:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:07:23.073 22:23:47 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:07:23.073 22:23:47 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:07:23.073 22:23:47 
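get_discovery_entries filters the JSON form of the discovery log by record subtype so each assertion can look at one class of entry at a time, while get_referral_ips drops the "current discovery subsystem" record before sorting addresses. The two jq shapes in use (host NQN/ID flags elided here):

  # one record class from the JSON discovery log
  nvme discover -t tcp -a 10.0.0.2 -s 8009 -o json \
    | jq '.records[] | select(.subtype == "nvme subsystem")'
  # every referred address except the discovery service itself
  nvme discover -t tcp -a 10.0.0.2 -s 8009 -o json \
    | jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' \
    | sort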
nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:07:23.073 22:23:47 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:07:23.073 22:23:47 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:07:23.073 22:23:47 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:23.073 22:23:47 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:07:23.331 22:23:47 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:07:23.331 22:23:47 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:07:23.331 22:23:47 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:07:23.331 22:23:47 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:07:23.332 22:23:47 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:23.332 22:23:47 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:07:23.332 22:23:47 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:07:23.332 22:23:47 nvmf_tcp.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:07:23.332 22:23:47 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:23.332 22:23:47 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:23.332 22:23:47 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:23.332 22:23:47 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:23.332 22:23:47 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:07:23.332 22:23:47 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:23.332 22:23:47 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:23.589 22:23:47 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:23.589 22:23:47 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:07:23.589 22:23:47 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:07:23.589 22:23:47 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:23.589 22:23:47 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:23.589 22:23:47 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:23.589 22:23:47 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:23.590 22:23:47 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:07:23.590 
22:23:47 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:07:23.590 22:23:47 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:07:23.590 22:23:47 nvmf_tcp.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:07:23.590 22:23:47 nvmf_tcp.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:07:23.590 22:23:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:23.590 22:23:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@117 -- # sync 00:07:23.590 22:23:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:23.590 22:23:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@120 -- # set +e 00:07:23.590 22:23:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:23.590 22:23:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:23.590 rmmod nvme_tcp 00:07:23.590 rmmod nvme_fabrics 00:07:23.590 rmmod nvme_keyring 00:07:23.590 22:23:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:23.590 22:23:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@124 -- # set -e 00:07:23.590 22:23:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@125 -- # return 0 00:07:23.590 22:23:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@489 -- # '[' -n 4055366 ']' 00:07:23.590 22:23:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@490 -- # killprocess 4055366 00:07:23.590 22:23:47 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@948 -- # '[' -z 4055366 ']' 00:07:23.590 22:23:47 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@952 -- # kill -0 4055366 00:07:23.590 22:23:47 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@953 -- # uname 00:07:23.590 22:23:47 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:23.590 22:23:47 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4055366 00:07:23.590 22:23:47 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:23.590 22:23:47 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:23.590 22:23:47 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4055366' 00:07:23.590 killing process with pid 4055366 00:07:23.590 22:23:47 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@967 -- # kill 4055366 00:07:23.590 22:23:47 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@972 -- # wait 4055366 00:07:23.847 22:23:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:23.847 22:23:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:23.847 22:23:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:23.847 22:23:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:23.847 22:23:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:23.847 22:23:47 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:23.847 22:23:47 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:23.847 22:23:47 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:26.377 22:23:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:26.377 00:07:26.377 real 0m10.091s 00:07:26.377 user 0m12.204s 00:07:26.377 sys 0m4.599s 00:07:26.377 22:23:49 
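nvmftestfini then unwinds nvmftestinit: the initiator modules come out in reverse order (the rmmod lines above are modprobe's verbose output), the target process is killed as before, the namespace is torn down, and the stale initiator address is flushed. Approximately, assuming _remove_spdk_ns boils down to deleting the namespace:

  # unwind of the test network setup
  modprobe -v -r nvme-tcp
  modprobe -v -r nvme-fabrics
  ip netns delete cvl_0_0_ns_spdk 2>/dev/null    # assumed body of _remove_spdk_ns
  ip -4 addr flush cvl_0_1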
nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:26.377 22:23:49 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:26.377 ************************************ 00:07:26.377 END TEST nvmf_referrals 00:07:26.377 ************************************ 00:07:26.377 22:23:49 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:07:26.377 22:23:49 nvmf_tcp -- nvmf/nvmf.sh@27 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:07:26.377 22:23:49 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:26.377 22:23:49 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:26.377 22:23:49 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:26.377 ************************************ 00:07:26.377 START TEST nvmf_connect_disconnect 00:07:26.377 ************************************ 00:07:26.377 22:23:49 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:07:26.377 * Looking for test storage... 00:07:26.377 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:26.377 22:23:49 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:26.377 22:23:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:07:26.377 22:23:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:26.377 22:23:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:26.377 22:23:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:26.377 22:23:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:26.377 22:23:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:26.377 22:23:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:26.377 22:23:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:26.377 22:23:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:26.377 22:23:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:26.377 22:23:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:26.377 22:23:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:07:26.377 22:23:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:07:26.377 22:23:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:26.377 22:23:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:26.377 22:23:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:26.377 22:23:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:26.377 22:23:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:26.377 22:23:49 
nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:26.377 22:23:49 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:26.377 22:23:49 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:26.377 22:23:49 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:26.377 22:23:49 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:26.377 22:23:49 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:26.377 22:23:49 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:07:26.377 22:23:49 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:26.377 22:23:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@47 -- # : 0 00:07:26.377 22:23:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:26.377 22:23:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:26.377 22:23:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:26.377 22:23:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:26.377 22:23:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:26.377 22:23:49 nvmf_tcp.nvmf_connect_disconnect -- 
nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:26.377 22:23:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:26.377 22:23:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:26.377 22:23:49 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:26.377 22:23:49 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:26.377 22:23:49 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:07:26.377 22:23:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:26.377 22:23:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:26.377 22:23:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:26.377 22:23:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:26.377 22:23:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:26.377 22:23:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:26.377 22:23:49 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:26.377 22:23:49 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:26.377 22:23:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:26.377 22:23:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:26.377 22:23:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:07:26.377 22:23:49 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:31.680 22:23:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:31.680 22:23:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:07:31.680 22:23:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:31.680 22:23:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:31.680 22:23:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:31.680 22:23:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:31.680 22:23:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:31.680 22:23:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:07:31.681 22:23:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:31.681 22:23:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # e810=() 00:07:31.681 22:23:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:07:31.681 22:23:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # x722=() 00:07:31.681 22:23:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:07:31.681 22:23:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:07:31.681 22:23:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:07:31.681 22:23:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:31.681 22:23:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:31.681 22:23:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:31.681 22:23:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:31.681 22:23:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:31.681 22:23:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:31.681 22:23:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:31.681 22:23:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:31.681 22:23:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:31.681 22:23:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:31.681 22:23:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:31.681 22:23:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:31.681 22:23:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:31.681 22:23:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:31.681 22:23:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:31.681 22:23:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:31.681 22:23:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:31.681 22:23:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:31.681 22:23:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:07:31.681 Found 0000:86:00.0 (0x8086 - 0x159b) 00:07:31.681 22:23:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:31.681 22:23:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:31.681 22:23:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:31.681 22:23:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:31.681 22:23:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:31.681 22:23:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:31.681 22:23:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:07:31.681 Found 0000:86:00.1 (0x8086 - 0x159b) 00:07:31.681 22:23:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:31.681 22:23:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:31.681 22:23:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:31.681 22:23:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:31.681 22:23:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:31.681 22:23:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:31.681 22:23:55 
nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:31.681 22:23:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:31.681 22:23:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:31.681 22:23:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:31.681 22:23:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:31.681 22:23:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:31.681 22:23:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:31.681 22:23:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:31.681 22:23:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:31.681 22:23:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:07:31.681 Found net devices under 0000:86:00.0: cvl_0_0 00:07:31.681 22:23:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:31.681 22:23:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:31.681 22:23:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:31.681 22:23:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:31.681 22:23:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:31.681 22:23:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:31.681 22:23:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:31.681 22:23:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:31.681 22:23:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:07:31.681 Found net devices under 0000:86:00.1: cvl_0_1 00:07:31.681 22:23:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:31.681 22:23:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:31.681 22:23:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:07:31.681 22:23:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:31.681 22:23:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:31.681 22:23:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:31.681 22:23:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:31.681 22:23:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:31.681 22:23:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:31.681 22:23:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:31.681 22:23:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:31.681 22:23:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:31.681 22:23:55 nvmf_tcp.nvmf_connect_disconnect -- 
nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:31.681 22:23:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:31.681 22:23:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:31.681 22:23:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:31.681 22:23:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:31.681 22:23:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:31.681 22:23:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:31.681 22:23:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:31.681 22:23:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:31.681 22:23:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:31.681 22:23:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:31.681 22:23:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:31.681 22:23:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:31.681 22:23:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:31.681 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:31.681 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.164 ms 00:07:31.681 00:07:31.681 --- 10.0.0.2 ping statistics --- 00:07:31.681 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:31.681 rtt min/avg/max/mdev = 0.164/0.164/0.164/0.000 ms 00:07:31.681 22:23:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:31.681 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:31.681 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.220 ms 00:07:31.681 00:07:31.681 --- 10.0.0.1 ping statistics --- 00:07:31.681 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:31.681 rtt min/avg/max/mdev = 0.220/0.220/0.220/0.000 ms 00:07:31.681 22:23:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:31.681 22:23:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # return 0 00:07:31.681 22:23:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:31.681 22:23:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:31.681 22:23:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:31.681 22:23:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:31.681 22:23:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:31.681 22:23:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:31.681 22:23:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:31.681 22:23:55 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:07:31.681 22:23:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:31.681 22:23:55 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:31.681 22:23:55 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:31.681 22:23:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@481 -- # nvmfpid=4059377 00:07:31.681 22:23:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # waitforlisten 4059377 00:07:31.681 22:23:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:31.681 22:23:55 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@829 -- # '[' -z 4059377 ']' 00:07:31.681 22:23:55 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:31.681 22:23:55 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:31.681 22:23:55 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:31.681 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:31.681 22:23:55 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:31.681 22:23:55 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:31.681 [2024-07-15 22:23:55.345232] Starting SPDK v24.09-pre git sha1 f8598a71f / DPDK 24.03.0 initialization... 
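The nvmf_tcp_init sequence traced above is easier to follow in one place. A minimal sketch, with interface names and addresses taken from this trace (the real helper in test/nvmf/common.sh also flushes stale addresses and derives the interface list itself): one E810 port (cvl_0_0) is moved into a private namespace to act as the target, its sibling (cvl_0_1) stays in the root namespace as the initiator, and a ping in each direction proves the physical loopback works before any NVMe traffic starts.

  # Sketch of the namespace topology built above; run as root.
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # target-side port
  ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator address
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                     # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1       # target -> initiator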
00:07:31.681 [2024-07-15 22:23:55.345293] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:31.681 EAL: No free 2048 kB hugepages reported on node 1 00:07:31.681 [2024-07-15 22:23:55.403938] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:31.681 [2024-07-15 22:23:55.476242] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:31.682 [2024-07-15 22:23:55.476284] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:31.682 [2024-07-15 22:23:55.476290] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:31.682 [2024-07-15 22:23:55.476296] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:31.682 [2024-07-15 22:23:55.476301] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:31.682 [2024-07-15 22:23:55.476376] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:31.682 [2024-07-15 22:23:55.476401] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:31.682 [2024-07-15 22:23:55.476494] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:31.682 [2024-07-15 22:23:55.476495] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:32.251 22:23:56 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:32.251 22:23:56 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@862 -- # return 0 00:07:32.251 22:23:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:32.251 22:23:56 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:32.251 22:23:56 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:32.251 22:23:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:32.251 22:23:56 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:07:32.251 22:23:56 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:32.251 22:23:56 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:32.251 [2024-07-15 22:23:56.198307] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:32.251 22:23:56 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:32.251 22:23:56 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:07:32.251 22:23:56 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:32.251 22:23:56 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:32.511 22:23:56 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:32.511 22:23:56 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:07:32.511 22:23:56 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:32.511 22:23:56 
nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:32.511 22:23:56 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:32.511 22:23:56 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:32.511 22:23:56 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:07:32.511 22:23:56 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:32.511 22:23:56 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:32.511 22:23:56 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:32.511 22:23:56 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:32.511 22:23:56 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:32.511 22:23:56 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:32.511 [2024-07-15 22:23:56.249992] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:32.511 22:23:56 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:32.511 22:23:56 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:07:32.511 22:23:56 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:07:32.511 22:23:56 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:07:35.799 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:39.090 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:42.405 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:45.694 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:48.980 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:48.980 22:24:12 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:07:48.980 22:24:12 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:07:48.980 22:24:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:48.980 22:24:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # sync 00:07:48.980 22:24:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:48.980 22:24:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@120 -- # set +e 00:07:48.980 22:24:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:48.980 22:24:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:48.980 rmmod nvme_tcp 00:07:48.980 rmmod nvme_fabrics 00:07:48.980 rmmod nvme_keyring 00:07:48.980 22:24:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:48.980 22:24:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set -e 00:07:48.980 22:24:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # return 0 00:07:48.980 22:24:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@489 -- # '[' -n 4059377 ']' 00:07:48.980 22:24:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@490 -- # killprocess 4059377 00:07:48.980 22:24:12 nvmf_tcp.nvmf_connect_disconnect -- 
common/autotest_common.sh@948 -- # '[' -z 4059377 ']' 00:07:48.980 22:24:12 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@952 -- # kill -0 4059377 00:07:48.980 22:24:12 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@953 -- # uname 00:07:48.980 22:24:12 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:48.980 22:24:12 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4059377 00:07:48.980 22:24:12 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:48.980 22:24:12 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:48.980 22:24:12 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4059377' 00:07:48.980 killing process with pid 4059377 00:07:48.980 22:24:12 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@967 -- # kill 4059377 00:07:48.980 22:24:12 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@972 -- # wait 4059377 00:07:48.980 22:24:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:48.980 22:24:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:48.980 22:24:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:48.980 22:24:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:48.980 22:24:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:48.980 22:24:12 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:48.980 22:24:12 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:48.980 22:24:12 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:51.519 22:24:14 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:51.519 00:07:51.519 real 0m25.037s 00:07:51.519 user 1m10.416s 00:07:51.519 sys 0m5.164s 00:07:51.519 22:24:14 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:51.519 22:24:14 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:51.519 ************************************ 00:07:51.519 END TEST nvmf_connect_disconnect 00:07:51.519 ************************************ 00:07:51.519 22:24:14 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:07:51.519 22:24:14 nvmf_tcp -- nvmf/nvmf.sh@28 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:07:51.519 22:24:14 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:51.519 22:24:14 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:51.519 22:24:14 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:51.519 ************************************ 00:07:51.519 START TEST nvmf_multitarget 00:07:51.519 ************************************ 00:07:51.519 22:24:14 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:07:51.519 * Looking for test storage... 
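The nvmf_connect_disconnect run that just finished reduces to a short RPC-plus-initiator loop. A condensed sketch, with every value taken from the trace; rpc_cmd there is a thin wrapper around scripts/rpc.py, and the real test/nvmf/target/connect_disconnect.sh additionally waits for the controller device between steps:

  # Target side: transport, backing bdev, subsystem, namespace, listener.
  rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 0
  rpc.py bdev_malloc_create 64 512                       # returns Malloc0
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # Initiator side: five rounds (num_iterations=5 in the trace); each
  # disconnect prints the "disconnected 1 controller(s)" lines seen above.
  for i in $(seq 1 5); do
      nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
      nvme disconnect -n nqn.2016-06.io.spdk:cnode1
  done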
00:07:51.519 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:51.519 22:24:15 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:51.519 22:24:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:07:51.519 22:24:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:51.519 22:24:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:51.519 22:24:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:51.519 22:24:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:51.519 22:24:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:51.519 22:24:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:51.519 22:24:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:51.519 22:24:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:51.519 22:24:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:51.519 22:24:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:51.519 22:24:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:07:51.519 22:24:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:07:51.519 22:24:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:51.519 22:24:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:51.519 22:24:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:51.519 22:24:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:51.519 22:24:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:51.519 22:24:15 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:51.519 22:24:15 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:51.519 22:24:15 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:51.519 22:24:15 nvmf_tcp.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:51.519 22:24:15 nvmf_tcp.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:51.519 22:24:15 nvmf_tcp.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:51.519 22:24:15 nvmf_tcp.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:07:51.519 22:24:15 nvmf_tcp.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:51.519 22:24:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@47 -- # : 0 00:07:51.519 22:24:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:51.519 22:24:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:51.519 22:24:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:51.519 22:24:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:51.519 22:24:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:51.519 22:24:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:51.519 22:24:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:51.519 22:24:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:51.519 22:24:15 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:07:51.519 22:24:15 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:07:51.519 22:24:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:51.519 22:24:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:51.519 22:24:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:51.519 22:24:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:51.519 22:24:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:51.519 22:24:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 
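multitarget.sh has just called nvmftestinit, the shared entry point each of these target tests runs first. Assembled from this trace for NET_TYPE=phy over TCP, it amounts to the outline below; this is an outline of observed behavior, not the function body (test/nvmf/common.sh is the authority):

  # nvmftestinit as observed in this trace (phy + tcp case only).
  remove_spdk_ns                    # drop any namespace left by the previous test
  prepare_net_devs                  # enumerate supported NICs -> cvl_0_0 / cvl_0_1
  nvmf_tcp_init                     # rebuild the namespace topology, ping-check it
  NVMF_TRANSPORT_OPTS='-t tcp -o'
  modprobe nvme-tcp                 # kernel host stack for the initiator side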
00:07:51.519 22:24:15 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:51.519 22:24:15 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:51.519 22:24:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:51.519 22:24:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:51.519 22:24:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@285 -- # xtrace_disable 00:07:51.519 22:24:15 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:07:56.794 22:24:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:56.794 22:24:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@291 -- # pci_devs=() 00:07:56.794 22:24:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:56.794 22:24:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:56.794 22:24:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:56.794 22:24:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:56.794 22:24:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:56.794 22:24:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@295 -- # net_devs=() 00:07:56.794 22:24:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:56.794 22:24:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@296 -- # e810=() 00:07:56.794 22:24:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@296 -- # local -ga e810 00:07:56.794 22:24:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@297 -- # x722=() 00:07:56.794 22:24:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@297 -- # local -ga x722 00:07:56.794 22:24:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@298 -- # mlx=() 00:07:56.794 22:24:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@298 -- # local -ga mlx 00:07:56.794 22:24:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:56.794 22:24:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:56.794 22:24:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:56.794 22:24:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:56.794 22:24:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:56.794 22:24:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:56.794 22:24:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:56.794 22:24:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:56.794 22:24:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:56.794 22:24:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:56.794 22:24:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:56.794 22:24:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:56.794 22:24:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:56.794 22:24:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@327 -- # [[ 
e810 == mlx5 ]] 00:07:56.794 22:24:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:56.794 22:24:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:56.794 22:24:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:56.794 22:24:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:56.794 22:24:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:07:56.794 Found 0000:86:00.0 (0x8086 - 0x159b) 00:07:56.794 22:24:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:56.794 22:24:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:56.794 22:24:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:56.794 22:24:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:56.794 22:24:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:56.794 22:24:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:56.794 22:24:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:07:56.794 Found 0000:86:00.1 (0x8086 - 0x159b) 00:07:56.794 22:24:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:56.794 22:24:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:56.794 22:24:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:56.794 22:24:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:56.794 22:24:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:56.794 22:24:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:56.794 22:24:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:56.794 22:24:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:56.794 22:24:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:56.794 22:24:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:56.794 22:24:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:56.794 22:24:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:56.794 22:24:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:56.794 22:24:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:56.794 22:24:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:56.794 22:24:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:07:56.794 Found net devices under 0000:86:00.0: cvl_0_0 00:07:56.794 22:24:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:56.794 22:24:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:56.794 22:24:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:56.794 22:24:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:56.794 22:24:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 
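The enumeration above pairs each supported PCI ID with its kernel netdev through sysfs: 0x8086:0x159b (E810) matches both functions of the adapter, and /sys/bus/pci/devices/<pci>/net/* resolves them to cvl_0_0 and cvl_0_1. A rough standalone equivalent, for illustration only; the real gather_supported_nvmf_pci_devs builds a PCI cache up front and also handles x722 and Mellanox parts, link state, and RDMA-only filtering:

  # Map supported NICs to net devices the way the trace above reports them.
  for pci in $(lspci -Dn -d 8086:159b | awk '{print $1}'); do
      for dev in /sys/bus/pci/devices/"$pci"/net/*; do
          echo "Found net devices under $pci: ${dev##*/}"
      done
  done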
00:07:56.794 22:24:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:56.794 22:24:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:56.794 22:24:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:56.794 22:24:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:07:56.794 Found net devices under 0000:86:00.1: cvl_0_1 00:07:56.794 22:24:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:56.794 22:24:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:56.794 22:24:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # is_hw=yes 00:07:56.794 22:24:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:56.794 22:24:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:56.794 22:24:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:56.794 22:24:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:56.794 22:24:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:56.794 22:24:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:56.794 22:24:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:56.794 22:24:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:56.794 22:24:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:56.794 22:24:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:56.794 22:24:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:56.794 22:24:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:56.794 22:24:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:56.794 22:24:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:56.794 22:24:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:56.794 22:24:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:56.794 22:24:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:56.794 22:24:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:56.794 22:24:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:56.794 22:24:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:56.794 22:24:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:56.794 22:24:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:56.794 22:24:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:56.794 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:07:56.794 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.166 ms 00:07:56.794 00:07:56.794 --- 10.0.0.2 ping statistics --- 00:07:56.794 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:56.794 rtt min/avg/max/mdev = 0.166/0.166/0.166/0.000 ms 00:07:56.794 22:24:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:56.794 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:56.794 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.223 ms 00:07:56.794 00:07:56.794 --- 10.0.0.1 ping statistics --- 00:07:56.794 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:56.794 rtt min/avg/max/mdev = 0.223/0.223/0.223/0.000 ms 00:07:56.794 22:24:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:56.794 22:24:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@422 -- # return 0 00:07:56.794 22:24:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:56.794 22:24:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:56.794 22:24:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:56.794 22:24:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:56.794 22:24:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:56.794 22:24:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:56.794 22:24:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:56.795 22:24:20 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:07:56.795 22:24:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:56.795 22:24:20 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:56.795 22:24:20 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:07:56.795 22:24:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@481 -- # nvmfpid=4066290 00:07:56.795 22:24:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@482 -- # waitforlisten 4066290 00:07:56.795 22:24:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:56.795 22:24:20 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@829 -- # '[' -z 4066290 ']' 00:07:56.795 22:24:20 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:56.795 22:24:20 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:56.795 22:24:20 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:56.795 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:56.795 22:24:20 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:56.795 22:24:20 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:07:56.795 [2024-07-15 22:24:20.471787] Starting SPDK v24.09-pre git sha1 f8598a71f / DPDK 24.03.0 initialization... 
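nvmfappstart above launches nvmf_tgt inside the target namespace and blocks in waitforlisten until the app answers on its RPC socket. Schematically it looks like the snippet below; the rpc_get_methods polling loop is an illustrative stand-in, since the real waitforlisten in common/autotest_common.sh checks both the process and the socket with bounded retries:

  # Start the target in the namespace and wait for its RPC socket.
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!
  until scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5                     # keep polling until the listener is up
  done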
00:07:56.795 [2024-07-15 22:24:20.471830] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:56.795 EAL: No free 2048 kB hugepages reported on node 1 00:07:56.795 [2024-07-15 22:24:20.529024] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:56.795 [2024-07-15 22:24:20.602709] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:56.795 [2024-07-15 22:24:20.602750] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:56.795 [2024-07-15 22:24:20.602757] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:56.795 [2024-07-15 22:24:20.602763] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:56.795 [2024-07-15 22:24:20.602771] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:56.795 [2024-07-15 22:24:20.602838] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:56.795 [2024-07-15 22:24:20.602859] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:56.795 [2024-07-15 22:24:20.602942] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:56.795 [2024-07-15 22:24:20.602944] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:57.364 22:24:21 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:57.364 22:24:21 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@862 -- # return 0 00:07:57.364 22:24:21 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:57.364 22:24:21 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:57.364 22:24:21 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:07:57.364 22:24:21 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:57.364 22:24:21 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:07:57.364 22:24:21 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:07:57.364 22:24:21 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:07:57.623 22:24:21 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:07:57.623 22:24:21 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:07:57.623 "nvmf_tgt_1" 00:07:57.623 22:24:21 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:07:57.881 "nvmf_tgt_2" 00:07:57.881 22:24:21 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:07:57.881 22:24:21 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:07:57.881 22:24:21 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 
'!=' 3 ']' 00:07:57.881 22:24:21 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:07:57.881 true 00:07:57.881 22:24:21 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:07:58.194 true 00:07:58.194 22:24:21 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:07:58.194 22:24:21 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:07:58.194 22:24:22 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:07:58.194 22:24:22 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:07:58.194 22:24:22 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:07:58.194 22:24:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:58.194 22:24:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@117 -- # sync 00:07:58.194 22:24:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:58.194 22:24:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@120 -- # set +e 00:07:58.194 22:24:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:58.194 22:24:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:58.194 rmmod nvme_tcp 00:07:58.194 rmmod nvme_fabrics 00:07:58.194 rmmod nvme_keyring 00:07:58.194 22:24:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:58.194 22:24:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@124 -- # set -e 00:07:58.194 22:24:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@125 -- # return 0 00:07:58.194 22:24:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@489 -- # '[' -n 4066290 ']' 00:07:58.194 22:24:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@490 -- # killprocess 4066290 00:07:58.194 22:24:22 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@948 -- # '[' -z 4066290 ']' 00:07:58.194 22:24:22 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@952 -- # kill -0 4066290 00:07:58.194 22:24:22 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@953 -- # uname 00:07:58.194 22:24:22 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:58.194 22:24:22 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4066290 00:07:58.491 22:24:22 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:58.491 22:24:22 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:58.491 22:24:22 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4066290' 00:07:58.491 killing process with pid 4066290 00:07:58.491 22:24:22 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@967 -- # kill 4066290 00:07:58.491 22:24:22 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@972 -- # wait 4066290 00:07:58.491 22:24:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:58.491 22:24:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:58.491 22:24:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:58.491 22:24:22 nvmf_tcp.nvmf_multitarget -- 
nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:58.491 22:24:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:58.491 22:24:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:58.491 22:24:22 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:58.491 22:24:22 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:01.026 22:24:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:01.026 00:08:01.026 real 0m9.454s 00:08:01.026 user 0m9.194s 00:08:01.026 sys 0m4.411s 00:08:01.026 22:24:24 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:01.026 22:24:24 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:08:01.026 ************************************ 00:08:01.026 END TEST nvmf_multitarget 00:08:01.026 ************************************ 00:08:01.026 22:24:24 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:08:01.026 22:24:24 nvmf_tcp -- nvmf/nvmf.sh@29 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:08:01.026 22:24:24 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:01.026 22:24:24 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:01.026 22:24:24 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:01.026 ************************************ 00:08:01.026 START TEST nvmf_rpc 00:08:01.026 ************************************ 00:08:01.026 22:24:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:08:01.026 * Looking for test storage... 
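For reference, the nvmf_multitarget run that just ended exercises only a handful of RPCs: count the targets, add two child targets, re-count, delete them, and confirm only the default target remains. Condensed from the trace above, using the same helper script and flags (the jq length checks mirror the '[' N '!=' N ']' assertions):

mrpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py
[ "$($mrpc nvmf_get_targets | jq length)" -eq 1 ]    # only the default target
$mrpc nvmf_create_target -n nvmf_tgt_1 -s 32
$mrpc nvmf_create_target -n nvmf_tgt_2 -s 32
[ "$($mrpc nvmf_get_targets | jq length)" -eq 3 ]    # default + the two children
$mrpc nvmf_delete_target -n nvmf_tgt_1
$mrpc nvmf_delete_target -n nvmf_tgt_2
[ "$($mrpc nvmf_get_targets | jq length)" -eq 1 ]    # back to just the default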
00:08:01.026 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:01.026 22:24:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:01.026 22:24:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:08:01.026 22:24:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:01.026 22:24:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:01.026 22:24:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:01.026 22:24:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:01.026 22:24:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:01.026 22:24:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:01.026 22:24:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:01.026 22:24:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:01.026 22:24:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:01.026 22:24:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:01.026 22:24:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:08:01.026 22:24:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:08:01.026 22:24:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:01.026 22:24:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:01.026 22:24:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:01.026 22:24:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:01.026 22:24:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:01.026 22:24:24 nvmf_tcp.nvmf_rpc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:01.026 22:24:24 nvmf_tcp.nvmf_rpc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:01.026 22:24:24 nvmf_tcp.nvmf_rpc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:01.026 22:24:24 nvmf_tcp.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:01.026 22:24:24 nvmf_tcp.nvmf_rpc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:01.026 22:24:24 nvmf_tcp.nvmf_rpc -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:01.026 22:24:24 nvmf_tcp.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:08:01.026 22:24:24 nvmf_tcp.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:01.026 22:24:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@47 -- # : 0 00:08:01.026 22:24:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:01.026 22:24:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:01.026 22:24:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:01.026 22:24:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:01.026 22:24:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:01.026 22:24:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:01.026 22:24:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:01.026 22:24:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:01.026 22:24:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:08:01.026 22:24:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:08:01.026 22:24:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:01.026 22:24:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:01.026 22:24:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:01.026 22:24:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:01.026 22:24:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:01.026 22:24:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:01.026 22:24:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:01.026 22:24:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:01.026 22:24:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:01.026 22:24:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:01.026 22:24:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@285 -- # xtrace_disable 00:08:01.026 22:24:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:06.300 22:24:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:06.300 22:24:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@291 -- # pci_devs=() 00:08:06.300 22:24:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@291 -- # local -a pci_devs 
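The gather_supported_nvmf_pci_devs trace that follows filters PCI functions against the e810/x722/mlx ID tables and then resolves each surviving function to its kernel net device through sysfs. A sketch of that sysfs lookup, hard-coding the two e810 functions this run discovers (anything else on another machine would differ):

# Map each e810 PCI function to the netdev the kernel exposes for it.
for pci in 0000:86:00.0 0000:86:00.1; do
    for path in "/sys/bus/pci/devices/$pci/net/"*; do
        [ -e "$path" ] || continue
        echo "Found net devices under $pci: ${path##*/}"   # cvl_0_0 / cvl_0_1 in this run
    done
done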
00:08:06.300 22:24:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:06.300 22:24:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:06.300 22:24:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:06.300 22:24:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:06.300 22:24:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@295 -- # net_devs=() 00:08:06.300 22:24:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:06.300 22:24:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@296 -- # e810=() 00:08:06.300 22:24:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@296 -- # local -ga e810 00:08:06.300 22:24:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@297 -- # x722=() 00:08:06.300 22:24:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@297 -- # local -ga x722 00:08:06.300 22:24:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@298 -- # mlx=() 00:08:06.300 22:24:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@298 -- # local -ga mlx 00:08:06.300 22:24:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:06.300 22:24:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:06.300 22:24:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:06.300 22:24:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:06.300 22:24:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:06.300 22:24:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:06.300 22:24:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:06.300 22:24:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:06.300 22:24:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:06.300 22:24:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:06.300 22:24:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:06.300 22:24:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:06.300 22:24:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:06.300 22:24:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:06.300 22:24:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:06.300 22:24:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:06.300 22:24:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:06.300 22:24:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:06.300 22:24:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:08:06.300 Found 0000:86:00.0 (0x8086 - 0x159b) 00:08:06.300 22:24:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:06.300 22:24:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:06.300 22:24:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:06.300 22:24:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:06.300 22:24:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:06.300 22:24:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:06.300 22:24:29 nvmf_tcp.nvmf_rpc 
-- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:08:06.300 Found 0000:86:00.1 (0x8086 - 0x159b) 00:08:06.300 22:24:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:06.300 22:24:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:06.300 22:24:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:06.300 22:24:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:06.300 22:24:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:06.300 22:24:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:06.300 22:24:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:06.300 22:24:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:06.300 22:24:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:06.300 22:24:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:06.300 22:24:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:06.300 22:24:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:06.300 22:24:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:06.300 22:24:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:06.300 22:24:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:06.300 22:24:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:08:06.300 Found net devices under 0000:86:00.0: cvl_0_0 00:08:06.300 22:24:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:06.300 22:24:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:06.300 22:24:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:06.300 22:24:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:06.300 22:24:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:06.300 22:24:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:06.300 22:24:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:06.300 22:24:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:06.300 22:24:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:08:06.300 Found net devices under 0000:86:00.1: cvl_0_1 00:08:06.300 22:24:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:06.300 22:24:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:06.300 22:24:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # is_hw=yes 00:08:06.300 22:24:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:06.300 22:24:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:06.300 22:24:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:06.300 22:24:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:06.300 22:24:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:06.300 22:24:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:06.300 22:24:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:06.300 22:24:29 nvmf_tcp.nvmf_rpc -- 
nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:06.300 22:24:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:06.300 22:24:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:06.300 22:24:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:06.300 22:24:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:06.300 22:24:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:06.300 22:24:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:06.300 22:24:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:06.300 22:24:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:06.300 22:24:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:06.300 22:24:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:06.300 22:24:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:06.300 22:24:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:06.300 22:24:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:06.300 22:24:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:06.300 22:24:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:06.300 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:06.300 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.241 ms 00:08:06.300 00:08:06.300 --- 10.0.0.2 ping statistics --- 00:08:06.300 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:06.300 rtt min/avg/max/mdev = 0.241/0.241/0.241/0.000 ms 00:08:06.300 22:24:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:06.300 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:06.300 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.249 ms
00:08:06.300
00:08:06.300 --- 10.0.0.1 ping statistics ---
00:08:06.300 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:08:06.300 rtt min/avg/max/mdev = 0.249/0.249/0.249/0.000 ms
00:08:06.300 22:24:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:08:06.300 22:24:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@422 -- # return 0
00:08:06.300 22:24:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:08:06.300 22:24:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:08:06.300 22:24:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:08:06.300 22:24:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:08:06.300 22:24:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:08:06.300 22:24:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:08:06.300 22:24:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:08:06.300 22:24:29 nvmf_tcp.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF
00:08:06.300 22:24:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:08:06.300 22:24:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@722 -- # xtrace_disable
00:08:06.300 22:24:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:08:06.300 22:24:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@481 -- # nvmfpid=4070066
00:08:06.300 22:24:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@482 -- # waitforlisten 4070066
00:08:06.300 22:24:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:08:06.300 22:24:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@829 -- # '[' -z 4070066 ']'
00:08:06.300 22:24:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:08:06.300 22:24:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@834 -- # local max_retries=100
00:08:06.300 22:24:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:08:06.300 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:08:06.300 22:24:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@838 -- # xtrace_disable
00:08:06.300 22:24:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:08:06.300 [2024-07-15 22:24:29.913712] Starting SPDK v24.09-pre git sha1 f8598a71f / DPDK 24.03.0 initialization...
00:08:06.300 [2024-07-15 22:24:29.913756] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:08:06.300 EAL: No free 2048 kB hugepages reported on node 1
00:08:06.300 [2024-07-15 22:24:29.967873] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4
00:08:06.300 [2024-07-15 22:24:30.055366] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:08:06.300 [2024-07-15 22:24:30.055403] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
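The app_setup_trace notices here spell out how to inspect the tracepoints enabled by -e 0xFFFF: attach the spdk_trace parser to the live shared-memory trace, or copy the trace file out before the app exits. Both commands follow the notices; only the parser's location is assumed from this workspace layout:

# Snapshot the nvmf app's trace while it is running (instance id 0, as launched above).
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_trace -s nvmf -i 0
# Or keep the shm trace file for offline analysis after the app goes away.
cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0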
00:08:06.300 [2024-07-15 22:24:30.055410] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:06.300 [2024-07-15 22:24:30.055419] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:06.301 [2024-07-15 22:24:30.055436] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:06.301 [2024-07-15 22:24:30.055484] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:06.301 [2024-07-15 22:24:30.055501] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:06.301 [2024-07-15 22:24:30.055569] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:06.301 [2024-07-15 22:24:30.055570] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:06.869 22:24:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:06.869 22:24:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@862 -- # return 0 00:08:06.869 22:24:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:06.869 22:24:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:06.869 22:24:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:06.869 22:24:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:06.869 22:24:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:08:06.869 22:24:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:06.869 22:24:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:06.869 22:24:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:06.869 22:24:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:08:06.869 "tick_rate": 2300000000, 00:08:06.869 "poll_groups": [ 00:08:06.869 { 00:08:06.869 "name": "nvmf_tgt_poll_group_000", 00:08:06.869 "admin_qpairs": 0, 00:08:06.869 "io_qpairs": 0, 00:08:06.869 "current_admin_qpairs": 0, 00:08:06.869 "current_io_qpairs": 0, 00:08:06.869 "pending_bdev_io": 0, 00:08:06.869 "completed_nvme_io": 0, 00:08:06.869 "transports": [] 00:08:06.869 }, 00:08:06.869 { 00:08:06.869 "name": "nvmf_tgt_poll_group_001", 00:08:06.869 "admin_qpairs": 0, 00:08:06.869 "io_qpairs": 0, 00:08:06.869 "current_admin_qpairs": 0, 00:08:06.869 "current_io_qpairs": 0, 00:08:06.869 "pending_bdev_io": 0, 00:08:06.869 "completed_nvme_io": 0, 00:08:06.869 "transports": [] 00:08:06.869 }, 00:08:06.869 { 00:08:06.869 "name": "nvmf_tgt_poll_group_002", 00:08:06.869 "admin_qpairs": 0, 00:08:06.869 "io_qpairs": 0, 00:08:06.869 "current_admin_qpairs": 0, 00:08:06.869 "current_io_qpairs": 0, 00:08:06.869 "pending_bdev_io": 0, 00:08:06.869 "completed_nvme_io": 0, 00:08:06.869 "transports": [] 00:08:06.869 }, 00:08:06.869 { 00:08:06.869 "name": "nvmf_tgt_poll_group_003", 00:08:06.869 "admin_qpairs": 0, 00:08:06.869 "io_qpairs": 0, 00:08:06.869 "current_admin_qpairs": 0, 00:08:06.869 "current_io_qpairs": 0, 00:08:06.869 "pending_bdev_io": 0, 00:08:06.869 "completed_nvme_io": 0, 00:08:06.869 "transports": [] 00:08:06.869 } 00:08:06.869 ] 00:08:06.869 }' 00:08:06.869 22:24:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:08:06.869 22:24:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:08:06.869 22:24:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:08:06.869 22:24:30 nvmf_tcp.nvmf_rpc -- 
target/rpc.sh@15 -- # wc -l 00:08:06.869 22:24:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:08:06.869 22:24:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:08:07.128 22:24:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:08:07.128 22:24:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:07.128 22:24:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:07.128 22:24:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:07.128 [2024-07-15 22:24:30.877525] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:07.128 22:24:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:07.128 22:24:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:08:07.128 22:24:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:07.128 22:24:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:07.128 22:24:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:07.128 22:24:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:08:07.128 "tick_rate": 2300000000, 00:08:07.128 "poll_groups": [ 00:08:07.128 { 00:08:07.128 "name": "nvmf_tgt_poll_group_000", 00:08:07.128 "admin_qpairs": 0, 00:08:07.128 "io_qpairs": 0, 00:08:07.128 "current_admin_qpairs": 0, 00:08:07.128 "current_io_qpairs": 0, 00:08:07.128 "pending_bdev_io": 0, 00:08:07.128 "completed_nvme_io": 0, 00:08:07.128 "transports": [ 00:08:07.128 { 00:08:07.128 "trtype": "TCP" 00:08:07.128 } 00:08:07.128 ] 00:08:07.128 }, 00:08:07.128 { 00:08:07.128 "name": "nvmf_tgt_poll_group_001", 00:08:07.128 "admin_qpairs": 0, 00:08:07.128 "io_qpairs": 0, 00:08:07.128 "current_admin_qpairs": 0, 00:08:07.128 "current_io_qpairs": 0, 00:08:07.128 "pending_bdev_io": 0, 00:08:07.128 "completed_nvme_io": 0, 00:08:07.128 "transports": [ 00:08:07.128 { 00:08:07.128 "trtype": "TCP" 00:08:07.128 } 00:08:07.128 ] 00:08:07.128 }, 00:08:07.128 { 00:08:07.128 "name": "nvmf_tgt_poll_group_002", 00:08:07.128 "admin_qpairs": 0, 00:08:07.128 "io_qpairs": 0, 00:08:07.128 "current_admin_qpairs": 0, 00:08:07.128 "current_io_qpairs": 0, 00:08:07.128 "pending_bdev_io": 0, 00:08:07.128 "completed_nvme_io": 0, 00:08:07.128 "transports": [ 00:08:07.128 { 00:08:07.128 "trtype": "TCP" 00:08:07.128 } 00:08:07.128 ] 00:08:07.128 }, 00:08:07.128 { 00:08:07.128 "name": "nvmf_tgt_poll_group_003", 00:08:07.128 "admin_qpairs": 0, 00:08:07.128 "io_qpairs": 0, 00:08:07.128 "current_admin_qpairs": 0, 00:08:07.128 "current_io_qpairs": 0, 00:08:07.128 "pending_bdev_io": 0, 00:08:07.128 "completed_nvme_io": 0, 00:08:07.128 "transports": [ 00:08:07.128 { 00:08:07.128 "trtype": "TCP" 00:08:07.128 } 00:08:07.128 ] 00:08:07.128 } 00:08:07.128 ] 00:08:07.128 }' 00:08:07.128 22:24:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:08:07.128 22:24:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:08:07.128 22:24:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:08:07.128 22:24:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:08:07.128 22:24:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:08:07.128 22:24:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:08:07.128 22:24:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 
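The jcount/jsum helpers traced here reduce the nvmf_get_stats JSON with jq: jcount counts how many values a filter yields, jsum totals them through awk. A sketch over the same JSON shape, assuming rpc_cmd is the rpc.py wrapper these tests define in autotest_common.sh:

stats=$(rpc_cmd nvmf_get_stats)
# jcount: one poll group per core in the -m 0xF mask, so expect 4.
jq -r '.poll_groups[].name' <<<"$stats" | wc -l
# jsum: total a per-group counter; 0 qpairs exist before any host connects.
jq '.poll_groups[].io_qpairs' <<<"$stats" | awk '{s+=$1} END {print s}'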
00:08:07.128 22:24:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:08:07.129 22:24:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:08:07.129 22:24:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:08:07.129 22:24:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:08:07.129 22:24:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:08:07.129 22:24:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:08:07.129 22:24:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:08:07.129 22:24:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:07.129 22:24:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:07.129 Malloc1 00:08:07.129 22:24:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:07.129 22:24:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:07.129 22:24:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:07.129 22:24:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:07.129 22:24:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:07.129 22:24:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:07.129 22:24:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:07.129 22:24:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:07.129 22:24:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:07.129 22:24:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:08:07.129 22:24:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:07.129 22:24:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:07.129 22:24:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:07.129 22:24:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:07.129 22:24:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:07.129 22:24:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:07.129 [2024-07-15 22:24:31.045489] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:07.129 22:24:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:07.129 22:24:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:08:07.129 22:24:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:08:07.129 22:24:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:08:07.129 22:24:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@636 
-- # local arg=nvme 00:08:07.129 22:24:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:07.129 22:24:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:08:07.129 22:24:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:07.129 22:24:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:08:07.129 22:24:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:07.129 22:24:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:08:07.129 22:24:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:08:07.129 22:24:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:08:07.129 [2024-07-15 22:24:31.073938] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562' 00:08:07.129 Failed to write to /dev/nvme-fabrics: Input/output error 00:08:07.129 could not add new controller: failed to write to nvme-fabrics device 00:08:07.388 22:24:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # es=1 00:08:07.388 22:24:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:07.388 22:24:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:08:07.388 22:24:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:07.388 22:24:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:08:07.388 22:24:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:07.388 22:24:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:07.388 22:24:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:07.388 22:24:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:08.326 22:24:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:08:08.326 22:24:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:08:08.326 22:24:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:08:08.326 22:24:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:08:08.326 22:24:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:08:10.862 22:24:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:08:10.862 22:24:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:08:10.862 22:24:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:08:10.862 22:24:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:08:10.862 22:24:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:08:10.862 22:24:34 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:08:10.862 22:24:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:10.862 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:10.862 22:24:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:10.862 22:24:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:08:10.862 22:24:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:08:10.862 22:24:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:10.862 22:24:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:08:10.862 22:24:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:10.862 22:24:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:08:10.862 22:24:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:08:10.862 22:24:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:10.862 22:24:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:10.862 22:24:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:10.862 22:24:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:10.862 22:24:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:08:10.863 22:24:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:10.863 22:24:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@636 -- # local arg=nvme 00:08:10.863 22:24:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:10.863 22:24:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:08:10.863 22:24:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:10.863 22:24:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:08:10.863 22:24:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:10.863 22:24:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:08:10.863 22:24:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:08:10.863 22:24:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:10.863 [2024-07-15 22:24:34.478478] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562' 00:08:10.863 Failed to write to /dev/nvme-fabrics: Input/output error 00:08:10.863 could not add new controller: failed to write to nvme-fabrics device 00:08:10.863 22:24:34 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@651 -- # es=1 00:08:10.863 22:24:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:10.863 22:24:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:08:10.863 22:24:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:10.863 22:24:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:08:10.863 22:24:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:10.863 22:24:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:10.863 22:24:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:10.863 22:24:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:11.800 22:24:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:08:11.800 22:24:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:08:11.800 22:24:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:08:11.800 22:24:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:08:11.800 22:24:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:08:13.703 22:24:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:08:13.703 22:24:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:08:13.703 22:24:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:08:13.704 22:24:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:08:13.704 22:24:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:08:13.704 22:24:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:08:13.704 22:24:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:13.963 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:13.963 22:24:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:13.963 22:24:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:08:13.963 22:24:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:08:13.963 22:24:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:13.963 22:24:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:13.963 22:24:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:08:13.963 22:24:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:08:13.963 22:24:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:13.963 22:24:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:13.963 22:24:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:13.963 22:24:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:13.963 22:24:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:08:13.963 22:24:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:08:13.963 22:24:37 
nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:08:13.963 22:24:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:13.963 22:24:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:13.963 22:24:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:13.963 22:24:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:13.963 22:24:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:13.963 22:24:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:13.963 [2024-07-15 22:24:37.797138] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:13.963 22:24:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:13.963 22:24:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:08:13.963 22:24:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:13.963 22:24:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:13.963 22:24:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:13.963 22:24:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:08:13.963 22:24:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:13.963 22:24:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:13.963 22:24:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:13.963 22:24:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:15.340 22:24:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:08:15.341 22:24:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:08:15.341 22:24:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:08:15.341 22:24:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:08:15.341 22:24:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:08:17.244 22:24:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:08:17.244 22:24:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:08:17.244 22:24:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:08:17.244 22:24:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:08:17.244 22:24:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:08:17.244 22:24:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:08:17.244 22:24:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:17.244 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:17.244 22:24:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:17.244 22:24:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:08:17.244 22:24:40 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:08:17.244 22:24:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:17.244 22:24:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:08:17.244 22:24:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:17.244 22:24:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:08:17.244 22:24:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:17.244 22:24:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:17.244 22:24:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:17.244 22:24:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:17.244 22:24:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:17.244 22:24:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:17.244 22:24:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:17.244 22:24:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:17.244 22:24:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:08:17.244 22:24:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:08:17.244 22:24:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:17.244 22:24:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:17.244 22:24:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:17.244 22:24:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:17.244 22:24:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:17.244 22:24:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:17.244 [2024-07-15 22:24:41.047677] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:17.244 22:24:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:17.244 22:24:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:08:17.244 22:24:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:17.244 22:24:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:17.244 22:24:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:17.244 22:24:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:08:17.244 22:24:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:17.244 22:24:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:17.244 22:24:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:17.244 22:24:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:18.221 22:24:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:08:18.221 22:24:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 
-- # local i=0 00:08:18.221 22:24:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:08:18.221 22:24:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:08:18.221 22:24:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:08:20.755 22:24:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:08:20.755 22:24:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:08:20.755 22:24:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:08:20.755 22:24:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:08:20.755 22:24:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:08:20.755 22:24:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:08:20.755 22:24:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:20.755 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:20.755 22:24:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:20.755 22:24:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:08:20.755 22:24:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:08:20.755 22:24:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:20.755 22:24:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:08:20.755 22:24:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:20.755 22:24:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:08:20.755 22:24:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:20.755 22:24:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:20.755 22:24:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:20.755 22:24:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:20.755 22:24:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:20.755 22:24:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:20.755 22:24:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:20.755 22:24:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:20.755 22:24:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:08:20.755 22:24:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:08:20.755 22:24:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:20.755 22:24:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:20.755 22:24:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:20.755 22:24:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:20.755 22:24:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:20.755 22:24:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:20.755 [2024-07-15 22:24:44.342371] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:08:20.755 22:24:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:20.755 22:24:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:08:20.755 22:24:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:20.755 22:24:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:20.755 22:24:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:20.755 22:24:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:08:20.755 22:24:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:20.755 22:24:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:20.755 22:24:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:20.755 22:24:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:21.693 22:24:45 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:08:21.693 22:24:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:08:21.693 22:24:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:08:21.693 22:24:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:08:21.693 22:24:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:08:23.599 22:24:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:08:23.599 22:24:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:08:23.599 22:24:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:08:23.599 22:24:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:08:23.599 22:24:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:08:23.599 22:24:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:08:23.599 22:24:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:23.859 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:23.859 22:24:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:23.859 22:24:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:08:23.859 22:24:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:08:23.859 22:24:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:23.859 22:24:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:08:23.859 22:24:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:23.859 22:24:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:08:23.859 22:24:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:23.859 22:24:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:23.859 22:24:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:23.859 22:24:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 
0 == 0 ]] 00:08:23.859 22:24:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:23.859 22:24:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:23.859 22:24:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:23.859 22:24:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:23.859 22:24:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:08:23.859 22:24:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:08:23.859 22:24:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:23.859 22:24:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:23.859 22:24:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:23.859 22:24:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:23.859 22:24:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:23.859 22:24:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:23.859 [2024-07-15 22:24:47.655795] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:23.859 22:24:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:23.859 22:24:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:08:23.859 22:24:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:23.859 22:24:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:23.859 22:24:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:23.859 22:24:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:08:23.859 22:24:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:23.859 22:24:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:23.859 22:24:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:23.859 22:24:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:24.797 22:24:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:08:24.797 22:24:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:08:24.797 22:24:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:08:24.797 22:24:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:08:24.797 22:24:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:08:27.329 22:24:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:08:27.329 22:24:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:08:27.329 22:24:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:08:27.329 22:24:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:08:27.329 22:24:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:08:27.329 
22:24:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:08:27.329 22:24:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:27.329 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:27.329 22:24:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:27.329 22:24:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:08:27.329 22:24:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:08:27.329 22:24:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:27.329 22:24:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:27.329 22:24:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:08:27.329 22:24:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:08:27.329 22:24:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:27.329 22:24:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:27.329 22:24:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:27.329 22:24:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:27.329 22:24:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:27.329 22:24:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:27.329 22:24:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:27.329 22:24:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:27.329 22:24:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:08:27.329 22:24:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:08:27.329 22:24:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:27.329 22:24:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:27.329 22:24:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:27.329 22:24:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:27.329 22:24:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:27.329 22:24:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:27.329 [2024-07-15 22:24:50.909618] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:27.329 22:24:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:27.329 22:24:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:08:27.329 22:24:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:27.329 22:24:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:27.329 22:24:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:27.329 22:24:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:08:27.329 22:24:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:27.329 22:24:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:27.329 22:24:50 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:27.329 22:24:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:28.264 22:24:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:08:28.264 22:24:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:08:28.264 22:24:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:08:28.264 22:24:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:08:28.264 22:24:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:08:30.166 22:24:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:08:30.166 22:24:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:08:30.166 22:24:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:08:30.166 22:24:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:08:30.166 22:24:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:08:30.166 22:24:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:08:30.166 22:24:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:30.455 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:30.455 22:24:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:30.455 22:24:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:08:30.455 22:24:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:08:30.455 22:24:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:30.455 22:24:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:08:30.455 22:24:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:30.455 22:24:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:08:30.455 22:24:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:30.455 22:24:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:30.455 22:24:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:30.455 22:24:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:30.456 22:24:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:30.456 22:24:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:30.456 22:24:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:30.456 22:24:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:30.456 22:24:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:08:30.456 22:24:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:08:30.456 22:24:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:08:30.456 22:24:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:30.456 22:24:54 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:08:30.456 22:24:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:30.456 22:24:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:30.456 22:24:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:30.456 22:24:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:30.456 [2024-07-15 22:24:54.277849] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:30.456 22:24:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:30.456 22:24:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:30.456 22:24:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:30.456 22:24:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:30.456 22:24:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:30.456 22:24:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:08:30.456 22:24:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:30.456 22:24:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:30.456 22:24:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:30.456 22:24:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:30.456 22:24:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:30.456 22:24:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:30.456 22:24:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:30.456 22:24:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:30.456 22:24:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:30.456 22:24:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:30.456 22:24:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:30.456 22:24:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:08:30.456 22:24:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:08:30.456 22:24:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:30.456 22:24:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:30.456 22:24:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:30.456 22:24:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:30.456 22:24:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:30.456 22:24:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:30.456 [2024-07-15 22:24:54.325960] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:30.456 22:24:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:30.456 22:24:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:30.456 22:24:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:08:30.456 22:24:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:30.456 22:24:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:30.456 22:24:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:08:30.456 22:24:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:30.456 22:24:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:30.456 22:24:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:30.456 22:24:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:30.456 22:24:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:30.456 22:24:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:30.456 22:24:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:30.456 22:24:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:30.456 22:24:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:30.456 22:24:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:30.456 22:24:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:30.456 22:24:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:08:30.456 22:24:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:08:30.456 22:24:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:30.456 22:24:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:30.456 22:24:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:30.456 22:24:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:30.456 22:24:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:30.456 22:24:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:30.456 [2024-07-15 22:24:54.378100] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:30.456 22:24:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:30.456 22:24:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:30.456 22:24:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:30.456 22:24:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:30.456 22:24:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:30.456 22:24:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:08:30.456 22:24:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:30.456 22:24:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:30.456 22:24:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:30.456 22:24:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:30.456 22:24:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:30.456 22:24:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 
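Around this point the trace is cycling five times through target/rpc.sh's create/teardown loop (rpc.sh@99-107), building a subsystem, listener and namespace and then unwinding them again. Condensed into a standalone sketch, with the paths and arguments copied from the trace and error handling omitted; a running nvmf target and the Malloc1 bdev are assumed to exist already:

    # Reconstruction of the churn loop the surrounding trace executes.
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1
    for i in $(seq 1 5); do
        $rpc nvmf_create_subsystem "$nqn" -s SPDKISFASTANDAWESOME    # fresh subsystem each pass
        $rpc nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.2 -s 4420
        $rpc nvmf_subsystem_add_ns "$nqn" Malloc1                    # attach the namespace
        $rpc nvmf_subsystem_allow_any_host "$nqn"
        $rpc nvmf_subsystem_remove_ns "$nqn" 1                       # detach namespace 1 again
        $rpc nvmf_delete_subsystem "$nqn"                            # and drop the subsystem
    done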
00:08:30.456 22:24:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:30.456 22:24:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:30.456 22:24:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:30.456 22:24:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:30.456 22:24:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:30.456 22:24:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:08:30.456 22:24:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:08:30.456 22:24:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:30.456 22:24:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:30.456 22:24:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:30.456 22:24:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:30.456 22:24:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:30.456 22:24:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:30.716 [2024-07-15 22:24:54.426306] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:30.716 22:24:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:30.717 22:24:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:30.717 22:24:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:30.717 22:24:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:30.717 22:24:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:30.717 22:24:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:08:30.717 22:24:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:30.717 22:24:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:30.717 22:24:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:30.717 22:24:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:30.717 22:24:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:30.717 22:24:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:30.717 22:24:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:30.717 22:24:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:30.717 22:24:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:30.717 22:24:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:30.717 22:24:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:30.717 22:24:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:08:30.717 22:24:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:08:30.717 22:24:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:30.717 22:24:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 
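Once the last pass below finishes, the test closes with rpc_cmd nvmf_get_stats and reduces the JSON dump with its jsum helper (rpc.sh@19-20 in the trace): jq extracts one counter per poll group and awk totals them, which is how the (( 7 > 0 )) and (( 672 > 0 )) checks further down get their numbers. A minimal stand-in, assuming the stats JSON arrives on stdin:

    # Sum one numeric field across all poll groups of an nvmf_get_stats dump.
    jsum() {
        local filter=$1            # e.g. '.poll_groups[].io_qpairs'
        jq "$filter" | awk '{s+=$1} END {print s}'
    }
    # usage: rpc.py nvmf_get_stats | jsum '.poll_groups[].admin_qpairs'   # -> 7 on this run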
00:08:30.717 22:24:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:30.717 22:24:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:30.717 22:24:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:30.717 22:24:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:30.717 [2024-07-15 22:24:54.474458] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:30.717 22:24:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:30.717 22:24:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:30.717 22:24:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:30.717 22:24:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:30.717 22:24:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:30.717 22:24:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:08:30.717 22:24:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:30.717 22:24:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:30.717 22:24:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:30.717 22:24:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:30.717 22:24:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:30.717 22:24:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:30.717 22:24:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:30.717 22:24:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:30.717 22:24:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:30.717 22:24:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:30.717 22:24:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:30.717 22:24:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:08:30.717 22:24:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:30.717 22:24:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:30.717 22:24:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:30.717 22:24:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:08:30.717 "tick_rate": 2300000000, 00:08:30.717 "poll_groups": [ 00:08:30.717 { 00:08:30.717 "name": "nvmf_tgt_poll_group_000", 00:08:30.717 "admin_qpairs": 2, 00:08:30.717 "io_qpairs": 168, 00:08:30.717 "current_admin_qpairs": 0, 00:08:30.717 "current_io_qpairs": 0, 00:08:30.717 "pending_bdev_io": 0, 00:08:30.717 "completed_nvme_io": 241, 00:08:30.717 "transports": [ 00:08:30.717 { 00:08:30.717 "trtype": "TCP" 00:08:30.717 } 00:08:30.717 ] 00:08:30.717 }, 00:08:30.717 { 00:08:30.717 "name": "nvmf_tgt_poll_group_001", 00:08:30.717 "admin_qpairs": 2, 00:08:30.717 "io_qpairs": 168, 00:08:30.717 "current_admin_qpairs": 0, 00:08:30.717 "current_io_qpairs": 0, 00:08:30.717 "pending_bdev_io": 0, 00:08:30.717 "completed_nvme_io": 258, 00:08:30.717 "transports": [ 00:08:30.717 { 00:08:30.717 "trtype": "TCP" 00:08:30.717 } 00:08:30.717 ] 00:08:30.717 }, 00:08:30.717 { 
00:08:30.717 "name": "nvmf_tgt_poll_group_002", 00:08:30.717 "admin_qpairs": 1, 00:08:30.717 "io_qpairs": 168, 00:08:30.717 "current_admin_qpairs": 0, 00:08:30.717 "current_io_qpairs": 0, 00:08:30.717 "pending_bdev_io": 0, 00:08:30.717 "completed_nvme_io": 302, 00:08:30.717 "transports": [ 00:08:30.717 { 00:08:30.717 "trtype": "TCP" 00:08:30.717 } 00:08:30.717 ] 00:08:30.717 }, 00:08:30.717 { 00:08:30.717 "name": "nvmf_tgt_poll_group_003", 00:08:30.717 "admin_qpairs": 2, 00:08:30.717 "io_qpairs": 168, 00:08:30.717 "current_admin_qpairs": 0, 00:08:30.717 "current_io_qpairs": 0, 00:08:30.717 "pending_bdev_io": 0, 00:08:30.717 "completed_nvme_io": 221, 00:08:30.717 "transports": [ 00:08:30.717 { 00:08:30.717 "trtype": "TCP" 00:08:30.717 } 00:08:30.717 ] 00:08:30.717 } 00:08:30.717 ] 00:08:30.717 }' 00:08:30.717 22:24:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:08:30.717 22:24:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:08:30.717 22:24:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:08:30.717 22:24:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:08:30.717 22:24:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:08:30.717 22:24:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:08:30.717 22:24:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:08:30.717 22:24:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:08:30.717 22:24:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:08:30.717 22:24:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # (( 672 > 0 )) 00:08:30.718 22:24:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:08:30.718 22:24:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:08:30.718 22:24:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:08:30.718 22:24:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:30.718 22:24:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@117 -- # sync 00:08:30.718 22:24:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:30.718 22:24:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@120 -- # set +e 00:08:30.718 22:24:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:30.718 22:24:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:30.718 rmmod nvme_tcp 00:08:30.718 rmmod nvme_fabrics 00:08:30.718 rmmod nvme_keyring 00:08:30.718 22:24:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:30.718 22:24:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@124 -- # set -e 00:08:30.718 22:24:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@125 -- # return 0 00:08:30.718 22:24:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@489 -- # '[' -n 4070066 ']' 00:08:30.718 22:24:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@490 -- # killprocess 4070066 00:08:30.718 22:24:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@948 -- # '[' -z 4070066 ']' 00:08:30.718 22:24:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@952 -- # kill -0 4070066 00:08:30.718 22:24:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@953 -- # uname 00:08:30.718 22:24:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:30.718 22:24:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4070066 00:08:30.977 22:24:54 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:30.977 22:24:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:30.977 22:24:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4070066' 00:08:30.977 killing process with pid 4070066 00:08:30.977 22:24:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@967 -- # kill 4070066 00:08:30.977 22:24:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@972 -- # wait 4070066 00:08:30.977 22:24:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:30.977 22:24:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:30.977 22:24:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:30.977 22:24:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:30.977 22:24:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:30.977 22:24:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:30.977 22:24:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:30.977 22:24:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:33.513 22:24:56 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:33.513 00:08:33.513 real 0m32.530s 00:08:33.513 user 1m40.702s 00:08:33.513 sys 0m5.783s 00:08:33.513 22:24:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:33.513 22:24:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:33.513 ************************************ 00:08:33.513 END TEST nvmf_rpc 00:08:33.513 ************************************ 00:08:33.513 22:24:57 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:08:33.513 22:24:57 nvmf_tcp -- nvmf/nvmf.sh@30 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:08:33.513 22:24:57 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:33.513 22:24:57 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:33.514 22:24:57 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:33.514 ************************************ 00:08:33.514 START TEST nvmf_invalid 00:08:33.514 ************************************ 00:08:33.514 22:24:57 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:08:33.514 * Looking for test storage... 
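Before any of the negative tests run, invalid.sh's nvmftestinit stands up the same single-host test bed the trace shows further down (nvmf/common.sh@248-268): the target-side E810 port is moved into its own network namespace so initiator and target can exchange real NVMe/TCP traffic on one machine. In outline, with the commands lifted from that trace; the cvl_0_0/cvl_0_1 names are whatever the ice driver assigned on this rig, and everything runs as root:

    ip netns add cvl_0_0_ns_spdk                        # target gets a private netns
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # move the target port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator side of the link
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port
    ping -c 1 10.0.0.2                                  # sanity-check both directions
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1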
00:08:33.514 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:33.514 22:24:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:33.514 22:24:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:08:33.514 22:24:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:33.514 22:24:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:33.514 22:24:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:33.514 22:24:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:33.514 22:24:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:33.514 22:24:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:33.514 22:24:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:33.514 22:24:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:33.514 22:24:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:33.514 22:24:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:33.514 22:24:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:08:33.514 22:24:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:08:33.514 22:24:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:33.514 22:24:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:33.514 22:24:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:33.514 22:24:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:33.514 22:24:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:33.514 22:24:57 nvmf_tcp.nvmf_invalid -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:33.514 22:24:57 nvmf_tcp.nvmf_invalid -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:33.514 22:24:57 nvmf_tcp.nvmf_invalid -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:33.514 22:24:57 nvmf_tcp.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:33.514 22:24:57 nvmf_tcp.nvmf_invalid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:33.514 22:24:57 nvmf_tcp.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:33.514 22:24:57 nvmf_tcp.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:08:33.514 22:24:57 nvmf_tcp.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:33.514 22:24:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@47 -- # : 0 00:08:33.514 22:24:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:33.514 22:24:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:33.514 22:24:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:33.514 22:24:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:33.514 22:24:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:33.514 22:24:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:33.514 22:24:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:33.514 22:24:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:33.514 22:24:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:08:33.514 22:24:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:33.514 22:24:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:08:33.514 22:24:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:08:33.514 22:24:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:08:33.514 22:24:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:08:33.514 22:24:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:33.514 22:24:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:33.514 22:24:57 nvmf_tcp.nvmf_invalid 
-- nvmf/common.sh@448 -- # prepare_net_devs 00:08:33.514 22:24:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:33.514 22:24:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:33.514 22:24:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:33.514 22:24:57 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:33.514 22:24:57 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:33.514 22:24:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:33.514 22:24:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:33.514 22:24:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@285 -- # xtrace_disable 00:08:33.514 22:24:57 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:08:38.789 22:25:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:38.789 22:25:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@291 -- # pci_devs=() 00:08:38.789 22:25:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:38.789 22:25:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:38.789 22:25:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:38.789 22:25:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:38.789 22:25:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:38.789 22:25:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@295 -- # net_devs=() 00:08:38.789 22:25:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:38.789 22:25:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@296 -- # e810=() 00:08:38.789 22:25:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@296 -- # local -ga e810 00:08:38.789 22:25:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@297 -- # x722=() 00:08:38.789 22:25:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@297 -- # local -ga x722 00:08:38.789 22:25:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@298 -- # mlx=() 00:08:38.789 22:25:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@298 -- # local -ga mlx 00:08:38.789 22:25:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:38.789 22:25:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:38.789 22:25:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:38.789 22:25:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:38.789 22:25:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:38.789 22:25:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:38.789 22:25:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:38.789 22:25:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:38.789 22:25:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:38.789 22:25:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:38.789 22:25:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:38.789 22:25:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@320 -- # 
pci_devs+=("${e810[@]}") 00:08:38.789 22:25:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:38.789 22:25:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:38.789 22:25:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:38.789 22:25:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:38.789 22:25:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:38.789 22:25:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:38.789 22:25:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:08:38.789 Found 0000:86:00.0 (0x8086 - 0x159b) 00:08:38.789 22:25:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:38.789 22:25:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:38.789 22:25:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:38.789 22:25:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:38.789 22:25:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:38.789 22:25:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:38.789 22:25:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:08:38.789 Found 0000:86:00.1 (0x8086 - 0x159b) 00:08:38.789 22:25:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:38.789 22:25:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:38.789 22:25:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:38.789 22:25:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:38.789 22:25:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:38.789 22:25:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:38.789 22:25:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:38.789 22:25:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:38.789 22:25:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:38.789 22:25:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:38.789 22:25:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:38.789 22:25:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:38.789 22:25:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:38.789 22:25:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:38.789 22:25:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:38.789 22:25:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:08:38.789 Found net devices under 0000:86:00.0: cvl_0_0 00:08:38.789 22:25:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:38.789 22:25:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:38.789 22:25:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:38.789 22:25:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:38.789 22:25:02 nvmf_tcp.nvmf_invalid -- 
nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:38.789 22:25:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:38.789 22:25:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:38.789 22:25:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:38.789 22:25:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:08:38.789 Found net devices under 0000:86:00.1: cvl_0_1 00:08:38.789 22:25:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:38.789 22:25:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:38.789 22:25:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # is_hw=yes 00:08:38.789 22:25:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:38.789 22:25:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:38.789 22:25:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:38.789 22:25:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:38.789 22:25:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:38.789 22:25:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:38.789 22:25:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:38.790 22:25:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:38.790 22:25:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:38.790 22:25:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:38.790 22:25:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:38.790 22:25:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:38.790 22:25:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:38.790 22:25:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:38.790 22:25:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:38.790 22:25:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:38.790 22:25:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:38.790 22:25:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:38.790 22:25:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:38.790 22:25:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:38.790 22:25:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:38.790 22:25:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:38.790 22:25:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:38.790 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:38.790 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.168 ms 00:08:38.790 00:08:38.790 --- 10.0.0.2 ping statistics --- 00:08:38.790 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:38.790 rtt min/avg/max/mdev = 0.168/0.168/0.168/0.000 ms 00:08:38.790 22:25:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:38.790 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:38.790 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.237 ms 00:08:38.790 00:08:38.790 --- 10.0.0.1 ping statistics --- 00:08:38.790 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:38.790 rtt min/avg/max/mdev = 0.237/0.237/0.237/0.000 ms 00:08:38.790 22:25:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:38.790 22:25:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@422 -- # return 0 00:08:38.790 22:25:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:38.790 22:25:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:38.790 22:25:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:38.790 22:25:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:38.790 22:25:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:38.790 22:25:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:38.790 22:25:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:38.790 22:25:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:08:38.790 22:25:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:38.790 22:25:02 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:38.790 22:25:02 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:08:38.790 22:25:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@481 -- # nvmfpid=4077689 00:08:38.790 22:25:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:38.790 22:25:02 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@482 -- # waitforlisten 4077689 00:08:38.790 22:25:02 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@829 -- # '[' -z 4077689 ']' 00:08:38.790 22:25:02 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:38.790 22:25:02 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:38.790 22:25:02 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:38.790 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:38.790 22:25:02 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:38.790 22:25:02 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:08:38.790 [2024-07-15 22:25:02.662433] Starting SPDK v24.09-pre git sha1 f8598a71f / DPDK 24.03.0 initialization... 
00:08:38.790 [2024-07-15 22:25:02.662472] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:38.790 EAL: No free 2048 kB hugepages reported on node 1 00:08:38.790 [2024-07-15 22:25:02.722887] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:39.049 [2024-07-15 22:25:02.800914] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:39.049 [2024-07-15 22:25:02.800952] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:39.049 [2024-07-15 22:25:02.800959] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:39.049 [2024-07-15 22:25:02.800965] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:39.049 [2024-07-15 22:25:02.800970] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:39.049 [2024-07-15 22:25:02.801019] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:39.049 [2024-07-15 22:25:02.801043] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:39.049 [2024-07-15 22:25:02.801106] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:39.049 [2024-07-15 22:25:02.801107] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:39.616 22:25:03 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:39.616 22:25:03 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@862 -- # return 0 00:08:39.616 22:25:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:39.616 22:25:03 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:39.616 22:25:03 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:08:39.616 22:25:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:39.616 22:25:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:08:39.616 22:25:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode25269 00:08:39.874 [2024-07-15 22:25:03.665619] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:08:39.874 22:25:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:08:39.874 { 00:08:39.874 "nqn": "nqn.2016-06.io.spdk:cnode25269", 00:08:39.874 "tgt_name": "foobar", 00:08:39.874 "method": "nvmf_create_subsystem", 00:08:39.874 "req_id": 1 00:08:39.874 } 00:08:39.874 Got JSON-RPC error response 00:08:39.874 response: 00:08:39.874 { 00:08:39.874 "code": -32603, 00:08:39.874 "message": "Unable to find target foobar" 00:08:39.874 }' 00:08:39.874 22:25:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:08:39.874 { 00:08:39.874 "nqn": "nqn.2016-06.io.spdk:cnode25269", 00:08:39.874 "tgt_name": "foobar", 00:08:39.874 "method": "nvmf_create_subsystem", 00:08:39.874 "req_id": 1 00:08:39.874 } 00:08:39.874 Got JSON-RPC error response 00:08:39.874 response: 00:08:39.874 { 00:08:39.874 "code": -32603, 00:08:39.874 "message": "Unable to find target foobar" 
00:08:39.874 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:08:39.874 22:25:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:08:39.874 22:25:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode27037 00:08:40.133 [2024-07-15 22:25:03.850298] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode27037: invalid serial number 'SPDKISFASTANDAWESOME' 00:08:40.133 22:25:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:08:40.133 { 00:08:40.133 "nqn": "nqn.2016-06.io.spdk:cnode27037", 00:08:40.133 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:08:40.133 "method": "nvmf_create_subsystem", 00:08:40.133 "req_id": 1 00:08:40.133 } 00:08:40.133 Got JSON-RPC error response 00:08:40.133 response: 00:08:40.133 { 00:08:40.133 "code": -32602, 00:08:40.133 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:08:40.133 }' 00:08:40.133 22:25:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:08:40.133 { 00:08:40.133 "nqn": "nqn.2016-06.io.spdk:cnode27037", 00:08:40.133 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:08:40.133 "method": "nvmf_create_subsystem", 00:08:40.133 "req_id": 1 00:08:40.133 } 00:08:40.133 Got JSON-RPC error response 00:08:40.133 response: 00:08:40.133 { 00:08:40.133 "code": -32602, 00:08:40.133 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:08:40.133 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:08:40.133 22:25:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:08:40.133 22:25:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode24729 00:08:40.133 [2024-07-15 22:25:04.046941] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode24729: invalid model number 'SPDK_Controller' 00:08:40.133 22:25:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:08:40.133 { 00:08:40.133 "nqn": "nqn.2016-06.io.spdk:cnode24729", 00:08:40.133 "model_number": "SPDK_Controller\u001f", 00:08:40.133 "method": "nvmf_create_subsystem", 00:08:40.133 "req_id": 1 00:08:40.133 } 00:08:40.133 Got JSON-RPC error response 00:08:40.133 response: 00:08:40.133 { 00:08:40.133 "code": -32602, 00:08:40.133 "message": "Invalid MN SPDK_Controller\u001f" 00:08:40.133 }' 00:08:40.133 22:25:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:08:40.133 { 00:08:40.133 "nqn": "nqn.2016-06.io.spdk:cnode24729", 00:08:40.133 "model_number": "SPDK_Controller\u001f", 00:08:40.133 "method": "nvmf_create_subsystem", 00:08:40.133 "req_id": 1 00:08:40.133 } 00:08:40.133 Got JSON-RPC error response 00:08:40.133 response: 00:08:40.133 { 00:08:40.133 "code": -32602, 00:08:40.133 "message": "Invalid MN SPDK_Controller\u001f" 00:08:40.133 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:08:40.133 22:25:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:08:40.133 22:25:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:08:40.133 22:25:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' 
'83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:08:40.133 22:25:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:08:40.133 22:25:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:08:40.133 22:25:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:08:40.133 22:25:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:40.133 22:25:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 52 00:08:40.133 22:25:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x34' 00:08:40.133 22:25:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=4 00:08:40.133 22:25:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:40.133 22:25:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:40.133 22:25:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 93 00:08:40.133 22:25:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5d' 00:08:40.133 22:25:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=']' 00:08:40.133 22:25:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:40.133 22:25:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:40.133 22:25:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 102 00:08:40.133 22:25:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x66' 00:08:40.133 22:25:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=f 00:08:40.133 22:25:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:40.133 22:25:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:40.393 22:25:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 119 00:08:40.393 22:25:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x77' 00:08:40.393 22:25:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=w 00:08:40.393 22:25:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:40.393 22:25:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:40.393 22:25:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 40 00:08:40.393 22:25:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x28' 00:08:40.393 22:25:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='(' 00:08:40.393 22:25:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:40.393 22:25:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:40.393 22:25:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 35 00:08:40.393 22:25:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x23' 00:08:40.393 22:25:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='#' 00:08:40.393 22:25:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:40.393 22:25:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:40.393 22:25:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 55 00:08:40.393 22:25:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x37' 00:08:40.393 22:25:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=7 00:08:40.393 22:25:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 
00:08:40.393 22:25:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:40.393 22:25:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 32 00:08:40.393 22:25:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x20' 00:08:40.393 22:25:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=' ' 00:08:40.393 22:25:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:40.393 22:25:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:40.393 22:25:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 67 00:08:40.393 22:25:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x43' 00:08:40.393 22:25:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=C 00:08:40.393 22:25:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:40.393 22:25:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:40.393 22:25:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 51 00:08:40.393 22:25:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x33' 00:08:40.393 22:25:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=3 00:08:40.393 22:25:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:40.393 22:25:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:40.393 22:25:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 37 00:08:40.393 22:25:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x25' 00:08:40.393 22:25:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=% 00:08:40.393 22:25:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:40.393 22:25:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:40.393 22:25:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 75 00:08:40.393 22:25:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4b' 00:08:40.393 22:25:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=K 00:08:40.393 22:25:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:40.393 22:25:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:40.393 22:25:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 95 00:08:40.393 22:25:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5f' 00:08:40.393 22:25:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=_ 00:08:40.393 22:25:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:40.393 22:25:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:40.393 22:25:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 77 00:08:40.393 22:25:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4d' 00:08:40.393 22:25:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=M 00:08:40.393 22:25:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:40.393 22:25:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:40.393 22:25:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 54 00:08:40.393 22:25:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x36' 00:08:40.393 22:25:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=6 00:08:40.393 22:25:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:40.393 22:25:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 
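The per-character records surrounding this point (the loop continues below) all come from invalid.sh's gen_random_s helper: pick an ASCII code from the chars array (32 through 127), render it with printf %x / echo -e, and append it to string until length characters exist; the quoting around values like '(' and '#' above is just xtrace display. A condensed, hypothetical re-sketch of that loop:

    gen_random_s() {
        local length=$1 ll c string=
        for (( ll = 0; ll < length; ll++ )); do
            # ASCII 32..127, mirroring the chars=('32' ... '127') array traced here
            printf -v c '\\x%x' $(( RANDOM % 96 + 32 ))
            string+=$(echo -en "$c")
        done
        echo "$string"
    }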
00:08:40.393 22:25:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 33 00:08:40.393 22:25:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x21' 00:08:40.393 22:25:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='!' 00:08:40.393 22:25:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:40.393 22:25:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:40.393 22:25:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 87 00:08:40.393 22:25:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x57' 00:08:40.393 22:25:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=W 00:08:40.393 22:25:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:40.393 22:25:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:40.393 22:25:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 90 00:08:40.393 22:25:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5a' 00:08:40.393 22:25:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=Z 00:08:40.393 22:25:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:40.393 22:25:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:40.393 22:25:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:08:40.393 22:25:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:08:40.393 22:25:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:08:40.393 22:25:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:40.393 22:25:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:40.393 22:25:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 90 00:08:40.393 22:25:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5a' 00:08:40.393 22:25:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=Z 00:08:40.393 22:25:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:40.393 22:25:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:40.393 22:25:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 34 00:08:40.393 22:25:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x22' 00:08:40.393 22:25:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='"' 00:08:40.393 22:25:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:40.393 22:25:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:40.393 22:25:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[ 4 == \- ]] 00:08:40.393 22:25:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo '4]fw(#7 C3%K_M6!WZ>Z"' 00:08:40.393 22:25:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s '4]fw(#7 C3%K_M6!WZ>Z"' nqn.2016-06.io.spdk:cnode15704 00:08:40.653 [2024-07-15 22:25:04.368049] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode15704: invalid serial number '4]fw(#7 C3%K_M6!WZ>Z"' 00:08:40.653 22:25:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:08:40.653 { 00:08:40.653 "nqn": "nqn.2016-06.io.spdk:cnode15704", 00:08:40.653 "serial_number": "4]fw(#7 C3%K_M6!WZ>Z\"", 00:08:40.653 "method": "nvmf_create_subsystem", 00:08:40.653 "req_id": 1 00:08:40.653 } 00:08:40.653 Got JSON-RPC error response 00:08:40.653 response: 
00:08:40.653 { 00:08:40.653 "code": -32602, 00:08:40.653 "message": "Invalid SN 4]fw(#7 C3%K_M6!WZ>Z\"" 00:08:40.653 }' 00:08:40.653 22:25:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:08:40.653 { 00:08:40.653 "nqn": "nqn.2016-06.io.spdk:cnode15704", 00:08:40.653 "serial_number": "4]fw(#7 C3%K_M6!WZ>Z\"", 00:08:40.653 "method": "nvmf_create_subsystem", 00:08:40.653 "req_id": 1 00:08:40.653 } 00:08:40.653 Got JSON-RPC error response 00:08:40.653 response: 00:08:40.653 { 00:08:40.653 "code": -32602, 00:08:40.653 "message": "Invalid SN 4]fw(#7 C3%K_M6!WZ>Z\"" 00:08:40.653 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:08:40.653 22:25:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:08:40.653 22:25:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:08:40.653 22:25:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:08:40.653 22:25:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:08:40.653 22:25:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:08:40.653 22:25:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:08:40.653 22:25:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:40.653 22:25:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 106 00:08:40.653 22:25:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6a' 00:08:40.653 22:25:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=j 00:08:40.653 22:25:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:40.653 22:25:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:40.653 22:25:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49 00:08:40.653 22:25:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31' 00:08:40.653 22:25:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=1 00:08:40.653 22:25:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:40.653 22:25:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:40.653 22:25:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 54 00:08:40.653 22:25:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x36' 00:08:40.653 22:25:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=6 00:08:40.653 22:25:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:40.653 22:25:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:40.653 22:25:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 124 00:08:40.653 22:25:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7c' 00:08:40.653 22:25:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='|' 00:08:40.653 22:25:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:40.653 22:25:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:40.653 22:25:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # 
printf %x 86 00:08:40.653 22:25:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x56' 00:08:40.653 22:25:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=V 00:08:40.653 22:25:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:40.653 22:25:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:40.653 22:25:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 50 00:08:40.653 22:25:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x32' 00:08:40.653 22:25:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=2 00:08:40.653 22:25:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:40.653 22:25:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:40.653 22:25:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 105 00:08:40.653 22:25:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x69' 00:08:40.653 22:25:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=i 00:08:40.653 22:25:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:40.653 22:25:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:40.653 22:25:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 127 00:08:40.653 22:25:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7f' 00:08:40.653 22:25:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=$'\177' 00:08:40.653 22:25:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:40.653 22:25:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:40.653 22:25:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 87 00:08:40.653 22:25:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x57' 00:08:40.653 22:25:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=W 00:08:40.653 22:25:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:40.653 22:25:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:40.653 22:25:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 75 00:08:40.653 22:25:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4b' 00:08:40.653 22:25:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=K 00:08:40.653 22:25:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:40.653 22:25:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:40.653 22:25:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 93 00:08:40.653 22:25:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5d' 00:08:40.653 22:25:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=']' 00:08:40.653 22:25:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:40.653 22:25:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:40.653 22:25:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 113 00:08:40.653 22:25:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x71' 00:08:40.653 22:25:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=q 00:08:40.653 22:25:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:40.653 22:25:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:40.653 22:25:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 58 00:08:40.653 22:25:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # 
echo -e '\x3a' 00:08:40.653 22:25:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=: 00:08:40.653 22:25:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:40.653 22:25:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:40.653 22:25:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 48 00:08:40.653 22:25:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x30' 00:08:40.653 22:25:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=0 00:08:40.653 22:25:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:40.653 22:25:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:40.653 22:25:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 46 00:08:40.653 22:25:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2e' 00:08:40.653 22:25:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=. 00:08:40.653 22:25:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:40.653 22:25:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:40.653 22:25:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 61 00:08:40.653 22:25:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3d' 00:08:40.653 22:25:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+== 00:08:40.653 22:25:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:40.653 22:25:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:40.653 22:25:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 120 00:08:40.653 22:25:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x78' 00:08:40.653 22:25:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=x 00:08:40.653 22:25:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:40.653 22:25:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:40.653 22:25:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 91 00:08:40.653 22:25:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5b' 00:08:40.653 22:25:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='[' 00:08:40.653 22:25:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:40.653 22:25:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:40.653 22:25:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 111 00:08:40.653 22:25:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6f' 00:08:40.653 22:25:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=o 00:08:40.653 22:25:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:40.653 22:25:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:40.653 22:25:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 126 00:08:40.653 22:25:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7e' 00:08:40.653 22:25:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='~' 00:08:40.653 22:25:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:40.653 22:25:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:40.653 22:25:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 91 00:08:40.653 22:25:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5b' 00:08:40.653 22:25:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # 
string+='[' 00:08:40.653 22:25:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:40.653 22:25:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:40.653 22:25:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 103 00:08:40.653 22:25:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x67' 00:08:40.653 22:25:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=g 00:08:40.653 22:25:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:40.653 22:25:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:40.653 22:25:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 96 00:08:40.653 22:25:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x60' 00:08:40.653 22:25:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='`' 00:08:40.653 22:25:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:40.653 22:25:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:40.653 22:25:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 34 00:08:40.653 22:25:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x22' 00:08:40.653 22:25:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='"' 00:08:40.653 22:25:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:40.653 22:25:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:40.653 22:25:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 96 00:08:40.653 22:25:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x60' 00:08:40.653 22:25:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='`' 00:08:40.653 22:25:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:40.653 22:25:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:40.653 22:25:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 124 00:08:40.653 22:25:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7c' 00:08:40.653 22:25:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='|' 00:08:40.653 22:25:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:40.653 22:25:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:40.653 22:25:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 124 00:08:40.653 22:25:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7c' 00:08:40.653 22:25:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='|' 00:08:40.653 22:25:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:40.653 22:25:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:40.653 22:25:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 47 00:08:40.653 22:25:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2f' 00:08:40.653 22:25:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=/ 00:08:40.653 22:25:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:40.653 22:25:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:40.653 22:25:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 123 00:08:40.653 22:25:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7b' 00:08:40.653 22:25:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='{' 00:08:40.653 22:25:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- 
# (( ll++ )) 00:08:40.653 22:25:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:40.653 22:25:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 70 00:08:40.653 22:25:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x46' 00:08:40.653 22:25:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=F 00:08:40.653 22:25:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:40.653 22:25:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:40.653 22:25:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 44 00:08:40.653 22:25:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2c' 00:08:40.653 22:25:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=, 00:08:40.653 22:25:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:40.653 22:25:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:40.653 22:25:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 104 00:08:40.653 22:25:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x68' 00:08:40.653 22:25:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=h 00:08:40.653 22:25:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:40.653 22:25:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:40.653 22:25:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 36 00:08:40.654 22:25:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x24' 00:08:40.654 22:25:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='$' 00:08:40.654 22:25:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:40.654 22:25:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:40.654 22:25:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 38 00:08:40.654 22:25:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x26' 00:08:40.654 22:25:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='&' 00:08:40.654 22:25:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:40.654 22:25:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:40.654 22:25:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 66 00:08:40.654 22:25:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x42' 00:08:40.654 22:25:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=B 00:08:40.654 22:25:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:40.654 22:25:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:40.912 22:25:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49 00:08:40.912 22:25:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31' 00:08:40.912 22:25:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=1 00:08:40.912 22:25:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:40.912 22:25:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:40.912 22:25:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 32 00:08:40.912 22:25:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x20' 00:08:40.912 22:25:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=' ' 00:08:40.912 22:25:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:40.912 22:25:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll 
< length )) 00:08:40.913 22:25:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 87 00:08:40.913 22:25:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x57' 00:08:40.913 22:25:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=W 00:08:40.913 22:25:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:40.913 22:25:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:40.913 22:25:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 96 00:08:40.913 22:25:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x60' 00:08:40.913 22:25:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='`' 00:08:40.913 22:25:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:40.913 22:25:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:40.913 22:25:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 88 00:08:40.913 22:25:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x58' 00:08:40.913 22:25:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=X 00:08:40.913 22:25:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:40.913 22:25:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:40.913 22:25:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 64 00:08:40.913 22:25:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x40' 00:08:40.913 22:25:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=@ 00:08:40.913 22:25:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:08:40.913 22:25:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:08:40.913 22:25:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[ j == \- ]] 00:08:40.913 22:25:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo 'j16|V2iWK]q:0.=x[o~[g`"`||/{F,h$&B1 W`X@' 00:08:40.913 22:25:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d 'j16|V2iWK]q:0.=x[o~[g`"`||/{F,h$&B1 W`X@' nqn.2016-06.io.spdk:cnode7699 00:08:40.913 [2024-07-15 22:25:04.825645] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode7699: invalid model number 'j16|V2iWK]q:0.=x[o~[g`"`||/{F,h$&B1 W`X@' 00:08:40.913 22:25:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:08:40.913 { 00:08:40.913 "nqn": "nqn.2016-06.io.spdk:cnode7699", 00:08:40.913 "model_number": "j16|V2i\u007fWK]q:0.=x[o~[g`\"`||/{F,h$&B1 W`X@", 00:08:40.913 "method": "nvmf_create_subsystem", 00:08:40.913 "req_id": 1 00:08:40.913 } 00:08:40.913 Got JSON-RPC error response 00:08:40.913 response: 00:08:40.913 { 00:08:40.913 "code": -32602, 00:08:40.913 "message": "Invalid MN j16|V2i\u007fWK]q:0.=x[o~[g`\"`||/{F,h$&B1 W`X@" 00:08:40.913 }' 00:08:40.913 22:25:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:08:40.913 { 00:08:40.913 "nqn": "nqn.2016-06.io.spdk:cnode7699", 00:08:40.913 "model_number": "j16|V2i\u007fWK]q:0.=x[o~[g`\"`||/{F,h$&B1 W`X@", 00:08:40.913 "method": "nvmf_create_subsystem", 00:08:40.913 "req_id": 1 00:08:40.913 } 00:08:40.913 Got JSON-RPC error response 00:08:40.913 response: 00:08:40.913 { 00:08:40.913 "code": -32602, 00:08:40.913 "message": "Invalid MN j16|V2i\u007fWK]q:0.=x[o~[g`\"`||/{F,h$&B1 W`X@" 00:08:40.913 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:08:40.913 22:25:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@62 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:08:41.170 [2024-07-15 22:25:05.010357] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:41.170 22:25:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:08:41.428 22:25:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:08:41.428 22:25:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:08:41.428 22:25:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:08:41.428 22:25:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:08:41.428 22:25:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:08:41.428 [2024-07-15 22:25:05.387596] nvmf_rpc.c: 809:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:08:41.685 22:25:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:08:41.685 { 00:08:41.685 "nqn": "nqn.2016-06.io.spdk:cnode", 00:08:41.685 "listen_address": { 00:08:41.685 "trtype": "tcp", 00:08:41.685 "traddr": "", 00:08:41.685 "trsvcid": "4421" 00:08:41.685 }, 00:08:41.685 "method": "nvmf_subsystem_remove_listener", 00:08:41.685 "req_id": 1 00:08:41.685 } 00:08:41.685 Got JSON-RPC error response 00:08:41.685 response: 00:08:41.685 { 00:08:41.685 "code": -32602, 00:08:41.685 "message": "Invalid parameters" 00:08:41.685 }' 00:08:41.685 22:25:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:08:41.685 { 00:08:41.685 "nqn": "nqn.2016-06.io.spdk:cnode", 00:08:41.685 "listen_address": { 00:08:41.685 "trtype": "tcp", 00:08:41.685 "traddr": "", 00:08:41.685 "trsvcid": "4421" 00:08:41.685 }, 00:08:41.685 "method": "nvmf_subsystem_remove_listener", 00:08:41.685 "req_id": 1 00:08:41.685 } 00:08:41.685 Got JSON-RPC error response 00:08:41.685 response: 00:08:41.685 { 00:08:41.685 "code": -32602, 00:08:41.685 "message": "Invalid parameters" 00:08:41.685 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:08:41.685 22:25:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode14434 -i 0 00:08:41.685 [2024-07-15 22:25:05.564153] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode14434: invalid cntlid range [0-65519] 00:08:41.685 22:25:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:08:41.685 { 00:08:41.685 "nqn": "nqn.2016-06.io.spdk:cnode14434", 00:08:41.685 "min_cntlid": 0, 00:08:41.685 "method": "nvmf_create_subsystem", 00:08:41.685 "req_id": 1 00:08:41.685 } 00:08:41.685 Got JSON-RPC error response 00:08:41.685 response: 00:08:41.685 { 00:08:41.685 "code": -32602, 00:08:41.685 "message": "Invalid cntlid range [0-65519]" 00:08:41.685 }' 00:08:41.685 22:25:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:08:41.685 { 00:08:41.685 "nqn": "nqn.2016-06.io.spdk:cnode14434", 00:08:41.685 "min_cntlid": 0, 00:08:41.685 "method": "nvmf_create_subsystem", 00:08:41.685 "req_id": 1 00:08:41.685 } 00:08:41.685 Got JSON-RPC error response 00:08:41.685 response: 00:08:41.685 { 00:08:41.685 "code": -32602, 00:08:41.685 "message": "Invalid cntlid range [0-65519]" 00:08:41.685 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ 
\r\a\n\g\e* ]] 00:08:41.685 22:25:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2240 -i 65520 00:08:41.943 [2024-07-15 22:25:05.736751] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2240: invalid cntlid range [65520-65519] 00:08:41.943 22:25:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:08:41.943 { 00:08:41.943 "nqn": "nqn.2016-06.io.spdk:cnode2240", 00:08:41.943 "min_cntlid": 65520, 00:08:41.943 "method": "nvmf_create_subsystem", 00:08:41.943 "req_id": 1 00:08:41.943 } 00:08:41.943 Got JSON-RPC error response 00:08:41.943 response: 00:08:41.943 { 00:08:41.943 "code": -32602, 00:08:41.943 "message": "Invalid cntlid range [65520-65519]" 00:08:41.943 }' 00:08:41.943 22:25:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:08:41.943 { 00:08:41.943 "nqn": "nqn.2016-06.io.spdk:cnode2240", 00:08:41.943 "min_cntlid": 65520, 00:08:41.943 "method": "nvmf_create_subsystem", 00:08:41.943 "req_id": 1 00:08:41.943 } 00:08:41.943 Got JSON-RPC error response 00:08:41.943 response: 00:08:41.943 { 00:08:41.943 "code": -32602, 00:08:41.943 "message": "Invalid cntlid range [65520-65519]" 00:08:41.943 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:08:41.943 22:25:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode14478 -I 0 00:08:41.943 [2024-07-15 22:25:05.913350] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode14478: invalid cntlid range [1-0] 00:08:42.202 22:25:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:08:42.202 { 00:08:42.202 "nqn": "nqn.2016-06.io.spdk:cnode14478", 00:08:42.202 "max_cntlid": 0, 00:08:42.202 "method": "nvmf_create_subsystem", 00:08:42.202 "req_id": 1 00:08:42.202 } 00:08:42.202 Got JSON-RPC error response 00:08:42.202 response: 00:08:42.202 { 00:08:42.202 "code": -32602, 00:08:42.202 "message": "Invalid cntlid range [1-0]" 00:08:42.202 }' 00:08:42.202 22:25:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:08:42.202 { 00:08:42.202 "nqn": "nqn.2016-06.io.spdk:cnode14478", 00:08:42.202 "max_cntlid": 0, 00:08:42.202 "method": "nvmf_create_subsystem", 00:08:42.202 "req_id": 1 00:08:42.202 } 00:08:42.202 Got JSON-RPC error response 00:08:42.202 response: 00:08:42.202 { 00:08:42.202 "code": -32602, 00:08:42.202 "message": "Invalid cntlid range [1-0]" 00:08:42.202 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:08:42.202 22:25:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode17740 -I 65520 00:08:42.202 [2024-07-15 22:25:06.106019] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode17740: invalid cntlid range [1-65520] 00:08:42.202 22:25:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:08:42.202 { 00:08:42.202 "nqn": "nqn.2016-06.io.spdk:cnode17740", 00:08:42.202 "max_cntlid": 65520, 00:08:42.202 "method": "nvmf_create_subsystem", 00:08:42.202 "req_id": 1 00:08:42.202 } 00:08:42.202 Got JSON-RPC error response 00:08:42.202 response: 00:08:42.202 { 00:08:42.202 "code": -32602, 00:08:42.202 "message": "Invalid cntlid range [1-65520]" 00:08:42.202 }' 00:08:42.202 22:25:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@80 
-- # [[ request: 00:08:42.202 { 00:08:42.202 "nqn": "nqn.2016-06.io.spdk:cnode17740", 00:08:42.202 "max_cntlid": 65520, 00:08:42.202 "method": "nvmf_create_subsystem", 00:08:42.202 "req_id": 1 00:08:42.202 } 00:08:42.202 Got JSON-RPC error response 00:08:42.202 response: 00:08:42.202 { 00:08:42.202 "code": -32602, 00:08:42.202 "message": "Invalid cntlid range [1-65520]" 00:08:42.202 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:08:42.202 22:25:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode25533 -i 6 -I 5 00:08:42.459 [2024-07-15 22:25:06.294661] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode25533: invalid cntlid range [6-5] 00:08:42.459 22:25:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:08:42.459 { 00:08:42.459 "nqn": "nqn.2016-06.io.spdk:cnode25533", 00:08:42.459 "min_cntlid": 6, 00:08:42.459 "max_cntlid": 5, 00:08:42.459 "method": "nvmf_create_subsystem", 00:08:42.459 "req_id": 1 00:08:42.459 } 00:08:42.459 Got JSON-RPC error response 00:08:42.459 response: 00:08:42.459 { 00:08:42.459 "code": -32602, 00:08:42.459 "message": "Invalid cntlid range [6-5]" 00:08:42.459 }' 00:08:42.459 22:25:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:08:42.459 { 00:08:42.459 "nqn": "nqn.2016-06.io.spdk:cnode25533", 00:08:42.459 "min_cntlid": 6, 00:08:42.459 "max_cntlid": 5, 00:08:42.459 "method": "nvmf_create_subsystem", 00:08:42.459 "req_id": 1 00:08:42.459 } 00:08:42.459 Got JSON-RPC error response 00:08:42.459 response: 00:08:42.459 { 00:08:42.459 "code": -32602, 00:08:42.459 "message": "Invalid cntlid range [6-5]" 00:08:42.459 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:08:42.459 22:25:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:08:42.459 22:25:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:08:42.459 { 00:08:42.460 "name": "foobar", 00:08:42.460 "method": "nvmf_delete_target", 00:08:42.460 "req_id": 1 00:08:42.460 } 00:08:42.460 Got JSON-RPC error response 00:08:42.460 response: 00:08:42.460 { 00:08:42.460 "code": -32602, 00:08:42.460 "message": "The specified target doesn'\''t exist, cannot delete it." 00:08:42.460 }' 00:08:42.460 22:25:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:08:42.460 { 00:08:42.460 "name": "foobar", 00:08:42.460 "method": "nvmf_delete_target", 00:08:42.460 "req_id": 1 00:08:42.460 } 00:08:42.460 Got JSON-RPC error response 00:08:42.460 response: 00:08:42.460 { 00:08:42.460 "code": -32602, 00:08:42.460 "message": "The specified target doesn't exist, cannot delete it." 
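The run of cntlid probes above all exercise the same rule: the controller ID range must stay within 1..65519 with min not above max, and every violation comes back as -32602 "Invalid cntlid range [min-max]" (the nvmf_delete_target match this sits inside concludes just below). One such probe by hand, with the -i/-I flags as traced:

    out=$(/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
        nvmf_create_subsystem nqn.2016-06.io.spdk:cnode25533 -i 6 -I 5 2>&1 || true)
    [[ $out == *"Invalid cntlid range [6-5]"* ]] && echo "min > max rejected as expected"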
00:08:42.460 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:08:42.460 22:25:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:08:42.460 22:25:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:08:42.460 22:25:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:42.460 22:25:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@117 -- # sync 00:08:42.460 22:25:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:42.460 22:25:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@120 -- # set +e 00:08:42.460 22:25:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:42.460 22:25:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:42.460 rmmod nvme_tcp 00:08:42.718 rmmod nvme_fabrics 00:08:42.718 rmmod nvme_keyring 00:08:42.718 22:25:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:42.718 22:25:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@124 -- # set -e 00:08:42.718 22:25:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@125 -- # return 0 00:08:42.718 22:25:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@489 -- # '[' -n 4077689 ']' 00:08:42.718 22:25:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@490 -- # killprocess 4077689 00:08:42.718 22:25:06 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@948 -- # '[' -z 4077689 ']' 00:08:42.718 22:25:06 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@952 -- # kill -0 4077689 00:08:42.718 22:25:06 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@953 -- # uname 00:08:42.718 22:25:06 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:42.718 22:25:06 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4077689 00:08:42.718 22:25:06 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:42.718 22:25:06 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:42.718 22:25:06 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4077689' 00:08:42.718 killing process with pid 4077689 00:08:42.718 22:25:06 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@967 -- # kill 4077689 00:08:42.718 22:25:06 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@972 -- # wait 4077689 00:08:42.976 22:25:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:42.976 22:25:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:42.976 22:25:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:42.976 22:25:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:42.976 22:25:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:42.976 22:25:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:42.976 22:25:06 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:42.976 22:25:06 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:44.901 22:25:08 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:44.901 00:08:44.901 real 0m11.723s 00:08:44.901 user 0m19.468s 00:08:44.901 sys 0m4.940s 00:08:44.901 22:25:08 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:44.901 22:25:08 
nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:08:44.901 ************************************ 00:08:44.901 END TEST nvmf_invalid 00:08:44.901 ************************************ 00:08:44.901 22:25:08 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:08:44.901 22:25:08 nvmf_tcp -- nvmf/nvmf.sh@31 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:08:44.901 22:25:08 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:44.901 22:25:08 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:44.901 22:25:08 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:44.901 ************************************ 00:08:44.901 START TEST nvmf_abort 00:08:44.901 ************************************ 00:08:44.901 22:25:08 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:08:45.160 * Looking for test storage... 00:08:45.160 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:45.160 22:25:08 nvmf_tcp.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:45.160 22:25:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:08:45.160 22:25:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:45.160 22:25:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:45.160 22:25:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:45.160 22:25:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:45.160 22:25:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:45.160 22:25:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:45.160 22:25:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:45.160 22:25:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:45.160 22:25:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:45.160 22:25:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:45.161 22:25:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:08:45.161 22:25:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:08:45.161 22:25:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:45.161 22:25:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:45.161 22:25:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:45.161 22:25:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:45.161 22:25:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:45.161 22:25:08 nvmf_tcp.nvmf_abort -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:45.161 22:25:08 nvmf_tcp.nvmf_abort -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:45.161 22:25:08 nvmf_tcp.nvmf_abort -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:45.161 22:25:08 nvmf_tcp.nvmf_abort -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:45.161 22:25:08 nvmf_tcp.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:45.161 22:25:08 nvmf_tcp.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:45.161 22:25:08 nvmf_tcp.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:08:45.161 22:25:08 nvmf_tcp.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:45.161 22:25:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@47 -- # : 0 00:08:45.161 22:25:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:45.161 22:25:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:45.161 22:25:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:45.161 22:25:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:45.161 22:25:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:45.161 22:25:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:45.161 22:25:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:45.161 22:25:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:45.161 22:25:08 nvmf_tcp.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:45.161 22:25:08 nvmf_tcp.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:08:45.161 22:25:08 nvmf_tcp.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:08:45.161 22:25:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:45.161 22:25:08 
nvmf_tcp.nvmf_abort -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:45.161 22:25:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:45.161 22:25:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:45.161 22:25:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:45.161 22:25:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:45.161 22:25:08 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:45.161 22:25:08 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:45.161 22:25:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:45.161 22:25:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:45.161 22:25:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@285 -- # xtrace_disable 00:08:45.161 22:25:08 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:50.440 22:25:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:50.440 22:25:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@291 -- # pci_devs=() 00:08:50.440 22:25:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:50.440 22:25:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:50.440 22:25:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:50.440 22:25:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:50.440 22:25:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:50.440 22:25:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@295 -- # net_devs=() 00:08:50.440 22:25:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:50.440 22:25:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@296 -- # e810=() 00:08:50.440 22:25:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@296 -- # local -ga e810 00:08:50.440 22:25:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@297 -- # x722=() 00:08:50.440 22:25:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@297 -- # local -ga x722 00:08:50.440 22:25:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@298 -- # mlx=() 00:08:50.440 22:25:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@298 -- # local -ga mlx 00:08:50.440 22:25:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:50.440 22:25:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:50.440 22:25:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:50.440 22:25:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:50.440 22:25:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:50.440 22:25:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:50.440 22:25:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:50.440 22:25:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:50.440 22:25:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:50.440 22:25:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:50.440 22:25:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:50.440 
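What continues below is nvmf/common.sh classifying NICs by PCI vendor:device ID; the e810 list loaded above includes 0x1592 and 0x159b, and this run goes on to match 0x159b ("Found 0000:86:00.0 ... cvl_0_0" below). Roughly the same inventory by hand, assuming lspci is available:

    # Intel E810 functions (vendor 0x8086, device 0x159b), domain-qualified
    lspci -D -d 8086:159b
    # the net device behind each function, where names like cvl_0_0 show up
    for pci in $(lspci -D -d 8086:159b | awk '{print $1}'); do
        ls "/sys/bus/pci/devices/$pci/net/" 2>/dev/null
    done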
22:25:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:50.440 22:25:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:50.440 22:25:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:50.440 22:25:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:50.440 22:25:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:50.440 22:25:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:50.440 22:25:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:50.440 22:25:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:08:50.440 Found 0000:86:00.0 (0x8086 - 0x159b) 00:08:50.440 22:25:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:50.440 22:25:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:50.440 22:25:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:50.440 22:25:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:50.440 22:25:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:50.440 22:25:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:50.440 22:25:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:08:50.440 Found 0000:86:00.1 (0x8086 - 0x159b) 00:08:50.440 22:25:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:50.440 22:25:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:50.440 22:25:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:50.440 22:25:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:50.440 22:25:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:50.440 22:25:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:50.440 22:25:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:50.440 22:25:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:50.440 22:25:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:50.440 22:25:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:50.440 22:25:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:50.440 22:25:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:50.440 22:25:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:50.440 22:25:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:50.440 22:25:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:50.440 22:25:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:08:50.440 Found net devices under 0000:86:00.0: cvl_0_0 00:08:50.440 22:25:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:50.440 22:25:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:50.440 22:25:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:50.440 22:25:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:50.440 22:25:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@389 -- 
# for net_dev in "${!pci_net_devs[@]}" 00:08:50.440 22:25:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:50.440 22:25:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:50.440 22:25:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:50.440 22:25:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:08:50.440 Found net devices under 0000:86:00.1: cvl_0_1 00:08:50.440 22:25:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:50.440 22:25:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:50.440 22:25:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # is_hw=yes 00:08:50.440 22:25:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:50.440 22:25:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:50.440 22:25:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:50.440 22:25:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:50.440 22:25:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:50.440 22:25:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:50.440 22:25:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:50.440 22:25:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:50.440 22:25:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:50.440 22:25:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:50.440 22:25:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:50.440 22:25:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:50.440 22:25:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:50.440 22:25:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:50.440 22:25:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:50.440 22:25:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:50.440 22:25:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:50.440 22:25:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:50.440 22:25:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:50.440 22:25:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:50.440 22:25:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:50.440 22:25:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:50.440 22:25:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:50.440 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:50.440 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.162 ms 00:08:50.440 00:08:50.440 --- 10.0.0.2 ping statistics --- 00:08:50.440 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:50.440 rtt min/avg/max/mdev = 0.162/0.162/0.162/0.000 ms 00:08:50.440 22:25:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:50.440 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:50.440 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.243 ms 00:08:50.440 00:08:50.440 --- 10.0.0.1 ping statistics --- 00:08:50.440 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:50.440 rtt min/avg/max/mdev = 0.243/0.243/0.243/0.000 ms 00:08:50.440 22:25:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:50.440 22:25:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@422 -- # return 0 00:08:50.440 22:25:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:50.440 22:25:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:50.440 22:25:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:50.440 22:25:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:50.440 22:25:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:50.440 22:25:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:50.440 22:25:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:50.440 22:25:14 nvmf_tcp.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:08:50.440 22:25:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:50.440 22:25:14 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:50.440 22:25:14 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:50.440 22:25:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@481 -- # nvmfpid=4081954 00:08:50.440 22:25:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@482 -- # waitforlisten 4081954 00:08:50.440 22:25:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:08:50.440 22:25:14 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@829 -- # '[' -z 4081954 ']' 00:08:50.440 22:25:14 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:50.440 22:25:14 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:50.441 22:25:14 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:50.441 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:50.441 22:25:14 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:50.441 22:25:14 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:50.441 [2024-07-15 22:25:14.350864] Starting SPDK v24.09-pre git sha1 f8598a71f / DPDK 24.03.0 initialization... 
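What nvmf_tcp_init just built: the two E810 ports form a point-to-point test link, with cvl_0_0 moved into the cvl_0_0_ns_spdk namespace as the target side (10.0.0.2) and cvl_0_1 left in the root namespace as the initiator side (10.0.0.1); the two pings verify both directions. Condensed, the sequence traced above is:

    # Condensed restatement of the nvmf_tcp_init commands in the trace.
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port leaves root ns
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                   # root ns -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target ns -> initiator

Running the target in its own namespace lets one host act as both NVMe/TCP target and initiator over real E810 hardware rather than loopback.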
00:08:50.441 [2024-07-15 22:25:14.350904] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:50.441 EAL: No free 2048 kB hugepages reported on node 1 00:08:50.441 [2024-07-15 22:25:14.409306] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:50.699 [2024-07-15 22:25:14.481523] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:50.699 [2024-07-15 22:25:14.481564] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:50.699 [2024-07-15 22:25:14.481571] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:50.699 [2024-07-15 22:25:14.481577] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:50.699 [2024-07-15 22:25:14.481581] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:50.699 [2024-07-15 22:25:14.481657] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:50.699 [2024-07-15 22:25:14.481742] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:50.699 [2024-07-15 22:25:14.481743] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:51.268 22:25:15 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:51.268 22:25:15 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@862 -- # return 0 00:08:51.268 22:25:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:51.268 22:25:15 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:51.268 22:25:15 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:51.268 22:25:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:51.268 22:25:15 nvmf_tcp.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:08:51.268 22:25:15 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:51.268 22:25:15 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:51.268 [2024-07-15 22:25:15.206166] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:51.268 22:25:15 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:51.268 22:25:15 nvmf_tcp.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:08:51.268 22:25:15 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:51.268 22:25:15 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:51.527 Malloc0 00:08:51.527 22:25:15 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:51.527 22:25:15 nvmf_tcp.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:08:51.527 22:25:15 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:51.527 22:25:15 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:51.527 Delay0 00:08:51.527 22:25:15 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:51.527 22:25:15 nvmf_tcp.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 
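Note the bdev stack being built for the abort test: a 64 MiB malloc bdev wrapped by a delay bdev. With all four latency knobs at 1000000 us (-r/-t average and p99 read, -w/-n average and p99 write), every I/O sits in flight for roughly a second, which is what gives the abort example something to cancel. A sketch of the same provisioning done over rpc.py instead of the harness's rpc_cmd wrapper, assuming the default /var/tmp/spdk.sock socket and the names from the trace:

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $RPC bdev_malloc_create 64 4096 -b Malloc0            # 64 MiB, 4 KiB blocks
    $RPC bdev_delay_create -b Malloc0 -d Delay0 \
         -r 1000000 -t 1000000 -w 1000000 -n 1000000      # ~1 s per I/O, all classes
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0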
00:08:51.527 22:25:15 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:51.527 22:25:15 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:51.527 22:25:15 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:51.527 22:25:15 nvmf_tcp.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:08:51.527 22:25:15 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:51.527 22:25:15 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:51.527 22:25:15 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:51.527 22:25:15 nvmf_tcp.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:51.527 22:25:15 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:51.527 22:25:15 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:51.527 [2024-07-15 22:25:15.282112] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:51.527 22:25:15 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:51.527 22:25:15 nvmf_tcp.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:51.527 22:25:15 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:51.527 22:25:15 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:51.527 22:25:15 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:51.527 22:25:15 nvmf_tcp.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:08:51.527 EAL: No free 2048 kB hugepages reported on node 1 00:08:51.527 [2024-07-15 22:25:15.388955] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:08:54.063 Initializing NVMe Controllers 00:08:54.063 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:08:54.063 controller IO queue size 128 less than required 00:08:54.063 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:08:54.063 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:08:54.063 Initialization complete. Launching workers. 
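With Delay0 exported as namespace 1 of cnode0 and both the subsystem and discovery listeners up on 10.0.0.2:4420, the harness launches the abort example: one core, a one second run, 128 reads queued per burst, and abort commands raced against the deliberately slow in-flight I/O. Restated from the trace:

    # -q 128 exceeds the controller's advertised IO queue size, hence the
    # 'queue size 128 less than required' notice above.
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -c 0x1 -t 1 -l warning -q 128

The summary that follows shows every submitted abort getting a response (success plus unsuccess equals the submitted count, failed 0), which is the property under test.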
00:08:54.063 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 42635 00:08:54.063 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 42696, failed to submit 62 00:08:54.063 success 42639, unsuccess 57, failed 0 00:08:54.063 22:25:17 nvmf_tcp.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:54.063 22:25:17 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:54.063 22:25:17 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:54.063 22:25:17 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:54.063 22:25:17 nvmf_tcp.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:08:54.063 22:25:17 nvmf_tcp.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:08:54.063 22:25:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:54.063 22:25:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@117 -- # sync 00:08:54.063 22:25:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:54.063 22:25:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@120 -- # set +e 00:08:54.063 22:25:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:54.063 22:25:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:54.063 rmmod nvme_tcp 00:08:54.063 rmmod nvme_fabrics 00:08:54.063 rmmod nvme_keyring 00:08:54.063 22:25:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:54.063 22:25:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@124 -- # set -e 00:08:54.063 22:25:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@125 -- # return 0 00:08:54.063 22:25:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@489 -- # '[' -n 4081954 ']' 00:08:54.063 22:25:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@490 -- # killprocess 4081954 00:08:54.063 22:25:17 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@948 -- # '[' -z 4081954 ']' 00:08:54.063 22:25:17 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@952 -- # kill -0 4081954 00:08:54.063 22:25:17 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@953 -- # uname 00:08:54.063 22:25:17 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:54.063 22:25:17 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4081954 00:08:54.063 22:25:17 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:08:54.063 22:25:17 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:08:54.063 22:25:17 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4081954' 00:08:54.063 killing process with pid 4081954 00:08:54.063 22:25:17 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@967 -- # kill 4081954 00:08:54.063 22:25:17 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@972 -- # wait 4081954 00:08:54.063 22:25:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:54.063 22:25:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:54.063 22:25:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:54.063 22:25:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:54.063 22:25:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:54.063 22:25:17 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:54.063 22:25:17 nvmf_tcp.nvmf_abort -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:54.063 22:25:17 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:55.967 22:25:19 nvmf_tcp.nvmf_abort -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:55.967 00:08:55.967 real 0m11.075s 00:08:55.967 user 0m13.322s 00:08:55.967 sys 0m5.016s 00:08:55.967 22:25:19 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:55.967 22:25:19 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:55.967 ************************************ 00:08:55.967 END TEST nvmf_abort 00:08:55.967 ************************************ 00:08:56.227 22:25:19 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:08:56.227 22:25:19 nvmf_tcp -- nvmf/nvmf.sh@32 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:08:56.227 22:25:19 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:56.227 22:25:19 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:56.227 22:25:19 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:56.227 ************************************ 00:08:56.227 START TEST nvmf_ns_hotplug_stress 00:08:56.227 ************************************ 00:08:56.227 22:25:20 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:08:56.227 * Looking for test storage... 00:08:56.227 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:56.227 22:25:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:56.227 22:25:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:08:56.227 22:25:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:56.227 22:25:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:56.227 22:25:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:56.227 22:25:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:56.227 22:25:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:56.227 22:25:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:56.227 22:25:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:56.227 22:25:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:56.227 22:25:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:56.227 22:25:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:56.227 22:25:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:08:56.227 22:25:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:08:56.227 22:25:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:56.227 22:25:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:56.227 22:25:20 
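nvmftestfini, traced above between the two tests, mirrors the setup: unload the kernel initiator modules, kill the target by pid, remove the network namespace, and flush the initiator address so the next test's nvmftestinit starts clean. Condensed, with the namespace removal (hidden behind _remove_spdk_ns in the trace) sketched as an assumption:

    modprobe -v -r nvme-tcp                       # also drops nvme_fabrics, nvme_keyring
    kill 4081954                                  # stop nvmf_tgt (pid from the trace)
    ip netns delete cvl_0_0_ns_spdk 2>/dev/null   # assumed content of _remove_spdk_ns
    ip -4 addr flush cvl_0_1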
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:56.227 22:25:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:56.227 22:25:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:56.227 22:25:20 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:56.227 22:25:20 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:56.227 22:25:20 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:56.227 22:25:20 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:56.227 22:25:20 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:08:56.227 22:25:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@47 -- # : 0 00:08:56.227 22:25:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:56.227 22:25:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:56.227 22:25:20
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:56.227 22:25:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:56.227 22:25:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:56.227 22:25:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:56.227 22:25:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:56.227 22:25:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:56.227 22:25:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:56.227 22:25:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:08:56.227 22:25:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:56.227 22:25:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:56.227 22:25:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:56.227 22:25:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:56.227 22:25:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:56.227 22:25:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:56.227 22:25:20 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:56.227 22:25:20 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:56.227 22:25:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:56.227 22:25:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:56.227 22:25:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:08:56.227 22:25:20 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:09:01.507 22:25:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:01.507 22:25:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:09:01.507 22:25:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:01.507 22:25:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:01.507 22:25:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:01.507 22:25:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:01.507 22:25:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:01.507 22:25:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # net_devs=() 00:09:01.507 22:25:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:01.507 22:25:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # e810=() 00:09:01.507 22:25:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # local -ga e810 00:09:01.507 22:25:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # x722=() 00:09:01.507 22:25:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # local -ga x722 00:09:01.507 22:25:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # mlx=() 00:09:01.507 22:25:25 nvmf_tcp.nvmf_ns_hotplug_stress 
-- nvmf/common.sh@298 -- # local -ga mlx 00:09:01.507 22:25:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:01.507 22:25:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:01.507 22:25:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:01.507 22:25:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:01.507 22:25:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:01.507 22:25:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:01.507 22:25:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:01.507 22:25:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:01.507 22:25:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:01.507 22:25:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:01.507 22:25:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:01.507 22:25:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:01.507 22:25:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:01.507 22:25:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:01.507 22:25:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:01.507 22:25:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:01.507 22:25:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:01.507 22:25:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:01.507 22:25:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:09:01.507 Found 0000:86:00.0 (0x8086 - 0x159b) 00:09:01.507 22:25:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:01.507 22:25:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:01.507 22:25:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:01.507 22:25:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:01.507 22:25:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:01.507 22:25:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:01.507 22:25:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:09:01.507 Found 0000:86:00.1 (0x8086 - 0x159b) 00:09:01.507 22:25:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:01.507 22:25:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:01.507 22:25:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:01.507 22:25:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:01.507 22:25:25 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:01.507 22:25:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:01.507 22:25:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:01.507 22:25:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:01.507 22:25:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:01.507 22:25:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:01.507 22:25:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:01.507 22:25:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:01.507 22:25:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:01.507 22:25:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:01.507 22:25:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:01.507 22:25:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:09:01.507 Found net devices under 0000:86:00.0: cvl_0_0 00:09:01.507 22:25:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:01.507 22:25:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:01.507 22:25:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:01.507 22:25:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:01.507 22:25:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:01.507 22:25:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:01.507 22:25:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:01.507 22:25:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:01.507 22:25:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:09:01.507 Found net devices under 0000:86:00.1: cvl_0_1 00:09:01.507 22:25:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:01.507 22:25:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:01.507 22:25:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:09:01.507 22:25:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:01.507 22:25:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:01.507 22:25:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:01.507 22:25:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:01.507 22:25:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:01.507 22:25:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:01.508 22:25:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:01.508 22:25:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:01.508 22:25:25 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:01.508 22:25:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:01.508 22:25:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:01.508 22:25:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:01.508 22:25:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:01.508 22:25:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:01.508 22:25:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:01.508 22:25:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:01.508 22:25:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:01.508 22:25:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:01.508 22:25:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:01.508 22:25:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:01.508 22:25:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:01.508 22:25:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:01.508 22:25:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:01.508 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:01.508 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.186 ms 00:09:01.508 00:09:01.508 --- 10.0.0.2 ping statistics --- 00:09:01.508 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:01.508 rtt min/avg/max/mdev = 0.186/0.186/0.186/0.000 ms 00:09:01.508 22:25:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:01.508 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:01.508 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.197 ms 00:09:01.508 00:09:01.508 --- 10.0.0.1 ping statistics --- 00:09:01.508 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:01.508 rtt min/avg/max/mdev = 0.197/0.197/0.197/0.000 ms 00:09:01.508 22:25:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:01.508 22:25:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # return 0 00:09:01.508 22:25:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:01.508 22:25:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:01.508 22:25:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:01.508 22:25:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:01.508 22:25:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:01.508 22:25:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:01.508 22:25:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:01.508 22:25:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:09:01.508 22:25:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:01.508 22:25:25 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:01.508 22:25:25 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:09:01.508 22:25:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # nvmfpid=4085907 00:09:01.508 22:25:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # waitforlisten 4085907 00:09:01.508 22:25:25 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:09:01.508 22:25:25 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@829 -- # '[' -z 4085907 ']' 00:09:01.508 22:25:25 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:01.508 22:25:25 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:01.508 22:25:25 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:01.508 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:01.508 22:25:25 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:01.508 22:25:25 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:09:01.508 [2024-07-15 22:25:25.343705] Starting SPDK v24.09-pre git sha1 f8598a71f / DPDK 24.03.0 initialization... 
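The stress target starts exactly as the abort target did, wrapped in the namespace (NVMF_APP is prefixed with ip netns exec cvl_0_0_ns_spdk; the pid is 4085907 here). The lines that follow provision cnode1 with two namespaces, Delay0 plus a resizable null bdev NULL1, start a 30 second spdk_nvme_perf randread job that continues past errored I/O (-Q 1000), and then hot-cycle namespace 1 underneath it. The repeating cycle, sketched as a loop over the rpc.py calls the harness inlines step by step in the rest of this trace:

    # Sketch of the hotplug cycle repeated below while perf runs.
    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    PERF_PID=4086344                          # pid from the trace
    null_size=1000
    while kill -0 "$PERF_PID" 2>/dev/null; do
        $RPC nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1   # detach nsid 1
        $RPC nvmf_subsystem_add_ns    nqn.2016-06.io.spdk:cnode1 Delay0
        null_size=$((null_size + 1))
        $RPC bdev_null_resize NULL1 "$null_size"                     # 1001, 1002, ...
    done

The initiator-side 'Read completed with error (sct=0, sc=11)' bursts, rate-limited as 'Message suppressed 999 times', are the expected completions while namespace 1 is detached; the loop keeps cycling for as long as the perf process stays alive (kill -0 $PERF_PID).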
00:09:01.508 [2024-07-15 22:25:25.343746] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:01.508 EAL: No free 2048 kB hugepages reported on node 1 00:09:01.508 [2024-07-15 22:25:25.400271] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:01.766 [2024-07-15 22:25:25.479493] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:01.766 [2024-07-15 22:25:25.479528] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:01.766 [2024-07-15 22:25:25.479535] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:01.767 [2024-07-15 22:25:25.479541] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:01.767 [2024-07-15 22:25:25.479546] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:01.767 [2024-07-15 22:25:25.479610] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:01.767 [2024-07-15 22:25:25.479693] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:01.767 [2024-07-15 22:25:25.479694] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:02.332 22:25:26 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:02.332 22:25:26 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@862 -- # return 0 00:09:02.332 22:25:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:02.332 22:25:26 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:02.332 22:25:26 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:09:02.332 22:25:26 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:02.332 22:25:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:09:02.332 22:25:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:02.591 [2024-07-15 22:25:26.331853] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:02.591 22:25:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:09:02.591 22:25:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:02.849 [2024-07-15 22:25:26.709176] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:02.849 22:25:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:03.107 22:25:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b 
Malloc0 00:09:03.365 Malloc0 00:09:03.365 22:25:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:09:03.365 Delay0 00:09:03.365 22:25:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:03.623 22:25:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:09:03.882 NULL1 00:09:03.882 22:25:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:09:03.882 22:25:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:09:03.882 22:25:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=4086344 00:09:03.882 22:25:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4086344 00:09:03.882 22:25:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:04.140 EAL: No free 2048 kB hugepages reported on node 1 00:09:04.140 Read completed with error (sct=0, sc=11) 00:09:04.140 22:25:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:04.140 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:04.140 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:04.401 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:04.401 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:04.401 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:04.401 22:25:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:09:04.401 22:25:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:09:04.661 true 00:09:04.661 22:25:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4086344 00:09:04.661 22:25:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:05.601 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:05.601 22:25:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:05.601 22:25:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:09:05.601 22:25:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:09:05.860 true 00:09:05.860 22:25:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4086344 00:09:05.860 22:25:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:05.860 22:25:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:06.139 22:25:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:09:06.139 22:25:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:09:06.399 true 00:09:06.399 22:25:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4086344 00:09:06.399 22:25:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:06.658 22:25:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:06.658 22:25:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:09:06.658 22:25:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:09:06.919 true 00:09:06.919 22:25:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4086344 00:09:06.919 22:25:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:07.178 22:25:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:07.178 22:25:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:09:07.178 22:25:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:09:07.437 true 00:09:07.437 22:25:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4086344 00:09:07.437 22:25:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:08.814 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:08.814 22:25:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:08.814 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:08.814 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:08.814 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:08.814 Message suppressed 
999 times: Read completed with error (sct=0, sc=11) 00:09:08.814 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:08.814 22:25:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:09:08.814 22:25:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:09:09.072 true 00:09:09.072 22:25:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4086344 00:09:09.072 22:25:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:10.008 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:10.008 22:25:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:10.008 22:25:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:09:10.008 22:25:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:09:10.267 true 00:09:10.267 22:25:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4086344 00:09:10.267 22:25:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:10.267 22:25:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:10.525 22:25:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:09:10.525 22:25:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:09:10.784 true 00:09:10.784 22:25:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4086344 00:09:10.784 22:25:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:11.042 22:25:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:11.042 22:25:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:09:11.043 22:25:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:09:11.301 true 00:09:11.301 22:25:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4086344 00:09:11.301 22:25:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:11.559 22:25:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:11.818 22:25:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:09:11.818 22:25:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:09:11.818 true 00:09:11.818 22:25:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4086344 00:09:11.818 22:25:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:12.077 22:25:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:12.335 22:25:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:09:12.335 22:25:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:09:12.335 true 00:09:12.335 22:25:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4086344 00:09:12.335 22:25:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:12.593 22:25:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:12.852 22:25:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:09:12.852 22:25:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:09:12.852 true 00:09:12.852 22:25:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4086344 00:09:12.852 22:25:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:14.230 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:14.230 22:25:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:14.230 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:14.230 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:14.230 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:14.230 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:14.230 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:14.230 22:25:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:09:14.230 22:25:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:09:14.489 true 00:09:14.489 22:25:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4086344 00:09:14.489 22:25:38 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:15.425 22:25:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:15.425 22:25:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:09:15.425 22:25:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:09:15.684 true 00:09:15.684 22:25:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4086344 00:09:15.684 22:25:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:15.944 22:25:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:15.944 22:25:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:09:15.944 22:25:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:09:16.203 true 00:09:16.203 22:25:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4086344 00:09:16.203 22:25:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:17.579 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:17.579 22:25:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:17.579 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:17.579 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:17.579 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:17.579 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:17.579 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:17.579 22:25:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:09:17.579 22:25:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:09:17.836 true 00:09:17.836 22:25:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4086344 00:09:17.836 22:25:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:18.770 22:25:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:18.770 22:25:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # 
null_size=1017 00:09:18.770 22:25:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:09:19.028 true 00:09:19.028 22:25:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4086344 00:09:19.028 22:25:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:19.285 22:25:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:19.285 22:25:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:09:19.285 22:25:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:09:19.542 true 00:09:19.542 22:25:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4086344 00:09:19.542 22:25:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:20.919 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:20.919 22:25:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:20.919 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:20.919 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:20.919 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:20.919 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:20.919 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:20.919 22:25:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:09:20.919 22:25:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:09:21.178 true 00:09:21.178 22:25:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4086344 00:09:21.178 22:25:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:22.114 22:25:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:22.114 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:22.114 22:25:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:09:22.114 22:25:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:09:22.373 true 00:09:22.373 22:25:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4086344 00:09:22.373 22:25:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:22.631 22:25:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:22.631 22:25:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:09:22.631 22:25:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:09:22.889 true 00:09:22.889 22:25:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4086344 00:09:22.889 22:25:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:24.269 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:24.269 22:25:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:24.269 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:24.269 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:24.269 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:24.269 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:24.269 22:25:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:09:24.269 22:25:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:09:24.528 true 00:09:24.528 22:25:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4086344 00:09:24.528 22:25:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:25.096 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:25.355 22:25:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:25.355 22:25:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:09:25.355 22:25:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:09:25.615 true 00:09:25.615 22:25:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4086344 00:09:25.615 22:25:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:25.903 22:25:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:25.903 22:25:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:09:25.903 22:25:49 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:09:26.163 true 00:09:26.163 22:25:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4086344 00:09:26.163 22:25:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:26.422 22:25:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:26.422 22:25:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:09:26.422 22:25:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:09:26.680 true 00:09:26.680 22:25:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4086344 00:09:26.680 22:25:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:26.939 22:25:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:27.198 22:25:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:09:27.198 22:25:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:09:27.198 true 00:09:27.198 22:25:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4086344 00:09:27.198 22:25:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:28.600 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:28.600 22:25:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:28.600 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:28.600 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:28.600 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:28.600 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:28.600 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:28.600 22:25:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:09:28.600 22:25:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:09:28.859 true 00:09:28.859 22:25:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4086344 00:09:28.859 22:25:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:29.800 22:25:53 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:29.800 22:25:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:09:29.800 22:25:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:09:30.059 true 00:09:30.059 22:25:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4086344 00:09:30.059 22:25:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:30.318 22:25:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:30.318 22:25:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:09:30.318 22:25:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:09:30.577 true 00:09:30.577 22:25:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4086344 00:09:30.577 22:25:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:30.836 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:30.836 22:25:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:30.836 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:30.836 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:30.836 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:30.836 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:30.836 22:25:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:09:30.836 22:25:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:09:31.094 true 00:09:31.095 22:25:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4086344 00:09:31.095 22:25:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:32.030 22:25:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:32.030 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:32.030 22:25:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1031 00:09:32.030 22:25:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031 00:09:32.289 true 
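The stretch of trace above is the single-namespace hotplug loop of ns_hotplug_stress.sh (script lines 44-50 in the xtrace markers): for as long as the background I/O process (PID 4086344 in this run) is alive, namespace 1 is detached, the Delay0 bdev is attached again, and the NULL1 bdev is grown by one unit per pass (null_size runs 1006 -> 1033 here). The recurring "Message suppressed 999 times: Read completed with error (sct=0, sc=11)" lines are reads failing while the namespace is momentarily gone (sct=0/sc=11 corresponds to NVMe generic status 0x0b, Invalid Namespace or Format), which is exactly the hotplug condition the test is meant to provoke. A minimal sketch of the loop, reconstructed from the traced commands rather than copied from the script, with illustrative variable names:

    # Sketch only: reconstructed from the xtrace above, not the verbatim script.
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    perf_pid=4086344    # background I/O generator PID from this run
    null_size=1005
    while kill -0 "$perf_pid" 2>/dev/null; do                           # sh@44: loop while I/O still runs
        "$rpc" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1    # sh@45: detach namespace 1
        "$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0  # sh@46: attach Delay0 again
        null_size=$((null_size + 1))                                    # sh@49
        "$rpc" bdev_null_resize NULL1 "$null_size"                      # sh@50: grow NULL1; prints "true"
    done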
00:09:32.289 22:25:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4086344
00:09:32.289 22:25:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:09:32.549 22:25:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:09:32.808 22:25:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1032
00:09:32.808 22:25:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1032
00:09:32.808 true
00:09:32.808 22:25:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4086344
00:09:32.808 22:25:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:09:34.187 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:09:34.187 22:25:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:09:34.187 22:25:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1033
00:09:34.187 22:25:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1033
00:09:34.187 Initializing NVMe Controllers
00:09:34.187 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:09:34.187 Controller IO queue size 128, less than required.
00:09:34.187 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:09:34.187 Controller IO queue size 128, less than required.
00:09:34.187 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:09:34.187 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:09:34.187 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:09:34.187 Initialization complete. Launching workers.
00:09:34.187 ========================================================
00:09:34.187 Latency(us)
00:09:34.187 Device Information                                                      :       IOPS      MiB/s    Average        min        max
00:09:34.187 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0:    1773.70       0.87   41979.66    1815.13 1112932.60
00:09:34.187 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0:   14614.07       7.14    8759.45    2005.32  454777.92
00:09:34.187 ========================================================
00:09:34.187 Total                                                                   :   16387.77       8.00   12354.98    1815.13 1112932.60
00:09:34.187
00:09:34.446 true
00:09:34.446 22:25:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 4086344
00:09:34.446 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (4086344) - No such process
00:09:34.446 22:25:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 4086344
00:09:34.446 22:25:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:09:34.705 22:25:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:09:34.705 22:25:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8
00:09:34.705 22:25:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=()
00:09:34.705 22:25:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 ))
00:09:34.705 22:25:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:09:34.705 22:25:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096
00:09:34.964 null0
00:09:34.964 22:25:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:09:34.964 22:25:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:09:34.964 22:25:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096
00:09:35.223 null1
00:09:35.223 22:25:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:09:35.223 22:25:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:09:35.223 22:25:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096
00:09:35.223 null2
00:09:35.223 22:25:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:09:35.223 22:25:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:09:35.223 22:25:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096
00:09:35.482 null3
00:09:35.482 22:25:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:09:35.482 22:25:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:09:35.482 22:25:59 nvmf_tcp.nvmf_ns_hotplug_stress --
target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:09:35.741 null4 00:09:35.741 22:25:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:09:35.741 22:25:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:09:35.741 22:25:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:09:35.741 null5 00:09:35.741 22:25:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:09:35.741 22:25:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:09:35.741 22:25:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:09:36.000 null6 00:09:36.000 22:25:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:09:36.000 22:25:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:09:36.000 22:25:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:09:36.261 null7 00:09:36.261 22:26:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:09:36.261 22:26:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:09:36.261 22:26:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:09:36.261 22:26:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:09:36.261 22:26:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:09:36.261 22:26:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:09:36.261 22:26:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:09:36.261 22:26:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:09:36.261 22:26:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:09:36.261 22:26:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:09:36.261 22:26:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:36.261 22:26:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:36.261 22:26:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
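The latency summary printed above (just before the worker-launch trace) is internally consistent: the Total row is the IOPS-weighted average of the two per-namespace rows, min and max carry over from the row with the extremes (NSID 1 in both cases), and the MiB/s column sums to 8.01 against the reported 8.00 only because of rounding. The slow, low-IOPS NSID 1 row plausibly belongs to the Delay0 namespace that was being detached and re-attached throughout the run. A quick check, with the values copied from the table:

    awk 'BEGIN {
        i1 = 1773.70;  a1 = 41979.66   # NSID 1: IOPS, average latency (us)
        i2 = 14614.07; a2 = 8759.45    # NSID 2: IOPS, average latency (us)
        printf "IOPS %.2f\n", i1 + i2                          # 16387.77, as in the Total row
        printf "avg  %.2f us\n", (i1*a1 + i2*a2) / (i1 + i2)   # ~12354.98, as in the Total row
    }'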
00:09:36.261 22:26:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:09:36.261 22:26:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:09:36.261 22:26:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:09:36.261 22:26:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:09:36.261 22:26:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:09:36.261 22:26:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:36.261 22:26:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:36.261 22:26:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:09:36.261 22:26:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:09:36.261 22:26:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:09:36.261 22:26:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:09:36.261 22:26:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:09:36.261 22:26:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:09:36.261 22:26:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:36.261 22:26:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:36.261 22:26:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:09:36.261 22:26:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:09:36.261 22:26:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:09:36.261 22:26:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:09:36.261 22:26:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:09:36.261 22:26:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:09:36.261 22:26:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:36.261 22:26:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:36.261 22:26:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
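From script line 58 onward the trace is the parallel phase of the test: eight null bdevs are created (the bdev_null_create arguments are name, size in MB, block size, so 100 MB with 4096-byte blocks each), then one add_remove worker per bdev is forked and its PID recorded so the script can wait for all of them. A sketch of that launch sequence as reconstructed from the xtrace (add_remove itself is sketched after the next trace block):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    nthreads=8
    pids=()
    for ((i = 0; i < nthreads; i++)); do              # sh@59
        "$rpc" bdev_null_create "null$i" 100 4096     # sh@60: 100 MB, 4096-byte blocks
    done
    for ((i = 0; i < nthreads; i++)); do              # sh@62
        add_remove $((i + 1)) "null$i" &              # sh@63: nsid 1..8 against null0..null7
        pids+=($!)                                    # sh@64: collect worker PIDs
    done
    wait "${pids[@]}"                                 # sh@66: wait 4091943 4091945 ... in this run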
00:09:36.261 22:26:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:09:36.261 22:26:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:09:36.261 22:26:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:09:36.261 22:26:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:09:36.261 22:26:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:09:36.261 22:26:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:36.261 22:26:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:36.261 22:26:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:09:36.261 22:26:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:09:36.261 22:26:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:09:36.261 22:26:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:09:36.261 22:26:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:09:36.261 22:26:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:09:36.261 22:26:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:36.261 22:26:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:36.261 22:26:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:09:36.261 22:26:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:09:36.261 22:26:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:09:36.261 22:26:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:09:36.261 22:26:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:09:36.261 22:26:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:09:36.261 22:26:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:36.261 22:26:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:36.261 22:26:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
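Each forked worker runs the add_remove function traced at script lines 14-18: ten rounds of attaching its null bdev at a fixed NSID and detaching it again, so all eight namespaces of nqn.2016-06.io.spdk:cnode1 churn concurrently; that concurrency is why remove_ns calls for different NSIDs interleave in the trace that follows. Reconstructed sketch (assumes the rpc variable from the previous sketch; not the verbatim script):

    add_remove() {
        local nsid=$1 bdev=$2                                                          # sh@14
        for ((i = 0; i < 10; i++)); do                                                 # sh@16
            "$rpc" nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev" # sh@17
            "$rpc" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"         # sh@18
        done
    }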
00:09:36.261 22:26:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:09:36.261 22:26:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:09:36.261 22:26:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:09:36.261 22:26:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 4091943 4091945 4091948 4091951 4091955 4091958 4091961 4091965 00:09:36.261 22:26:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:09:36.261 22:26:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:09:36.261 22:26:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:36.261 22:26:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:36.522 22:26:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:36.522 22:26:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:36.522 22:26:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:36.522 22:26:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:36.522 22:26:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:36.522 22:26:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:36.522 22:26:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:36.522 22:26:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:36.522 22:26:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:36.522 22:26:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:36.522 22:26:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:36.522 22:26:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:36.522 22:26:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:36.522 22:26:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 
nqn.2016-06.io.spdk:cnode1 null3 00:09:36.782 22:26:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:36.782 22:26:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:36.782 22:26:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:36.782 22:26:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:36.782 22:26:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:36.782 22:26:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:36.782 22:26:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:36.782 22:26:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:36.782 22:26:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:36.782 22:26:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:36.782 22:26:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:36.782 22:26:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:36.782 22:26:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:36.782 22:26:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:36.782 22:26:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:36.782 22:26:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:36.782 22:26:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:36.782 22:26:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:36.782 22:26:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:36.782 22:26:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:36.782 22:26:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:36.782 22:26:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:36.782 22:26:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:36.782 22:26:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:36.783 22:26:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:36.783 22:26:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:37.042 22:26:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:37.042 22:26:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:37.042 22:26:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:37.042 22:26:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:37.042 22:26:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:37.042 22:26:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:37.042 22:26:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:37.042 22:26:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:37.042 22:26:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:37.042 22:26:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:37.042 22:26:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:37.042 22:26:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:37.042 22:26:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:37.043 22:26:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:37.043 22:26:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:37.043 22:26:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:37.043 22:26:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:37.043 22:26:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:37.043 22:26:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:37.043 22:26:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:37.043 22:26:00 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:37.043 22:26:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:37.043 22:26:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:37.043 22:26:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:37.302 22:26:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:37.302 22:26:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:37.302 22:26:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:37.302 22:26:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:37.302 22:26:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:37.302 22:26:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:37.302 22:26:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:37.302 22:26:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:37.302 22:26:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:37.302 22:26:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:37.302 22:26:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:37.302 22:26:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:37.302 22:26:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:37.302 22:26:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:37.302 22:26:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:37.302 22:26:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:37.302 22:26:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:37.302 22:26:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:37.302 22:26:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:37.302 22:26:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:37.302 22:26:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:37.302 22:26:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:37.302 22:26:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:37.302 22:26:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:37.302 22:26:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:37.302 22:26:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:37.303 22:26:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:37.303 22:26:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:37.303 22:26:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:37.303 22:26:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:37.303 22:26:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:37.303 22:26:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:37.561 22:26:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:37.561 22:26:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:37.561 22:26:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:37.561 22:26:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:37.561 22:26:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:37.561 22:26:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:37.561 22:26:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:37.561 22:26:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:37.820 22:26:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:37.820 22:26:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:37.820 22:26:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:37.820 22:26:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:37.820 22:26:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:37.820 22:26:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:37.820 22:26:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:37.820 22:26:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:37.820 22:26:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:37.820 22:26:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:37.820 22:26:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:37.820 22:26:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:37.820 22:26:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:37.820 22:26:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:37.820 22:26:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:37.820 22:26:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:37.820 22:26:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:37.820 22:26:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:37.820 22:26:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:37.820 22:26:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:37.820 22:26:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:37.820 22:26:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:37.820 22:26:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:37.820 
22:26:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:38.080 22:26:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:38.080 22:26:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:38.080 22:26:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:38.080 22:26:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:38.080 22:26:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:38.080 22:26:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:38.080 22:26:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:38.080 22:26:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:38.080 22:26:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:38.080 22:26:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:38.080 22:26:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:38.080 22:26:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:38.080 22:26:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:38.080 22:26:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:38.080 22:26:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:38.080 22:26:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:38.080 22:26:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:38.080 22:26:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:38.080 22:26:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:38.080 22:26:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:38.080 22:26:02 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:38.080 22:26:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:38.080 22:26:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:38.080 22:26:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:38.080 22:26:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:38.080 22:26:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:38.080 22:26:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:38.080 22:26:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:38.080 22:26:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:38.080 22:26:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:38.080 22:26:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:38.080 22:26:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:38.339 22:26:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:38.339 22:26:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:38.339 22:26:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:38.339 22:26:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:38.339 22:26:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:38.340 22:26:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:38.340 22:26:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:38.340 22:26:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:38.599 22:26:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # 
(( ++i )) 00:09:38.599 22:26:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:38.599 22:26:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:38.599 22:26:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:38.599 22:26:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:38.599 22:26:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:38.599 22:26:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:38.599 22:26:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:38.599 22:26:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:38.599 22:26:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:38.599 22:26:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:38.599 22:26:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:38.599 22:26:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:38.599 22:26:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:38.599 22:26:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:38.599 22:26:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:38.599 22:26:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:38.599 22:26:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:38.599 22:26:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:38.599 22:26:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:38.599 22:26:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:38.599 22:26:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:38.599 22:26:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:38.599 22:26:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:38.599 22:26:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:38.599 
22:26:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:38.858 22:26:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:38.858 22:26:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:38.858 22:26:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:38.858 22:26:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:38.858 22:26:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:38.858 22:26:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:38.858 22:26:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:38.858 22:26:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:38.858 22:26:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:38.858 22:26:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:38.858 22:26:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:38.858 22:26:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:38.858 22:26:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:38.858 22:26:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:38.858 22:26:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:38.858 22:26:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:38.858 22:26:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:38.858 22:26:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:38.858 22:26:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:38.858 22:26:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:38.859 22:26:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:38.859 22:26:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:38.859 22:26:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:38.859 22:26:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:38.859 22:26:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:38.859 22:26:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:38.859 22:26:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:38.859 22:26:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:38.859 22:26:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:38.859 22:26:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:39.118 22:26:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:39.118 22:26:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:39.118 22:26:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:39.118 22:26:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:39.118 22:26:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:39.118 22:26:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:39.118 22:26:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:39.118 22:26:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:39.384 22:26:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:39.384 22:26:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:39.384 22:26:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:39.384 22:26:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- 
# (( ++i )) 00:09:39.384 22:26:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:39.384 22:26:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:39.384 22:26:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:39.384 22:26:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:39.384 22:26:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:39.384 22:26:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:39.384 22:26:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:39.384 22:26:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:39.385 22:26:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:39.385 22:26:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:39.385 22:26:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:39.385 22:26:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:39.385 22:26:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:39.385 22:26:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:39.385 22:26:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:39.385 22:26:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:39.385 22:26:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:39.385 22:26:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:39.385 22:26:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:39.385 22:26:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:39.385 22:26:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:39.385 22:26:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:39.385 22:26:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:39.385 
22:26:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:39.385 22:26:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:39.385 22:26:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:39.385 22:26:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:39.646 22:26:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:39.646 22:26:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:39.646 22:26:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:39.646 22:26:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:39.646 22:26:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:39.646 22:26:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:39.646 22:26:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:39.646 22:26:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:39.646 22:26:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:39.646 22:26:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:39.646 22:26:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:39.646 22:26:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:39.646 22:26:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:39.646 22:26:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:39.646 22:26:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:39.646 22:26:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:39.646 22:26:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:39.646 22:26:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:39.646 22:26:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:39.646 22:26:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:39.646 22:26:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:39.646 22:26:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:39.646 22:26:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:39.646 22:26:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:39.646 22:26:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:39.904 22:26:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:39.904 22:26:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:39.904 22:26:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:39.904 22:26:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:39.904 22:26:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:39.904 22:26:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:39.904 22:26:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:39.904 22:26:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:40.161 22:26:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:40.161 22:26:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:40.161 22:26:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:40.161 22:26:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:40.161 22:26:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:40.161 22:26:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:40.161 22:26:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:40.161 22:26:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:40.161 22:26:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
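The @16, @17, and @18 trace entries that fill this stretch of the log all come from three lines of target/ns_hotplug_stress.sh: a loop counter check, an rpc.py nvmf_subsystem_add_ns call, and an rpc.py nvmf_subsystem_remove_ns call. A minimal reconstruction of what produces this pattern is sketched below; the eight-way parallel structure, the variable names, and the null$((n - 1)) bdev naming are inferences from the interleaved xtrace output above, not the verbatim SPDK script.

rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
subsys=nqn.2016-06.io.spdk:cnode1

# One background worker per namespace ID; the shuffled add/remove ordering in
# the log suggests eight loops run concurrently and interleave their output.
for n in {1..8}; do
	(
		for ((i = 0; i < 10; ++i)); do # ns_hotplug_stress.sh@16
			# Hot-add namespace $n backed by null bdev null$((n - 1)), then remove it again.
			"$rpc_py" nvmf_subsystem_add_ns -n "$n" "$subsys" "null$((n - 1))" # @17
			"$rpc_py" nvmf_subsystem_remove_ns "$subsys" "$n"                  # @18
		done
	) &
done
wait

Each completed rpc.py round-trip loosely synchronizes the workers, which is why the adds and removes cluster into bursts at the same timestamp.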
00:09:40.161 22:26:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:09:40.161 22:26:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:40.162 22:26:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:40.162 22:26:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:09:40.162 22:26:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:40.162 22:26:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:09:40.162 22:26:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:40.162 22:26:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT
00:09:40.162 22:26:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini
00:09:40.162 22:26:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@488 -- # nvmfcleanup
00:09:40.162 22:26:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # sync
00:09:40.162 22:26:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:09:40.162 22:26:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@120 -- # set +e
00:09:40.162 22:26:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # for i in {1..20}
00:09:40.162 22:26:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:09:40.162 rmmod nvme_tcp
00:09:40.162 rmmod nvme_fabrics
00:09:40.162 rmmod nvme_keyring
00:09:40.162 22:26:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:09:40.162 22:26:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set -e
00:09:40.162 22:26:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # return 0
00:09:40.162 22:26:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@489 -- # '[' -n 4085907 ']'
00:09:40.162 22:26:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@490 -- # killprocess 4085907
00:09:40.162 22:26:03 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@948 -- # '[' -z 4085907 ']'
00:09:40.162 22:26:03 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@952 -- # kill -0 4085907
00:09:40.162 22:26:03 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@953 -- # uname
00:09:40.162 22:26:03 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:09:40.162 22:26:03 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4085907
00:09:40.162 22:26:04 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:09:40.162 22:26:04 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:09:40.162 22:26:04 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4085907'
00:09:40.162 killing process with pid 4085907
00:09:40.162 22:26:04 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@967 -- # kill 4085907
00:09:40.162 22:26:04 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # wait 4085907
00:09:40.420 22:26:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:09:40.420 22:26:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:09:40.420 22:26:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:09:40.420 22:26:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:09:40.420 22:26:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # remove_spdk_ns
00:09:40.420 22:26:04 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:09:40.420 22:26:04 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:09:40.421 22:26:04 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:09:42.322 22:26:06 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:09:42.323
00:09:42.323 real 0m46.276s
00:09:42.323 user 3m11.139s
00:09:42.323 sys 0m14.255s
00:09:42.323 22:26:06 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1124 -- # xtrace_disable
00:09:42.323 22:26:06 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x
00:09:42.323 ************************************
00:09:42.323 END TEST nvmf_ns_hotplug_stress
00:09:42.323 ************************************
00:09:42.323 22:26:06 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0
00:09:42.581 22:26:06 nvmf_tcp -- nvmf/nvmf.sh@33 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp
00:09:42.581 22:26:06 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']'
00:09:42.581 22:26:06 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable
00:09:42.581 22:26:06 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:09:42.581 ************************************
00:09:42.581 START TEST nvmf_connect_stress
00:09:42.581 ************************************
00:09:42.581 22:26:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp
00:09:42.581 * Looking for test storage...
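Unrolled, the teardown that closed the test above (ns_hotplug_stress.sh@68-70 into nvmftestfini, nvmfcleanup, and killprocess) is roughly the following. This is a sketch assembled from the xtrace entries; _remove_spdk_ns runs with its trace redirected away (14> /dev/null), so its body is not in the log and the namespace deletion shown here is an assumption.

nvmfpid=4085907 # target PID reported when nvmf_tgt started

sync
modprobe -v -r nvme-tcp     # log shows rmmod nvme_tcp / nvme_fabrics / nvme_keyring
modprobe -v -r nvme-fabrics

# killprocess(): only signal a live, non-sudo process, then reap it.
if kill -0 "$nvmfpid" && [ "$(uname)" = Linux ]; then
	process_name=$(ps --no-headers -o comm= "$nvmfpid") # reactor_1 in the log
	if [ "$process_name" != sudo ]; then
		echo "killing process with pid $nvmfpid"
		kill "$nvmfpid"
		wait "$nvmfpid"
	fi
fi

ip netns delete cvl_0_0_ns_spdk 2> /dev/null || true # assumed body of _remove_spdk_ns
ip -4 addr flush cvl_0_1

The wall-clock summary (real 0m46s against roughly 3m11s of user time) is consistent with the eight rpc.py workers running in parallel for most of the test.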
00:09:42.581 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:42.581 22:26:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:42.581 22:26:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:09:42.581 22:26:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:42.581 22:26:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:42.581 22:26:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:42.581 22:26:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:42.581 22:26:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:42.581 22:26:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:42.581 22:26:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:42.581 22:26:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:42.581 22:26:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:42.581 22:26:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:42.581 22:26:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:09:42.581 22:26:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:09:42.581 22:26:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:42.581 22:26:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:42.581 22:26:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:42.581 22:26:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:42.581 22:26:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:42.581 22:26:06 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:42.581 22:26:06 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:42.581 22:26:06 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:42.582 22:26:06 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:42.582 22:26:06 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:42.582 22:26:06 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:42.582 22:26:06 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:09:42.582 22:26:06 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:42.582 22:26:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@47 -- # : 0 00:09:42.582 22:26:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:42.582 22:26:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:42.582 22:26:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:42.582 22:26:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:42.582 22:26:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:42.582 22:26:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:42.582 22:26:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:42.582 22:26:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:42.582 22:26:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:09:42.582 22:26:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:42.582 22:26:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:42.582 22:26:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:42.582 22:26:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:42.582 22:26:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:42.582 22:26:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:42.582 22:26:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:09:42.582 22:26:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:42.582 22:26:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:42.582 22:26:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:42.582 22:26:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:09:42.582 22:26:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:47.883 22:26:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:47.883 22:26:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:09:47.883 22:26:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:47.883 22:26:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:47.883 22:26:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:47.883 22:26:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:47.883 22:26:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:47.883 22:26:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@295 -- # net_devs=() 00:09:47.883 22:26:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:47.883 22:26:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@296 -- # e810=() 00:09:47.883 22:26:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@296 -- # local -ga e810 00:09:47.883 22:26:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@297 -- # x722=() 00:09:47.883 22:26:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@297 -- # local -ga x722 00:09:47.883 22:26:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@298 -- # mlx=() 00:09:47.883 22:26:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:09:47.883 22:26:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:47.883 22:26:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:47.883 22:26:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:47.883 22:26:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:47.883 22:26:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:47.883 22:26:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:47.883 22:26:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:47.883 22:26:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:47.883 22:26:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:47.883 22:26:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:47.883 22:26:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:47.883 22:26:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:47.883 22:26:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:47.883 22:26:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@327 -- # [[ e810 == 
mlx5 ]] 00:09:47.883 22:26:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:47.883 22:26:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:47.883 22:26:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:47.883 22:26:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:47.883 22:26:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:09:47.883 Found 0000:86:00.0 (0x8086 - 0x159b) 00:09:47.883 22:26:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:47.883 22:26:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:47.883 22:26:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:47.883 22:26:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:47.883 22:26:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:47.883 22:26:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:47.883 22:26:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:09:47.883 Found 0000:86:00.1 (0x8086 - 0x159b) 00:09:47.883 22:26:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:47.883 22:26:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:47.883 22:26:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:47.883 22:26:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:47.883 22:26:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:47.883 22:26:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:47.883 22:26:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:47.883 22:26:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:47.883 22:26:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:47.883 22:26:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:47.883 22:26:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:47.883 22:26:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:47.883 22:26:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:47.883 22:26:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:47.883 22:26:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:47.883 22:26:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:09:47.883 Found net devices under 0000:86:00.0: cvl_0_0 00:09:47.883 22:26:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:47.883 22:26:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:47.883 22:26:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:47.883 22:26:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:47.883 22:26:11 
nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:47.883 22:26:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:47.883 22:26:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:47.883 22:26:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:47.883 22:26:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:09:47.883 Found net devices under 0000:86:00.1: cvl_0_1 00:09:47.883 22:26:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:47.883 22:26:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:47.883 22:26:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:09:47.883 22:26:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:47.883 22:26:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:47.883 22:26:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:47.883 22:26:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:47.883 22:26:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:47.883 22:26:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:47.883 22:26:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:47.883 22:26:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:47.883 22:26:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:47.883 22:26:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:47.884 22:26:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:47.884 22:26:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:47.884 22:26:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:47.884 22:26:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:47.884 22:26:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:47.884 22:26:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:47.884 22:26:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:47.884 22:26:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:47.884 22:26:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:47.884 22:26:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:47.884 22:26:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:47.884 22:26:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:47.884 22:26:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:47.884 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:47.884 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.253 ms 00:09:47.884 00:09:47.884 --- 10.0.0.2 ping statistics --- 00:09:47.884 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:47.884 rtt min/avg/max/mdev = 0.253/0.253/0.253/0.000 ms 00:09:47.884 22:26:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:47.884 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:47.884 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.230 ms 00:09:47.884 00:09:47.884 --- 10.0.0.1 ping statistics --- 00:09:47.884 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:47.884 rtt min/avg/max/mdev = 0.230/0.230/0.230/0.000 ms 00:09:47.884 22:26:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:47.884 22:26:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@422 -- # return 0 00:09:47.884 22:26:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:47.884 22:26:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:47.884 22:26:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:47.884 22:26:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:47.884 22:26:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:47.884 22:26:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:47.884 22:26:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:47.884 22:26:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:09:47.884 22:26:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:47.884 22:26:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:47.884 22:26:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:47.884 22:26:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@481 -- # nvmfpid=4096104 00:09:47.884 22:26:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:09:47.884 22:26:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@482 -- # waitforlisten 4096104 00:09:47.884 22:26:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@829 -- # '[' -z 4096104 ']' 00:09:47.884 22:26:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:47.884 22:26:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:47.884 22:26:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:47.884 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:47.884 22:26:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:47.884 22:26:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:47.884 [2024-07-15 22:26:11.796768] Starting SPDK v24.09-pre git sha1 f8598a71f / DPDK 24.03.0 initialization... 
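The nvmf_tcp_init sequence traced above wires the two e810 ports back-to-back: cvl_0_0 becomes the target interface inside a private network namespace, while cvl_0_1 stays in the root namespace as the initiator, and the pings just confirmed both directions. Replayed in script form (a sketch; interface names, addresses, and commands as they appear in the log):

NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk

ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1

ip netns add "$NVMF_TARGET_NAMESPACE"
ip link set cvl_0_0 netns "$NVMF_TARGET_NAMESPACE" # target port leaves the root namespace

ip addr add 10.0.0.1/24 dev cvl_0_1                                        # initiator side
ip netns exec "$NVMF_TARGET_NAMESPACE" ip addr add 10.0.0.2/24 dev cvl_0_0 # target side

ip link set cvl_0_1 up
ip netns exec "$NVMF_TARGET_NAMESPACE" ip link set cvl_0_0 up
ip netns exec "$NVMF_TARGET_NAMESPACE" ip link set lo up

iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT # admit NVMe/TCP to the target port

ping -c 1 10.0.0.2                                        # initiator -> target
ip netns exec "$NVMF_TARGET_NAMESPACE" ping -c 1 10.0.0.1 # target -> initiator

Running the target under ip netns exec (the NVMF_TARGET_NS_CMD prefix visible on the nvmf_tgt invocation below) is what lets a single machine exercise real NIC-to-NIC TCP traffic instead of loopback.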
00:09:47.884 [2024-07-15 22:26:11.796809] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:47.884 EAL: No free 2048 kB hugepages reported on node 1 00:09:47.884 [2024-07-15 22:26:11.852524] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:48.143 [2024-07-15 22:26:11.925036] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:48.143 [2024-07-15 22:26:11.925071] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:48.143 [2024-07-15 22:26:11.925078] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:48.143 [2024-07-15 22:26:11.925084] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:48.143 [2024-07-15 22:26:11.925092] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:48.143 [2024-07-15 22:26:11.925164] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:48.143 [2024-07-15 22:26:11.925249] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:48.143 [2024-07-15 22:26:11.925251] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:48.711 22:26:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:48.711 22:26:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@862 -- # return 0 00:09:48.711 22:26:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:48.711 22:26:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:48.711 22:26:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:48.711 22:26:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:48.711 22:26:12 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:48.711 22:26:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:48.711 22:26:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:48.711 [2024-07-15 22:26:12.649748] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:48.712 22:26:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:48.712 22:26:12 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:09:48.712 22:26:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:48.712 22:26:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:48.712 22:26:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:48.712 22:26:12 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:48.712 22:26:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:48.712 22:26:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:09:48.972 [2024-07-15 22:26:12.690328] tcp.c: 
00:09:48.972 [2024-07-15 22:26:12.690328] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:09:48.972 22:26:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:09:48.972 22:26:12 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512
00:09:48.972 22:26:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable
00:09:48.972 22:26:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:09:48.972 NULL1
00:09:48.972 22:26:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:09:48.972 22:26:12 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=4096348
00:09:48.972 22:26:12 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt
00:09:48.972 22:26:12 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10
00:09:48.972 22:26:12 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt
00:09:48.972 22:26:12 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20
00:09:48.972 22:26:12 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20)
00:09:48.972 22:26:12 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat
00:09:48.972 22:26:12 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20)
00:09:48.972 EAL: No free 2048 kB hugepages reported on node 1
00:09:48.972 22:26:12 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat
...
00:09:48.972 22:26:12 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20)
00:09:48.972 22:26:12 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat
00:09:48.972 22:26:12 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4096348
00:09:48.972 22:26:12 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:09:48.972 22:26:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable
00:09:48.972 22:26:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:09:49.232 22:26:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:09:49.232 22:26:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4096348
00:09:49.232 22:26:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
...
00:09:58.605 22:26:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:09:58.605 22:26:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4096348
00:09:58.605 22:26:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:09:58.605 22:26:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable
00:09:58.605 22:26:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:09:59.172 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:09:59.172 22:26:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:09:59.172 22:26:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4096348
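The loop above is the core of connect_stress.sh: the script first writes twenty copies of an RPC batch into rpc.txt, then keeps issuing rpc_cmd against the target for as long as the connect_stress process stays alive, so configuration RPCs race against the connect/disconnect storm. A condensed sketch of that pattern (not the verbatim script; rpc_cmd is the suite's rpc.py wrapper):

  # kill -0 sends no signal; it only tests that the PID still exists
  while kill -0 "$PERF_PID" 2>/dev/null; do
      rpc_cmd < "$rpcs"      # replay the batched RPCs against the live target
  done
  wait "$PERF_PID"           # reap the stress process once it exits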
00:09:59.172 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (4096348) - No such process
00:09:59.172 22:26:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 4096348
00:09:59.172 22:26:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt
00:09:59.172 22:26:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT
00:09:59.172 22:26:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini
00:09:59.172 22:26:22 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@488 -- # nvmfcleanup
00:09:59.172 22:26:22 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@117 -- # sync
00:09:59.172 22:26:22 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:09:59.172 22:26:22 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@120 -- # set +e
00:09:59.172 22:26:22 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@121 -- # for i in {1..20}
00:09:59.172 22:26:22 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:09:59.173 rmmod nvme_tcp
00:09:59.173 rmmod nvme_fabrics
00:09:59.173 rmmod nvme_keyring
00:09:59.173 22:26:22 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:09:59.173 22:26:22 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@124 -- # set -e
00:09:59.173 22:26:22 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@125 -- # return 0
00:09:59.173 22:26:22 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@489 -- # '[' -n 4096104 ']'
00:09:59.173 22:26:22 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@490 -- # killprocess 4096104
00:09:59.173 22:26:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@948 -- # '[' -z 4096104 ']'
00:09:59.173 22:26:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@952 -- # kill -0 4096104
00:09:59.173 22:26:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@953 -- # uname
00:09:59.173 22:26:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:09:59.173 22:26:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4096104
00:09:59.173 22:26:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:09:59.173 22:26:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:09:59.173 22:26:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4096104'
00:09:59.173 killing process with pid 4096104
00:09:59.173 22:26:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@967 -- # kill 4096104
00:09:59.173 22:26:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@972 -- # wait 4096104
00:09:59.436 22:26:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:09:59.436 22:26:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:09:59.436 22:26:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:09:59.436 22:26:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:09:59.436 22:26:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@278 -- # remove_spdk_ns
00:09:59.436 22:26:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:09:59.436 22:26:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:09:59.436 22:26:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:10:01.343 22:26:25 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:10:01.343
00:10:01.343 real 0m18.861s
00:10:01.343 user 0m41.083s
00:10:01.343 sys 0m7.816s
00:10:01.343 22:26:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1124 -- # xtrace_disable
00:10:01.343 22:26:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:10:01.343 ************************************
00:10:01.343 END TEST nvmf_connect_stress
00:10:01.343 ************************************
00:10:01.343 22:26:25 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0
00:10:01.343 22:26:25 nvmf_tcp -- nvmf/nvmf.sh@34 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp
00:10:01.343 22:26:25 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']'
00:10:01.343 22:26:25 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable
00:10:01.343 22:26:25 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:10:01.343 ************************************
00:10:01.343 START TEST nvmf_fused_ordering
00:10:01.343 ************************************
00:10:01.343 22:26:25 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp
00:10:01.602 * Looking for test storage...
00:10:01.602 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:10:01.602 22:26:25 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:10:01.602 22:26:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s
00:10:01.602 22:26:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:10:01.602 22:26:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:10:01.602 22:26:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:10:01.602 22:26:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:10:01.602 22:26:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:10:01.602 22:26:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:10:01.602 22:26:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:10:01.602 22:26:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:10:01.602 22:26:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:10:01.602 22:26:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:10:01.602 22:26:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
00:10:01.602 22:26:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562
00:10:01.602 22:26:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:10:01.602 22:26:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:10:01.602 22:26:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy
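The teardown that closed the connect_stress test above follows the suite's nvmftestfini convention: unload the initiator-side NVMe kernel modules, kill the target by PID, and drop the test addresses. A rough equivalent of what the trace just did, with the PID and interface name taken from this run:

  modprobe -v -r nvme-tcp        # the log shows the dependents going too: nvme_tcp, nvme_fabrics, nvme_keyring
  modprobe -v -r nvme-fabrics
  kill 4096104 && wait 4096104   # stop the nvmf_tgt reactor process
  ip -4 addr flush cvl_0_1       # clear the initiator-side test address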
00:10:01.602 22:26:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:10:01.602 22:26:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:10:01.602 22:26:25 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]]
00:10:01.602 22:26:25 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:10:01.602 22:26:25 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:10:01.602 22:26:25 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[...]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:10:01.602 22:26:25 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:[...]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:10:01.602 22:26:25 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:[...]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:10:01.602 22:26:25 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH
00:10:01.602 22:26:25 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:[...]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:10:01.602 22:26:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@47 -- # : 0
00:10:01.602 22:26:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:10:01.602 22:26:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:10:01.602 22:26:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:10:01.602 22:26:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:10:01.602 22:26:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:10:01.602 22:26:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@33 -- # '[' -n '' ']'
00:10:01.602 22:26:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']'
00:10:01.602 22:26:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@51 -- # have_pci_nics=0
00:10:01.602 22:26:25 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit
00:10:01.602 22:26:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@441 -- # '[' -z tcp ']'
00:10:01.602 22:26:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:10:01.602 22:26:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@448 -- # prepare_net_devs
00:10:01.602 22:26:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@410 -- # local -g is_hw=no
00:10:01.602 22:26:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@412 -- # remove_spdk_ns
00:10:01.602 22:26:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:10:01.602 22:26:25 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:10:01.602 22:26:25 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:10:01.602 22:26:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ phy != virt ]]
00:10:01.602 22:26:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs
00:10:01.602 22:26:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@285 -- # xtrace_disable
00:10:01.602 22:26:25 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:10:06.902 22:26:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:10:06.902 22:26:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@291 -- # pci_devs=()
00:10:06.902 22:26:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@291 -- # local -a pci_devs
00:10:06.902 22:26:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@292 -- # pci_net_devs=()
00:10:06.902 22:26:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@292 -- # local -a pci_net_devs
00:10:06.902 22:26:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@293 -- # pci_drivers=()
00:10:06.902 22:26:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@293 -- # local -A pci_drivers
00:10:06.902 22:26:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@295 -- # net_devs=()
00:10:06.902 22:26:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@295 -- # local -ga net_devs
00:10:06.902 22:26:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@296 -- # e810=()
00:10:06.902 22:26:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@296 -- # local -ga e810
00:10:06.902 22:26:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@297 -- # x722=()
00:10:06.902 22:26:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@297 -- # local -ga x722
00:10:06.902 22:26:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@298 -- # mlx=()
00:10:06.902 22:26:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@298 -- # local -ga mlx
00:10:06.902 22:26:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:10:06.902 22:26:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:10:06.902 22:26:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:10:06.902 22:26:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:10:06.902 22:26:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:10:06.902 22:26:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:10:06.902 22:26:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:10:06.902 22:26:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:10:06.902 22:26:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:10:06.902 22:26:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:10:06.902 22:26:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:10:06.902 22:26:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}")
00:10:06.902 22:26:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@321 -- # [[ tcp == rdma ]]
00:10:06.902 22:26:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]]
00:10:06.902 22:26:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@329 -- # [[ e810 == e810 ]]
00:10:06.902 22:26:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}")
00:10:06.902 22:26:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@335 -- # (( 2 == 0 ))
00:10:06.902 22:26:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:10:06.902 22:26:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)'
00:10:06.902 Found 0000:86:00.0 (0x8086 - 0x159b)
00:10:06.902 22:26:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:10:06.902 22:26:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:10:06.902 22:26:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:10:06.902 22:26:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:10:06.902 22:26:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:10:06.902 22:26:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:10:06.902 22:26:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)'
00:10:06.902 Found 0000:86:00.1 (0x8086 - 0x159b)
00:10:06.902 22:26:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:10:06.902 22:26:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:10:06.902 22:26:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:10:06.902 22:26:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:10:06.902 22:26:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:10:06.902 22:26:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@366 -- # (( 0 > 0 ))
00:10:06.902 22:26:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ e810 == e810 ]]
00:10:06.902 22:26:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ tcp == rdma ]]
"${pci_devs[@]}" 00:10:06.902 22:26:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:06.902 22:26:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:06.902 22:26:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:06.902 22:26:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:06.902 22:26:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:06.902 22:26:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:06.902 22:26:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:10:06.902 Found net devices under 0000:86:00.0: cvl_0_0 00:10:06.902 22:26:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:06.902 22:26:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:06.902 22:26:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:06.902 22:26:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:06.902 22:26:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:06.902 22:26:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:06.902 22:26:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:06.902 22:26:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:06.902 22:26:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:10:06.902 Found net devices under 0000:86:00.1: cvl_0_1 00:10:06.902 22:26:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:06.902 22:26:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:06.902 22:26:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # is_hw=yes 00:10:06.902 22:26:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:06.902 22:26:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:06.902 22:26:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:06.902 22:26:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:06.902 22:26:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:06.902 22:26:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:06.902 22:26:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:06.902 22:26:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:06.902 22:26:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:06.902 22:26:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:06.902 22:26:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:06.903 22:26:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:06.903 22:26:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@244 -- # ip -4 addr flush 
00:10:06.903 22:26:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0
00:10:06.903 22:26:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1
00:10:06.903 22:26:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk
00:10:06.903 22:26:29 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:10:06.903 22:26:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:10:06.903 22:26:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:10:06.903 22:26:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:10:06.903 22:26:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:10:06.903 22:26:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:10:06.903 22:26:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:10:06.903 22:26:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:10:06.903 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:10:06.903 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.239 ms
00:10:06.903
00:10:06.903 --- 10.0.0.2 ping statistics ---
00:10:06.903 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:10:06.903 rtt min/avg/max/mdev = 0.239/0.239/0.239/0.000 ms
00:10:06.903 22:26:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:10:06.903 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:10:06.903 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.237 ms
00:10:06.903
00:10:06.903 --- 10.0.0.1 ping statistics ---
00:10:06.903 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:10:06.903 rtt min/avg/max/mdev = 0.237/0.237/0.237/0.000 ms
00:10:06.903 22:26:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:10:06.903 22:26:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@422 -- # return 0
00:10:06.903 22:26:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:10:06.903 22:26:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:10:06.903 22:26:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:10:06.903 22:26:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:10:06.903 22:26:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:10:06.903 22:26:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:10:06.903 22:26:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:10:06.903 22:26:30 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2
00:10:06.903 22:26:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:10:06.903 22:26:30 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@722 -- # xtrace_disable
00:10:06.903 22:26:30 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:10:06.903 22:26:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@481 -- # nvmfpid=4101277
00:10:06.903 22:26:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@482 -- # waitforlisten 4101277
00:10:06.903 22:26:30 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@829 -- # '[' -z 4101277 ']'
00:10:06.903 22:26:30 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:10:06.903 22:26:30 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@834 -- # local max_retries=100
00:10:06.903 22:26:30 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:10:06.903 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:10:06.903 22:26:30 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@838 -- # xtrace_disable
00:10:06.903 22:26:30 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:10:06.903 22:26:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2
00:10:06.903 [2024-07-15 22:26:30.201400] Starting SPDK v24.09-pre git sha1 f8598a71f / DPDK 24.03.0 initialization...
00:10:06.903 [2024-07-15 22:26:30.201441] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:10:06.903 EAL: No free 2048 kB hugepages reported on node 1
00:10:06.903 [2024-07-15 22:26:30.257328] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:10:06.903 [2024-07-15 22:26:30.336675] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:10:06.903 [2024-07-15 22:26:30.336709] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:10:06.903 [2024-07-15 22:26:30.336716] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:10:06.903 [2024-07-15 22:26:30.336722] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:10:06.903 [2024-07-15 22:26:30.336727] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
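Because NET_TYPE=phy, nvmf_tcp_init split a physical NIC pair instead of creating veths: cvl_0_0 was moved into a private namespace as the target side (10.0.0.2) while cvl_0_1 stayed in the root namespace as the initiator (10.0.0.1), which is why nvmf_tgt above runs under ip netns exec. The ping exchange earlier in the trace confirmed both directions before the target came up. A sketch of the same topology, condensed from the trace (interface names are specific to this rig):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target NIC into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  # then launch the target inside the namespace, as logged above
  # (-m 0x2: one reactor on core 1; -e 0xFFFF: tracepoint mask; -i 0: shm id)
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &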
00:10:06.903 [2024-07-15 22:26:30.336749] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:10:07.163 22:26:31 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:10:07.163 22:26:31 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@862 -- # return 0
00:10:07.163 22:26:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:10:07.163 22:26:31 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@728 -- # xtrace_disable
00:10:07.163 22:26:31 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:10:07.163 22:26:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:10:07.163 22:26:31 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:10:07.163 22:26:31 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable
00:10:07.163 22:26:31 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:10:07.163 [2024-07-15 22:26:31.036517] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:10:07.163 22:26:31 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:10:07.163 22:26:31 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
00:10:07.163 22:26:31 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable
00:10:07.163 22:26:31 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:10:07.163 22:26:31 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:10:07.163 22:26:31 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:10:07.163 22:26:31 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable
00:10:07.163 22:26:31 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:10:07.163 [2024-07-15 22:26:31.052645] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:10:07.163 22:26:31 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:10:07.163 22:26:31 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512
00:10:07.163 22:26:31 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable
00:10:07.163 22:26:31 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:10:07.163 NULL1
00:10:07.163 22:26:31 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:10:07.163 22:26:31 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine
00:10:07.163 22:26:31 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable
00:10:07.163 22:26:31 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:10:07.163 22:26:31 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:10:07.163 22:26:31 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1
00:10:07.163 22:26:31 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable
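Those rpc_cmd calls are the entire target configuration for this test. The same bring-up can be reproduced by hand with SPDK's rpc.py against the default /var/tmp/spdk.sock; a sketch, not the suite's own wrapper:

  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192      # -u 8192: in-capsule data size
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  scripts/rpc.py bdev_null_create NULL1 1000 512              # 1000 MB null bdev, 512-byte blocks
  scripts/rpc.py bdev_wait_for_examine
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1

The fused_ordering app launched next then drives that namespace through the listener, using the standard SPDK transport-ID string (-r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'); the fused_ordering(N) lines that follow are the app's per-iteration progress counter.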
00:10:07.163 22:26:31 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:10:07.163 22:26:31 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:10:07.163 22:26:31 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'
00:10:07.163 [2024-07-15 22:26:31.107302] Starting SPDK v24.09-pre git sha1 f8598a71f / DPDK 24.03.0 initialization...
00:10:07.163 [2024-07-15 22:26:31.107345] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4101523 ]
00:10:07.163 EAL: No free 2048 kB hugepages reported on node 1
00:10:07.733 Attached to nqn.2016-06.io.spdk:cnode1
00:10:07.733 Namespace ID: 1 size: 1GB
00:10:07.734 fused_ordering(0)
00:10:07.734 fused_ordering(1)
00:10:07.734 fused_ordering(2)
00:10:07.734 fused_ordering(3)
00:10:07.734 fused_ordering(4)
...
00:10:08.255 fused_ordering(505)
00:10:08.255 fused_ordering(506)
00:10:08.255 fused_ordering(507)
00:10:08.255 fused_ordering(508)
00:10:08.255 fused_ordering(509)
00:10:08.255 fused_ordering(510) 00:10:08.255 fused_ordering(511) 00:10:08.255 fused_ordering(512) 00:10:08.255 fused_ordering(513) 00:10:08.255 fused_ordering(514) 00:10:08.255 fused_ordering(515) 00:10:08.255 fused_ordering(516) 00:10:08.255 fused_ordering(517) 00:10:08.255 fused_ordering(518) 00:10:08.255 fused_ordering(519) 00:10:08.255 fused_ordering(520) 00:10:08.255 fused_ordering(521) 00:10:08.255 fused_ordering(522) 00:10:08.255 fused_ordering(523) 00:10:08.255 fused_ordering(524) 00:10:08.255 fused_ordering(525) 00:10:08.255 fused_ordering(526) 00:10:08.255 fused_ordering(527) 00:10:08.255 fused_ordering(528) 00:10:08.255 fused_ordering(529) 00:10:08.255 fused_ordering(530) 00:10:08.255 fused_ordering(531) 00:10:08.255 fused_ordering(532) 00:10:08.255 fused_ordering(533) 00:10:08.255 fused_ordering(534) 00:10:08.255 fused_ordering(535) 00:10:08.255 fused_ordering(536) 00:10:08.255 fused_ordering(537) 00:10:08.255 fused_ordering(538) 00:10:08.255 fused_ordering(539) 00:10:08.255 fused_ordering(540) 00:10:08.255 fused_ordering(541) 00:10:08.255 fused_ordering(542) 00:10:08.255 fused_ordering(543) 00:10:08.255 fused_ordering(544) 00:10:08.255 fused_ordering(545) 00:10:08.255 fused_ordering(546) 00:10:08.255 fused_ordering(547) 00:10:08.255 fused_ordering(548) 00:10:08.255 fused_ordering(549) 00:10:08.255 fused_ordering(550) 00:10:08.255 fused_ordering(551) 00:10:08.255 fused_ordering(552) 00:10:08.255 fused_ordering(553) 00:10:08.255 fused_ordering(554) 00:10:08.255 fused_ordering(555) 00:10:08.255 fused_ordering(556) 00:10:08.255 fused_ordering(557) 00:10:08.255 fused_ordering(558) 00:10:08.255 fused_ordering(559) 00:10:08.255 fused_ordering(560) 00:10:08.255 fused_ordering(561) 00:10:08.255 fused_ordering(562) 00:10:08.255 fused_ordering(563) 00:10:08.255 fused_ordering(564) 00:10:08.255 fused_ordering(565) 00:10:08.255 fused_ordering(566) 00:10:08.255 fused_ordering(567) 00:10:08.255 fused_ordering(568) 00:10:08.255 fused_ordering(569) 00:10:08.255 fused_ordering(570) 00:10:08.255 fused_ordering(571) 00:10:08.255 fused_ordering(572) 00:10:08.255 fused_ordering(573) 00:10:08.255 fused_ordering(574) 00:10:08.255 fused_ordering(575) 00:10:08.255 fused_ordering(576) 00:10:08.255 fused_ordering(577) 00:10:08.255 fused_ordering(578) 00:10:08.255 fused_ordering(579) 00:10:08.255 fused_ordering(580) 00:10:08.255 fused_ordering(581) 00:10:08.255 fused_ordering(582) 00:10:08.255 fused_ordering(583) 00:10:08.255 fused_ordering(584) 00:10:08.255 fused_ordering(585) 00:10:08.255 fused_ordering(586) 00:10:08.255 fused_ordering(587) 00:10:08.255 fused_ordering(588) 00:10:08.255 fused_ordering(589) 00:10:08.255 fused_ordering(590) 00:10:08.255 fused_ordering(591) 00:10:08.255 fused_ordering(592) 00:10:08.255 fused_ordering(593) 00:10:08.255 fused_ordering(594) 00:10:08.255 fused_ordering(595) 00:10:08.255 fused_ordering(596) 00:10:08.255 fused_ordering(597) 00:10:08.255 fused_ordering(598) 00:10:08.255 fused_ordering(599) 00:10:08.255 fused_ordering(600) 00:10:08.255 fused_ordering(601) 00:10:08.255 fused_ordering(602) 00:10:08.255 fused_ordering(603) 00:10:08.255 fused_ordering(604) 00:10:08.255 fused_ordering(605) 00:10:08.255 fused_ordering(606) 00:10:08.255 fused_ordering(607) 00:10:08.255 fused_ordering(608) 00:10:08.255 fused_ordering(609) 00:10:08.255 fused_ordering(610) 00:10:08.255 fused_ordering(611) 00:10:08.256 fused_ordering(612) 00:10:08.256 fused_ordering(613) 00:10:08.256 fused_ordering(614) 00:10:08.256 fused_ordering(615) 00:10:08.824 fused_ordering(616) 00:10:08.824 
fused_ordering(617) 00:10:08.824 fused_ordering(618) 00:10:08.824 fused_ordering(619) 00:10:08.824 fused_ordering(620) 00:10:08.824 fused_ordering(621) 00:10:08.824 fused_ordering(622) 00:10:08.824 fused_ordering(623) 00:10:08.824 fused_ordering(624) 00:10:08.824 fused_ordering(625) 00:10:08.824 fused_ordering(626) 00:10:08.824 fused_ordering(627) 00:10:08.824 fused_ordering(628) 00:10:08.824 fused_ordering(629) 00:10:08.824 fused_ordering(630) 00:10:08.824 fused_ordering(631) 00:10:08.824 fused_ordering(632) 00:10:08.824 fused_ordering(633) 00:10:08.824 fused_ordering(634) 00:10:08.824 fused_ordering(635) 00:10:08.824 fused_ordering(636) 00:10:08.824 fused_ordering(637) 00:10:08.824 fused_ordering(638) 00:10:08.824 fused_ordering(639) 00:10:08.824 fused_ordering(640) 00:10:08.824 fused_ordering(641) 00:10:08.824 fused_ordering(642) 00:10:08.824 fused_ordering(643) 00:10:08.824 fused_ordering(644) 00:10:08.824 fused_ordering(645) 00:10:08.824 fused_ordering(646) 00:10:08.824 fused_ordering(647) 00:10:08.824 fused_ordering(648) 00:10:08.824 fused_ordering(649) 00:10:08.824 fused_ordering(650) 00:10:08.824 fused_ordering(651) 00:10:08.824 fused_ordering(652) 00:10:08.824 fused_ordering(653) 00:10:08.824 fused_ordering(654) 00:10:08.824 fused_ordering(655) 00:10:08.824 fused_ordering(656) 00:10:08.824 fused_ordering(657) 00:10:08.824 fused_ordering(658) 00:10:08.824 fused_ordering(659) 00:10:08.824 fused_ordering(660) 00:10:08.824 fused_ordering(661) 00:10:08.824 fused_ordering(662) 00:10:08.824 fused_ordering(663) 00:10:08.824 fused_ordering(664) 00:10:08.824 fused_ordering(665) 00:10:08.824 fused_ordering(666) 00:10:08.824 fused_ordering(667) 00:10:08.824 fused_ordering(668) 00:10:08.824 fused_ordering(669) 00:10:08.824 fused_ordering(670) 00:10:08.824 fused_ordering(671) 00:10:08.824 fused_ordering(672) 00:10:08.824 fused_ordering(673) 00:10:08.824 fused_ordering(674) 00:10:08.824 fused_ordering(675) 00:10:08.824 fused_ordering(676) 00:10:08.824 fused_ordering(677) 00:10:08.824 fused_ordering(678) 00:10:08.824 fused_ordering(679) 00:10:08.824 fused_ordering(680) 00:10:08.824 fused_ordering(681) 00:10:08.824 fused_ordering(682) 00:10:08.824 fused_ordering(683) 00:10:08.824 fused_ordering(684) 00:10:08.824 fused_ordering(685) 00:10:08.824 fused_ordering(686) 00:10:08.824 fused_ordering(687) 00:10:08.824 fused_ordering(688) 00:10:08.824 fused_ordering(689) 00:10:08.824 fused_ordering(690) 00:10:08.824 fused_ordering(691) 00:10:08.824 fused_ordering(692) 00:10:08.824 fused_ordering(693) 00:10:08.824 fused_ordering(694) 00:10:08.824 fused_ordering(695) 00:10:08.824 fused_ordering(696) 00:10:08.824 fused_ordering(697) 00:10:08.824 fused_ordering(698) 00:10:08.824 fused_ordering(699) 00:10:08.824 fused_ordering(700) 00:10:08.824 fused_ordering(701) 00:10:08.824 fused_ordering(702) 00:10:08.824 fused_ordering(703) 00:10:08.824 fused_ordering(704) 00:10:08.824 fused_ordering(705) 00:10:08.824 fused_ordering(706) 00:10:08.824 fused_ordering(707) 00:10:08.824 fused_ordering(708) 00:10:08.824 fused_ordering(709) 00:10:08.824 fused_ordering(710) 00:10:08.824 fused_ordering(711) 00:10:08.824 fused_ordering(712) 00:10:08.824 fused_ordering(713) 00:10:08.824 fused_ordering(714) 00:10:08.824 fused_ordering(715) 00:10:08.824 fused_ordering(716) 00:10:08.824 fused_ordering(717) 00:10:08.824 fused_ordering(718) 00:10:08.824 fused_ordering(719) 00:10:08.824 fused_ordering(720) 00:10:08.824 fused_ordering(721) 00:10:08.824 fused_ordering(722) 00:10:08.824 fused_ordering(723) 00:10:08.824 fused_ordering(724) 
00:10:08.824 fused_ordering(725) 00:10:08.824 fused_ordering(726) 00:10:08.824 fused_ordering(727) 00:10:08.824 fused_ordering(728) 00:10:08.824 fused_ordering(729) 00:10:08.824 fused_ordering(730) 00:10:08.824 fused_ordering(731) 00:10:08.824 fused_ordering(732) 00:10:08.824 fused_ordering(733) 00:10:08.824 fused_ordering(734) 00:10:08.824 fused_ordering(735) 00:10:08.824 fused_ordering(736) 00:10:08.824 fused_ordering(737) 00:10:08.824 fused_ordering(738) 00:10:08.824 fused_ordering(739) 00:10:08.824 fused_ordering(740) 00:10:08.824 fused_ordering(741) 00:10:08.824 fused_ordering(742) 00:10:08.824 fused_ordering(743) 00:10:08.824 fused_ordering(744) 00:10:08.824 fused_ordering(745) 00:10:08.824 fused_ordering(746) 00:10:08.824 fused_ordering(747) 00:10:08.824 fused_ordering(748) 00:10:08.824 fused_ordering(749) 00:10:08.824 fused_ordering(750) 00:10:08.824 fused_ordering(751) 00:10:08.824 fused_ordering(752) 00:10:08.824 fused_ordering(753) 00:10:08.824 fused_ordering(754) 00:10:08.824 fused_ordering(755) 00:10:08.824 fused_ordering(756) 00:10:08.824 fused_ordering(757) 00:10:08.824 fused_ordering(758) 00:10:08.824 fused_ordering(759) 00:10:08.824 fused_ordering(760) 00:10:08.824 fused_ordering(761) 00:10:08.824 fused_ordering(762) 00:10:08.824 fused_ordering(763) 00:10:08.824 fused_ordering(764) 00:10:08.824 fused_ordering(765) 00:10:08.824 fused_ordering(766) 00:10:08.824 fused_ordering(767) 00:10:08.824 fused_ordering(768) 00:10:08.824 fused_ordering(769) 00:10:08.824 fused_ordering(770) 00:10:08.824 fused_ordering(771) 00:10:08.824 fused_ordering(772) 00:10:08.824 fused_ordering(773) 00:10:08.824 fused_ordering(774) 00:10:08.824 fused_ordering(775) 00:10:08.824 fused_ordering(776) 00:10:08.824 fused_ordering(777) 00:10:08.824 fused_ordering(778) 00:10:08.824 fused_ordering(779) 00:10:08.824 fused_ordering(780) 00:10:08.824 fused_ordering(781) 00:10:08.824 fused_ordering(782) 00:10:08.824 fused_ordering(783) 00:10:08.824 fused_ordering(784) 00:10:08.824 fused_ordering(785) 00:10:08.824 fused_ordering(786) 00:10:08.824 fused_ordering(787) 00:10:08.824 fused_ordering(788) 00:10:08.824 fused_ordering(789) 00:10:08.824 fused_ordering(790) 00:10:08.824 fused_ordering(791) 00:10:08.824 fused_ordering(792) 00:10:08.824 fused_ordering(793) 00:10:08.824 fused_ordering(794) 00:10:08.824 fused_ordering(795) 00:10:08.824 fused_ordering(796) 00:10:08.824 fused_ordering(797) 00:10:08.824 fused_ordering(798) 00:10:08.824 fused_ordering(799) 00:10:08.824 fused_ordering(800) 00:10:08.824 fused_ordering(801) 00:10:08.824 fused_ordering(802) 00:10:08.824 fused_ordering(803) 00:10:08.824 fused_ordering(804) 00:10:08.824 fused_ordering(805) 00:10:08.824 fused_ordering(806) 00:10:08.824 fused_ordering(807) 00:10:08.824 fused_ordering(808) 00:10:08.824 fused_ordering(809) 00:10:08.824 fused_ordering(810) 00:10:08.824 fused_ordering(811) 00:10:08.824 fused_ordering(812) 00:10:08.824 fused_ordering(813) 00:10:08.825 fused_ordering(814) 00:10:08.825 fused_ordering(815) 00:10:08.825 fused_ordering(816) 00:10:08.825 fused_ordering(817) 00:10:08.825 fused_ordering(818) 00:10:08.825 fused_ordering(819) 00:10:08.825 fused_ordering(820) 00:10:09.394 fused_ordering(821) 00:10:09.394 fused_ordering(822) 00:10:09.394 fused_ordering(823) 00:10:09.394 fused_ordering(824) 00:10:09.394 fused_ordering(825) 00:10:09.394 fused_ordering(826) 00:10:09.394 fused_ordering(827) 00:10:09.394 fused_ordering(828) 00:10:09.394 fused_ordering(829) 00:10:09.394 fused_ordering(830) 00:10:09.394 fused_ordering(831) 00:10:09.394 
fused_ordering(832) 00:10:09.394 fused_ordering(833) 00:10:09.394 fused_ordering(834) 00:10:09.394 fused_ordering(835) 00:10:09.394 fused_ordering(836) 00:10:09.394 fused_ordering(837) 00:10:09.394 fused_ordering(838) 00:10:09.394 fused_ordering(839) 00:10:09.394 fused_ordering(840) 00:10:09.394 fused_ordering(841) 00:10:09.394 fused_ordering(842) 00:10:09.394 fused_ordering(843) 00:10:09.394 fused_ordering(844) 00:10:09.394 fused_ordering(845) 00:10:09.394 fused_ordering(846) 00:10:09.394 fused_ordering(847) 00:10:09.394 fused_ordering(848) 00:10:09.394 fused_ordering(849) 00:10:09.394 fused_ordering(850) 00:10:09.394 fused_ordering(851) 00:10:09.394 fused_ordering(852) 00:10:09.394 fused_ordering(853) 00:10:09.394 fused_ordering(854) 00:10:09.394 fused_ordering(855) 00:10:09.394 fused_ordering(856) 00:10:09.394 fused_ordering(857) 00:10:09.394 fused_ordering(858) 00:10:09.394 fused_ordering(859) 00:10:09.394 fused_ordering(860) 00:10:09.394 fused_ordering(861) 00:10:09.394 fused_ordering(862) 00:10:09.394 fused_ordering(863) 00:10:09.394 fused_ordering(864) 00:10:09.394 fused_ordering(865) 00:10:09.394 fused_ordering(866) 00:10:09.394 fused_ordering(867) 00:10:09.394 fused_ordering(868) 00:10:09.394 fused_ordering(869) 00:10:09.394 fused_ordering(870) 00:10:09.394 fused_ordering(871) 00:10:09.394 fused_ordering(872) 00:10:09.394 fused_ordering(873) 00:10:09.394 fused_ordering(874) 00:10:09.394 fused_ordering(875) 00:10:09.394 fused_ordering(876) 00:10:09.394 fused_ordering(877) 00:10:09.394 fused_ordering(878) 00:10:09.394 fused_ordering(879) 00:10:09.394 fused_ordering(880) 00:10:09.394 fused_ordering(881) 00:10:09.394 fused_ordering(882) 00:10:09.394 fused_ordering(883) 00:10:09.394 fused_ordering(884) 00:10:09.394 fused_ordering(885) 00:10:09.394 fused_ordering(886) 00:10:09.394 fused_ordering(887) 00:10:09.394 fused_ordering(888) 00:10:09.394 fused_ordering(889) 00:10:09.394 fused_ordering(890) 00:10:09.394 fused_ordering(891) 00:10:09.394 fused_ordering(892) 00:10:09.394 fused_ordering(893) 00:10:09.394 fused_ordering(894) 00:10:09.394 fused_ordering(895) 00:10:09.394 fused_ordering(896) 00:10:09.394 fused_ordering(897) 00:10:09.394 fused_ordering(898) 00:10:09.394 fused_ordering(899) 00:10:09.394 fused_ordering(900) 00:10:09.394 fused_ordering(901) 00:10:09.394 fused_ordering(902) 00:10:09.394 fused_ordering(903) 00:10:09.394 fused_ordering(904) 00:10:09.394 fused_ordering(905) 00:10:09.394 fused_ordering(906) 00:10:09.394 fused_ordering(907) 00:10:09.394 fused_ordering(908) 00:10:09.394 fused_ordering(909) 00:10:09.394 fused_ordering(910) 00:10:09.394 fused_ordering(911) 00:10:09.394 fused_ordering(912) 00:10:09.394 fused_ordering(913) 00:10:09.394 fused_ordering(914) 00:10:09.394 fused_ordering(915) 00:10:09.394 fused_ordering(916) 00:10:09.394 fused_ordering(917) 00:10:09.394 fused_ordering(918) 00:10:09.394 fused_ordering(919) 00:10:09.394 fused_ordering(920) 00:10:09.394 fused_ordering(921) 00:10:09.394 fused_ordering(922) 00:10:09.394 fused_ordering(923) 00:10:09.394 fused_ordering(924) 00:10:09.394 fused_ordering(925) 00:10:09.394 fused_ordering(926) 00:10:09.394 fused_ordering(927) 00:10:09.394 fused_ordering(928) 00:10:09.394 fused_ordering(929) 00:10:09.394 fused_ordering(930) 00:10:09.394 fused_ordering(931) 00:10:09.394 fused_ordering(932) 00:10:09.394 fused_ordering(933) 00:10:09.394 fused_ordering(934) 00:10:09.394 fused_ordering(935) 00:10:09.394 fused_ordering(936) 00:10:09.394 fused_ordering(937) 00:10:09.394 fused_ordering(938) 00:10:09.394 fused_ordering(939) 
00:10:09.394 fused_ordering(940) 00:10:09.394 fused_ordering(941) 00:10:09.394 fused_ordering(942) 00:10:09.394 fused_ordering(943) 00:10:09.394 fused_ordering(944) 00:10:09.394 fused_ordering(945) 00:10:09.394 fused_ordering(946) 00:10:09.394 fused_ordering(947) 00:10:09.394 fused_ordering(948) 00:10:09.394 fused_ordering(949) 00:10:09.394 fused_ordering(950) 00:10:09.394 fused_ordering(951) 00:10:09.394 fused_ordering(952) 00:10:09.394 fused_ordering(953) 00:10:09.394 fused_ordering(954) 00:10:09.394 fused_ordering(955) 00:10:09.394 fused_ordering(956) 00:10:09.394 fused_ordering(957) 00:10:09.394 fused_ordering(958) 00:10:09.394 fused_ordering(959) 00:10:09.394 fused_ordering(960) 00:10:09.394 fused_ordering(961) 00:10:09.394 fused_ordering(962) 00:10:09.394 fused_ordering(963) 00:10:09.394 fused_ordering(964) 00:10:09.394 fused_ordering(965) 00:10:09.394 fused_ordering(966) 00:10:09.394 fused_ordering(967) 00:10:09.394 fused_ordering(968) 00:10:09.394 fused_ordering(969) 00:10:09.394 fused_ordering(970) 00:10:09.394 fused_ordering(971) 00:10:09.394 fused_ordering(972) 00:10:09.394 fused_ordering(973) 00:10:09.394 fused_ordering(974) 00:10:09.394 fused_ordering(975) 00:10:09.394 fused_ordering(976) 00:10:09.394 fused_ordering(977) 00:10:09.394 fused_ordering(978) 00:10:09.394 fused_ordering(979) 00:10:09.394 fused_ordering(980) 00:10:09.394 fused_ordering(981) 00:10:09.394 fused_ordering(982) 00:10:09.394 fused_ordering(983) 00:10:09.394 fused_ordering(984) 00:10:09.394 fused_ordering(985) 00:10:09.394 fused_ordering(986) 00:10:09.394 fused_ordering(987) 00:10:09.394 fused_ordering(988) 00:10:09.394 fused_ordering(989) 00:10:09.394 fused_ordering(990) 00:10:09.394 fused_ordering(991) 00:10:09.394 fused_ordering(992) 00:10:09.394 fused_ordering(993) 00:10:09.394 fused_ordering(994) 00:10:09.394 fused_ordering(995) 00:10:09.394 fused_ordering(996) 00:10:09.394 fused_ordering(997) 00:10:09.394 fused_ordering(998) 00:10:09.394 fused_ordering(999) 00:10:09.394 fused_ordering(1000) 00:10:09.394 fused_ordering(1001) 00:10:09.394 fused_ordering(1002) 00:10:09.394 fused_ordering(1003) 00:10:09.394 fused_ordering(1004) 00:10:09.394 fused_ordering(1005) 00:10:09.394 fused_ordering(1006) 00:10:09.394 fused_ordering(1007) 00:10:09.394 fused_ordering(1008) 00:10:09.394 fused_ordering(1009) 00:10:09.394 fused_ordering(1010) 00:10:09.394 fused_ordering(1011) 00:10:09.394 fused_ordering(1012) 00:10:09.394 fused_ordering(1013) 00:10:09.394 fused_ordering(1014) 00:10:09.394 fused_ordering(1015) 00:10:09.394 fused_ordering(1016) 00:10:09.394 fused_ordering(1017) 00:10:09.394 fused_ordering(1018) 00:10:09.394 fused_ordering(1019) 00:10:09.394 fused_ordering(1020) 00:10:09.394 fused_ordering(1021) 00:10:09.394 fused_ordering(1022) 00:10:09.394 fused_ordering(1023) 00:10:09.394 22:26:33 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:10:09.394 22:26:33 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:10:09.394 22:26:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:09.394 22:26:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@117 -- # sync 00:10:09.394 22:26:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:09.394 22:26:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@120 -- # set +e 00:10:09.394 22:26:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:09.394 22:26:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@122 -- # modprobe -v -r 
nvme-tcp 00:10:09.394 rmmod nvme_tcp 00:10:09.394 rmmod nvme_fabrics 00:10:09.394 rmmod nvme_keyring 00:10:09.394 22:26:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:09.394 22:26:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set -e 00:10:09.394 22:26:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@125 -- # return 0 00:10:09.394 22:26:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@489 -- # '[' -n 4101277 ']' 00:10:09.394 22:26:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@490 -- # killprocess 4101277 00:10:09.394 22:26:33 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@948 -- # '[' -z 4101277 ']' 00:10:09.394 22:26:33 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@952 -- # kill -0 4101277 00:10:09.394 22:26:33 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@953 -- # uname 00:10:09.394 22:26:33 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:09.394 22:26:33 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4101277 00:10:09.394 22:26:33 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:10:09.394 22:26:33 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:10:09.394 22:26:33 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4101277' 00:10:09.394 killing process with pid 4101277 00:10:09.394 22:26:33 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@967 -- # kill 4101277 00:10:09.394 22:26:33 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@972 -- # wait 4101277 00:10:09.394 22:26:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:09.394 22:26:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:09.394 22:26:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:09.394 22:26:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:09.394 22:26:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:09.394 22:26:33 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:09.394 22:26:33 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:09.394 22:26:33 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:11.928 22:26:35 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:11.928 00:10:11.928 real 0m10.112s 00:10:11.928 user 0m5.232s 00:10:11.928 sys 0m5.231s 00:10:11.928 22:26:35 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:11.928 22:26:35 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:10:11.928 ************************************ 00:10:11.928 END TEST nvmf_fused_ordering 00:10:11.928 ************************************ 00:10:11.928 22:26:35 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:10:11.928 22:26:35 nvmf_tcp -- nvmf/nvmf.sh@35 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:10:11.928 22:26:35 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:11.928 22:26:35 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 
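The fused_ordering teardown traced above (nvmftestfini in nvmf/common.sh) follows a fixed pattern: flush buffers, retry unloading the initiator kernel modules, stop the target process, then flush the test addresses. A minimal bash sketch of that pattern, assuming the helper semantics visible in the trace; APP_PID is an illustrative placeholder for the target pid (4101277 in this run):

sync
set +e
for i in {1..20}; do
    # nvme-tcp pulls in nvme-fabrics and nvme-keyring; retry because the
    # modules can stay busy for a moment after the last connection drops
    modprobe -v -r nvme-tcp && modprobe -v -r nvme-fabrics && break
    sleep 1
done
set -e
kill "$APP_PID" && wait "$APP_PID"   # stop the nvmf target process (placeholder pid)
ip -4 addr flush cvl_0_1             # drop the initiator-side test address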
00:10:11.928 22:26:35 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:11.928 ************************************ 00:10:11.928 START TEST nvmf_delete_subsystem 00:10:11.928 ************************************ 00:10:11.928 22:26:35 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:10:11.928 * Looking for test storage... 00:10:11.928 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:11.928 22:26:35 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:11.928 22:26:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:10:11.928 22:26:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:11.928 22:26:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:11.928 22:26:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:11.928 22:26:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:11.928 22:26:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:11.928 22:26:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:11.928 22:26:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:11.928 22:26:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:11.928 22:26:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:11.928 22:26:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:11.928 22:26:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:10:11.928 22:26:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:10:11.928 22:26:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:11.928 22:26:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:11.928 22:26:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:11.928 22:26:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:11.928 22:26:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:11.928 22:26:35 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:11.928 22:26:35 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:11.928 22:26:35 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:11.928 22:26:35 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:11.928 22:26:35 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:11.928 22:26:35 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:11.928 22:26:35 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:10:11.928 22:26:35 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:11.928 22:26:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@47 -- # : 0 00:10:11.928 22:26:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:11.928 22:26:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:11.928 22:26:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:11.928 22:26:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:11.928 22:26:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:11.928 22:26:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:11.928 22:26:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:11.928 22:26:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:11.928 22:26:35 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:10:11.928 22:26:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:11.928 22:26:35 
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:11.928 22:26:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:11.928 22:26:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:11.928 22:26:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:11.928 22:26:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:11.928 22:26:35 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:11.928 22:26:35 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:11.928 22:26:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:11.928 22:26:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:11.928 22:26:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@285 -- # xtrace_disable 00:10:11.928 22:26:35 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:17.223 22:26:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:17.223 22:26:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # pci_devs=() 00:10:17.223 22:26:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:17.223 22:26:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:17.223 22:26:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:17.223 22:26:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:17.223 22:26:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:17.223 22:26:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # net_devs=() 00:10:17.223 22:26:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:17.223 22:26:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # e810=() 00:10:17.223 22:26:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # local -ga e810 00:10:17.223 22:26:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # x722=() 00:10:17.223 22:26:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # local -ga x722 00:10:17.223 22:26:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # mlx=() 00:10:17.223 22:26:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # local -ga mlx 00:10:17.223 22:26:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:17.223 22:26:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:17.223 22:26:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:17.223 22:26:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:17.223 22:26:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:17.223 22:26:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:17.223 22:26:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:17.223 22:26:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@314 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:17.223 22:26:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:17.223 22:26:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:17.223 22:26:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:17.223 22:26:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:17.223 22:26:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:17.223 22:26:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:17.223 22:26:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:17.223 22:26:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:17.223 22:26:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:17.223 22:26:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:17.223 22:26:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:10:17.223 Found 0000:86:00.0 (0x8086 - 0x159b) 00:10:17.223 22:26:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:17.223 22:26:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:17.223 22:26:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:17.223 22:26:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:17.223 22:26:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:17.223 22:26:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:17.223 22:26:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:10:17.223 Found 0000:86:00.1 (0x8086 - 0x159b) 00:10:17.223 22:26:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:17.223 22:26:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:17.223 22:26:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:17.223 22:26:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:17.223 22:26:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:17.223 22:26:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:17.223 22:26:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:17.223 22:26:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:17.223 22:26:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:17.223 22:26:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:17.223 22:26:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:17.223 22:26:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:17.223 22:26:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:17.223 22:26:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:17.223 
22:26:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:17.224 22:26:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:10:17.224 Found net devices under 0000:86:00.0: cvl_0_0 00:10:17.224 22:26:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:17.224 22:26:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:17.224 22:26:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:17.224 22:26:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:17.224 22:26:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:17.224 22:26:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:17.224 22:26:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:17.224 22:26:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:17.224 22:26:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:10:17.224 Found net devices under 0000:86:00.1: cvl_0_1 00:10:17.224 22:26:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:17.224 22:26:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:17.224 22:26:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # is_hw=yes 00:10:17.224 22:26:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:17.224 22:26:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:17.224 22:26:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:17.224 22:26:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:17.224 22:26:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:17.224 22:26:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:17.224 22:26:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:17.224 22:26:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:17.224 22:26:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:17.224 22:26:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:17.224 22:26:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:17.224 22:26:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:17.224 22:26:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:17.224 22:26:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:17.224 22:26:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:17.224 22:26:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:17.224 22:26:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:17.224 22:26:40 
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:17.224 22:26:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:17.224 22:26:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:17.224 22:26:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:17.224 22:26:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:17.224 22:26:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:17.224 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:17.224 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.182 ms 00:10:17.224 00:10:17.224 --- 10.0.0.2 ping statistics --- 00:10:17.224 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:17.224 rtt min/avg/max/mdev = 0.182/0.182/0.182/0.000 ms 00:10:17.224 22:26:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:17.224 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:17.224 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.226 ms 00:10:17.224 00:10:17.224 --- 10.0.0.1 ping statistics --- 00:10:17.224 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:17.224 rtt min/avg/max/mdev = 0.226/0.226/0.226/0.000 ms 00:10:17.224 22:26:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:17.224 22:26:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # return 0 00:10:17.224 22:26:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:17.224 22:26:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:17.224 22:26:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:17.224 22:26:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:17.224 22:26:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:17.224 22:26:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:17.224 22:26:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:17.224 22:26:40 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:10:17.224 22:26:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:17.224 22:26:40 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:17.224 22:26:40 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:17.224 22:26:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # nvmfpid=4105264 00:10:17.224 22:26:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # waitforlisten 4105264 00:10:17.224 22:26:40 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@829 -- # '[' -z 4105264 ']' 00:10:17.224 22:26:40 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:17.224 22:26:40 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:17.224 22:26:40 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen 
on UNIX domain socket /var/tmp/spdk.sock...' 00:10:17.224 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:17.224 22:26:40 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:17.224 22:26:40 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:17.224 22:26:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:10:17.224 [2024-07-15 22:26:40.553178] Starting SPDK v24.09-pre git sha1 f8598a71f / DPDK 24.03.0 initialization... 00:10:17.224 [2024-07-15 22:26:40.553221] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:17.224 EAL: No free 2048 kB hugepages reported on node 1 00:10:17.224 [2024-07-15 22:26:40.610550] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:17.224 [2024-07-15 22:26:40.690123] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:17.224 [2024-07-15 22:26:40.690158] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:17.224 [2024-07-15 22:26:40.690165] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:17.224 [2024-07-15 22:26:40.690171] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:17.224 [2024-07-15 22:26:40.690176] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
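The launch just traced: nvmf_tgt is started inside the cvl_0_0_ns_spdk namespace and the harness blocks until the app answers on its RPC socket (waitforlisten). A minimal bash sketch of that step, using the paths and flags from this run; the polling loop is an illustration of the wait, not the helper's literal code:

ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
    -i 0 -e 0xFFFF -m 0x3 &
nvmfpid=$!
# Poll the default RPC socket until the target accepts commands.
until /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
      -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5
done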
00:10:17.224 [2024-07-15 22:26:40.690215] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:17.224 [2024-07-15 22:26:40.690219] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:17.483 22:26:41 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:17.483 22:26:41 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@862 -- # return 0 00:10:17.483 22:26:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:17.483 22:26:41 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:17.483 22:26:41 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:17.483 22:26:41 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:17.483 22:26:41 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:17.483 22:26:41 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:17.483 22:26:41 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:17.483 [2024-07-15 22:26:41.393270] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:17.483 22:26:41 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:17.483 22:26:41 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:10:17.483 22:26:41 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:17.483 22:26:41 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:17.483 22:26:41 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:17.483 22:26:41 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:17.483 22:26:41 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:17.483 22:26:41 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:17.483 [2024-07-15 22:26:41.409387] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:17.483 22:26:41 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:17.483 22:26:41 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:10:17.483 22:26:41 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:17.483 22:26:41 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:17.483 NULL1 00:10:17.483 22:26:41 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:17.483 22:26:41 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:10:17.483 22:26:41 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:17.483 22:26:41 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:17.483 Delay0 00:10:17.483 22:26:41 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:17.483 22:26:41 
nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:10:17.483 22:26:41 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable
00:10:17.483 22:26:41 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:10:17.483 22:26:41 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:10:17.483 22:26:41 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4
00:10:17.483 22:26:41 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=4105310
00:10:17.483 22:26:41 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2
00:10:17.742 EAL: No free 2048 kB hugepages reported on node 1
00:10:17.742 [2024-07-15 22:26:41.473900] subsystem.c:1572:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release.
00:10:19.642 22:26:43 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:10:19.642 22:26:43 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable
00:10:19.642 22:26:43 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:10:19.900 [repeated output elided: long runs of "Read completed with error (sct=0, sc=8)" / "Write completed with error (sct=0, sc=8)" and "starting I/O failed: -6", emitted as the subsystem delete fails the perf I/O still in flight; the qpair state errors below were interleaved with those runs]
00:10:19.901 [2024-07-15 22:26:43.683771] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64c5c0 is same with the state(5) to be set
00:10:19.901 [2024-07-15 22:26:43.684695] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fc62000d600 is same with the state(5) to be set
00:10:19.901 [2024-07-15 22:26:43.685030] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fc620000c00 is same with the state(5) to be set
00:10:20.837 [2024-07-15 22:26:44.651668] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64dac0 is same with the state(5) to be set
00:10:20.837 [2024-07-15 22:26:44.685590] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fc62000d2f0 is same with the state(5) to be set
00:10:20.838 [2024-07-15 22:26:44.687911] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64c000 is same with the state(5) to be set
00:10:20.838 [2024-07-15 22:26:44.688157] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64c3e0 is same with the state(5) to be set
00:10:20.838 [2024-07-15 22:26:44.688302] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64c7a0 is same with the state(5) to be set
00:10:20.838 Initializing NVMe Controllers
00:10:20.838 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:10:20.838 Controller IO queue size 128, less than required.
00:10:20.838 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:10:20.838 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:10:20.838 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:10:20.838 Initialization complete. Launching workers. 00:10:20.838 ======================================================== 00:10:20.838 Latency(us) 00:10:20.838 Device Information : IOPS MiB/s Average min max 00:10:20.838 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 181.25 0.09 955940.53 1765.75 1012746.90 00:10:20.838 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 162.88 0.08 865316.55 266.93 1012439.45 00:10:20.838 ======================================================== 00:10:20.838 Total : 344.13 0.17 913047.79 266.93 1012746.90 00:10:20.838 00:10:20.838 [2024-07-15 22:26:44.688812] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x64dac0 (9): Bad file descriptor 00:10:20.838 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:10:20.838 22:26:44 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:20.838 22:26:44 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:10:20.838 22:26:44 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 4105310 00:10:20.838 22:26:44 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:10:21.406 22:26:45 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:10:21.406 22:26:45 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 4105310 00:10:21.406 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (4105310) - No such process 00:10:21.406 22:26:45 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 4105310 00:10:21.406 22:26:45 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@648 -- # local es=0 00:10:21.406 22:26:45 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # valid_exec_arg wait 4105310 00:10:21.406 22:26:45 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@636 -- # local arg=wait 00:10:21.406 22:26:45 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:21.406 22:26:45 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # type -t wait 00:10:21.406 22:26:45 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:21.406 22:26:45 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # wait 4105310 00:10:21.406 22:26:45 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # es=1 00:10:21.406 22:26:45 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:10:21.406 22:26:45 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:10:21.406 22:26:45 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:10:21.406 22:26:45 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:10:21.407 22:26:45 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 
00:10:21.407 22:26:45 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:21.407 22:26:45 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:21.407 22:26:45 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:21.407 22:26:45 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:21.407 22:26:45 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:21.407 [2024-07-15 22:26:45.215006] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:21.407 22:26:45 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:21.407 22:26:45 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:21.407 22:26:45 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:21.407 22:26:45 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:21.407 22:26:45 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:21.407 22:26:45 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:10:21.407 22:26:45 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=4105986 00:10:21.407 22:26:45 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:10:21.407 22:26:45 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 4105986 00:10:21.407 22:26:45 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:10:21.407 EAL: No free 2048 kB hugepages reported on node 1 00:10:21.407 [2024-07-15 22:26:45.266248] subsystem.c:1572:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
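[Note on the polling below: the kill -0 / sleep 0.5 lines that follow come from the watchdog loop in test/nvmf/target/delete_subsystem.sh (script lines 56-60 in this trace). A minimal sketch of that loop, reconstructed from the xtrace rather than copied from the script, with the failure branch being an assumption:

    perf_pid=$!                                  # backgrounded spdk_nvme_perf; 4105986 in this run
    delay=0
    while kill -0 "$perf_pid" 2>/dev/null; do    # kill -0 only probes that the PID is still alive
        if (( delay++ > 20 )); then              # assumed bail-out after ~10 s of 0.5 s polls
            echo "perf did not exit in time" >&2
            exit 1
        fi
        sleep 0.5
    done

Since this perf instance was started with -t 3, the loop normally drains once the 3-second run and its teardown complete, which is what the iterations below show.]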
00:10:21.975 22:26:45 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:10:21.975 22:26:45 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 4105986 00:10:21.975 22:26:45 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:10:22.542 22:26:46 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:10:22.542 22:26:46 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 4105986 00:10:22.542 22:26:46 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:10:22.838 22:26:46 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:10:22.838 22:26:46 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 4105986 00:10:22.838 22:26:46 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:10:23.407 22:26:47 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:10:23.407 22:26:47 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 4105986 00:10:23.407 22:26:47 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:10:23.976 22:26:47 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:10:23.976 22:26:47 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 4105986 00:10:23.976 22:26:47 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:10:24.546 22:26:48 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:10:24.546 22:26:48 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 4105986 00:10:24.546 22:26:48 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:10:24.546 Initializing NVMe Controllers 00:10:24.546 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:10:24.546 Controller IO queue size 128, less than required. 00:10:24.546 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:10:24.546 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:10:24.546 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:10:24.546 Initialization complete. Launching workers. 
00:10:24.546 ======================================================== 00:10:24.546 Latency(us) 00:10:24.546 Device Information : IOPS MiB/s Average min max 00:10:24.546 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1003776.88 1000170.52 1042687.95 00:10:24.546 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1005036.91 1000264.62 1013571.99 00:10:24.546 ======================================================== 00:10:24.546 Total : 256.00 0.12 1004406.89 1000170.52 1042687.95 00:10:24.546 00:10:24.806 22:26:48 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:10:24.806 22:26:48 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 4105986 00:10:24.806 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (4105986) - No such process 00:10:24.806 22:26:48 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 4105986 00:10:24.806 22:26:48 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:10:24.806 22:26:48 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:10:24.806 22:26:48 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:24.806 22:26:48 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # sync 00:10:24.806 22:26:48 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:24.806 22:26:48 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@120 -- # set +e 00:10:24.806 22:26:48 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:24.806 22:26:48 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:24.806 rmmod nvme_tcp 00:10:25.065 rmmod nvme_fabrics 00:10:25.065 rmmod nvme_keyring 00:10:25.065 22:26:48 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:25.065 22:26:48 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set -e 00:10:25.065 22:26:48 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # return 0 00:10:25.065 22:26:48 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@489 -- # '[' -n 4105264 ']' 00:10:25.065 22:26:48 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@490 -- # killprocess 4105264 00:10:25.065 22:26:48 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@948 -- # '[' -z 4105264 ']' 00:10:25.065 22:26:48 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@952 -- # kill -0 4105264 00:10:25.065 22:26:48 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@953 -- # uname 00:10:25.065 22:26:48 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:25.065 22:26:48 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4105264 00:10:25.065 22:26:48 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:10:25.065 22:26:48 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:10:25.065 22:26:48 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4105264' 00:10:25.065 killing process with pid 4105264 00:10:25.065 22:26:48 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@967 -- # kill 4105264 00:10:25.065 22:26:48 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # wait 
4105264 00:10:25.324 22:26:49 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:25.324 22:26:49 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:25.324 22:26:49 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:25.324 22:26:49 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:25.324 22:26:49 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:25.324 22:26:49 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:25.324 22:26:49 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:25.324 22:26:49 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:27.232 22:26:51 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:27.232 00:10:27.232 real 0m15.645s 00:10:27.232 user 0m30.159s 00:10:27.232 sys 0m4.544s 00:10:27.232 22:26:51 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:27.232 22:26:51 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:27.232 ************************************ 00:10:27.232 END TEST nvmf_delete_subsystem 00:10:27.232 ************************************ 00:10:27.232 22:26:51 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:10:27.232 22:26:51 nvmf_tcp -- nvmf/nvmf.sh@36 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:10:27.232 22:26:51 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:27.232 22:26:51 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:27.232 22:26:51 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:27.232 ************************************ 00:10:27.232 START TEST nvmf_ns_masking 00:10:27.232 ************************************ 00:10:27.232 22:26:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1123 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:10:27.492 * Looking for test storage... 
00:10:27.492 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:10:27.492 22:26:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:10:27.492 22:26:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s
00:10:27.492 22:26:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:10:27.492 22:26:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:10:27.492 22:26:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:10:27.492 22:26:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:10:27.492 22:26:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:10:27.492 22:26:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:10:27.492 22:26:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:10:27.492 22:26:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:10:27.492 22:26:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:10:27.492 22:26:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:10:27.492 22:26:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
00:10:27.492 22:26:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562
00:10:27.492 22:26:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:10:27.492 22:26:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:10:27.492 22:26:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:10:27.492 22:26:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:10:27.492 22:26:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:10:27.492 22:26:51 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]]
00:10:27.492 22:26:51 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:10:27.492 22:26:51 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:10:27.493 22:26:51 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@2-@6 -- # [repeated PATH values elided: export.sh prepends /opt/golangci/1.54.2/bin, /opt/protoc/21.7/bin and /opt/go/1.21.1/bin to a PATH that already carries several copies of the same toolchain prefixes, then exports and echoes the result]
00:10:27.493 22:26:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@47 -- # : 0
00:10:27.493 22:26:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:10:27.493 22:26:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:10:27.493 22:26:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:10:27.493 22:26:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:10:27.493 22:26:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:10:27.493 22:26:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' -n '' ']'
00:10:27.493 22:26:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']'
00:10:27.493 22:26:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@51 -- # have_pci_nics=0
00:10:27.493 22:26:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:10:27.493 22:26:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock
00:10:27.493 22:26:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5
00:10:27.493 22:26:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen
00:10:27.493 22:26:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=2be9922e-7eb0-45b4-830e-66d2f7381d89
00:10:27.493 22:26:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen
00:10:27.493 22:26:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=e16869b7-7850-48a6-8185-c8386d344382
00:10:27.493 22:26:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@16 -- #
SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:10:27.493 22:26:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:10:27.493 22:26:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:10:27.493 22:26:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:10:27.493 22:26:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=acd12c68-c84f-4c12-8c8c-15f6a402b9ad 00:10:27.493 22:26:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:10:27.493 22:26:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:27.493 22:26:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:27.493 22:26:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:27.493 22:26:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:27.493 22:26:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:27.493 22:26:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:27.493 22:26:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:27.493 22:26:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:27.493 22:26:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:27.493 22:26:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:27.493 22:26:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@285 -- # xtrace_disable 00:10:27.493 22:26:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:10:32.851 22:26:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:32.851 22:26:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@291 -- # pci_devs=() 00:10:32.851 22:26:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:32.851 22:26:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:32.851 22:26:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:32.851 22:26:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:32.851 22:26:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:32.851 22:26:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@295 -- # net_devs=() 00:10:32.851 22:26:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:32.851 22:26:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@296 -- # e810=() 00:10:32.851 22:26:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@296 -- # local -ga e810 00:10:32.851 22:26:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@297 -- # x722=() 00:10:32.851 22:26:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@297 -- # local -ga x722 00:10:32.851 22:26:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@298 -- # mlx=() 00:10:32.851 22:26:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@298 -- # local -ga mlx 00:10:32.851 22:26:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:32.851 22:26:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:32.851 22:26:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:32.851 22:26:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:32.851 22:26:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:32.851 22:26:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:32.851 22:26:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:32.851 22:26:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:32.851 22:26:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:32.851 22:26:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:32.851 22:26:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:32.851 22:26:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:32.851 22:26:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:32.851 22:26:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:32.851 22:26:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:32.851 22:26:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:32.851 22:26:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:32.851 22:26:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:32.851 22:26:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:10:32.851 Found 0000:86:00.0 (0x8086 - 0x159b) 00:10:32.851 22:26:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:32.851 22:26:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:32.851 22:26:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:32.851 22:26:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:32.851 22:26:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:32.851 22:26:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:32.851 22:26:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:10:32.851 Found 0000:86:00.1 (0x8086 - 0x159b) 00:10:32.851 22:26:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:32.851 22:26:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:32.851 22:26:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:32.851 22:26:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:32.851 22:26:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:32.851 22:26:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:32.851 22:26:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:32.851 22:26:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:32.851 22:26:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:32.851 22:26:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:32.851 22:26:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:32.851 
22:26:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:32.851 22:26:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:32.851 22:26:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:32.851 22:26:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:32.851 22:26:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:10:32.851 Found net devices under 0000:86:00.0: cvl_0_0 00:10:32.851 22:26:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:32.851 22:26:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:32.851 22:26:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:32.851 22:26:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:32.851 22:26:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:32.851 22:26:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:32.851 22:26:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:32.851 22:26:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:32.851 22:26:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:10:32.851 Found net devices under 0000:86:00.1: cvl_0_1 00:10:32.851 22:26:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:32.851 22:26:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:32.851 22:26:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # is_hw=yes 00:10:32.851 22:26:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:32.851 22:26:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:32.852 22:26:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:32.852 22:26:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:32.852 22:26:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:32.852 22:26:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:32.852 22:26:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:32.852 22:26:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:32.852 22:26:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:32.852 22:26:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:32.852 22:26:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:32.852 22:26:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:32.852 22:26:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:32.852 22:26:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:32.852 22:26:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:32.852 22:26:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:32.852 22:26:56 nvmf_tcp.nvmf_ns_masking 
-- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:32.852 22:26:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:32.852 22:26:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:32.852 22:26:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:32.852 22:26:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:32.852 22:26:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:32.852 22:26:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:32.852 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:32.852 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.200 ms 00:10:32.852 00:10:32.852 --- 10.0.0.2 ping statistics --- 00:10:32.852 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:32.852 rtt min/avg/max/mdev = 0.200/0.200/0.200/0.000 ms 00:10:32.852 22:26:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:32.852 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:32.852 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.139 ms 00:10:32.852 00:10:32.852 --- 10.0.0.1 ping statistics --- 00:10:32.852 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:32.852 rtt min/avg/max/mdev = 0.139/0.139/0.139/0.000 ms 00:10:32.852 22:26:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:32.852 22:26:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@422 -- # return 0 00:10:32.852 22:26:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:32.852 22:26:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:32.852 22:26:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:32.852 22:26:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:32.852 22:26:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:32.852 22:26:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:32.852 22:26:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:32.852 22:26:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:10:32.852 22:26:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:32.852 22:26:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:32.852 22:26:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:10:32.852 22:26:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@481 -- # nvmfpid=4109979 00:10:32.852 22:26:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@482 -- # waitforlisten 4109979 00:10:32.852 22:26:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:10:32.852 22:26:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@829 -- # '[' -z 4109979 ']' 00:10:32.852 22:26:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:32.852 22:26:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:32.852 22:26:56 
nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:32.852 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:32.852 22:26:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:32.852 22:26:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:10:32.852 [2024-07-15 22:26:56.604965] Starting SPDK v24.09-pre git sha1 f8598a71f / DPDK 24.03.0 initialization... 00:10:32.852 [2024-07-15 22:26:56.605015] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:32.852 EAL: No free 2048 kB hugepages reported on node 1 00:10:32.852 [2024-07-15 22:26:56.661403] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:32.852 [2024-07-15 22:26:56.734846] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:32.852 [2024-07-15 22:26:56.734885] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:32.852 [2024-07-15 22:26:56.734892] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:32.852 [2024-07-15 22:26:56.734898] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:32.852 [2024-07-15 22:26:56.734903] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:32.852 [2024-07-15 22:26:56.734928] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:33.790 22:26:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:33.790 22:26:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@862 -- # return 0 00:10:33.790 22:26:57 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:33.790 22:26:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:33.790 22:26:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:10:33.790 22:26:57 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:33.790 22:26:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:10:33.790 [2024-07-15 22:26:57.611154] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:33.790 22:26:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:10:33.790 22:26:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:10:33.790 22:26:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:10:34.049 Malloc1 00:10:34.049 22:26:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:10:34.049 Malloc2 00:10:34.049 22:26:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 
00:10:34.308 22:26:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:10:34.567 22:26:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:34.567 [2024-07-15 22:26:58.517416] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:34.567 22:26:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:10:34.567 22:26:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I acd12c68-c84f-4c12-8c8c-15f6a402b9ad -a 10.0.0.2 -s 4420 -i 4 00:10:34.827 22:26:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:10:34.827 22:26:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:10:34.827 22:26:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:10:34.827 22:26:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:10:34.827 22:26:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:10:37.363 22:27:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:10:37.363 22:27:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:10:37.363 22:27:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:10:37.363 22:27:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:10:37.363 22:27:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:10:37.363 22:27:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:10:37.363 22:27:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:10:37.363 22:27:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:10:37.363 22:27:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:10:37.363 22:27:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:10:37.363 22:27:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:10:37.363 22:27:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:10:37.363 22:27:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:10:37.363 [ 0]:0x1 00:10:37.363 22:27:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:10:37.363 22:27:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:10:37.363 22:27:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=5eb63eb57ba4496aa32992ad75e4c362 00:10:37.363 22:27:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 5eb63eb57ba4496aa32992ad75e4c362 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:10:37.363 22:27:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 
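[Note on the namespace checks around here: the "[ 0]:0x1" listings and nguid comparisons are produced by the ns_is_visible helper in target/ns_masking.sh. A sketch of that check as it can be reconstructed from the xtrace (the exact function body and error handling are assumptions; $ctrl_id, nvme0 in this run, was resolved above via nvme list-subsys -o json and jq):

    ns_is_visible() {
        local nsid=$1
        nvme list-ns "/dev/$ctrl_id" | grep "$nsid"        # prints e.g. "[ 0]:0x1" when the ns is mapped
        local nguid
        nguid=$(nvme id-ns "/dev/$ctrl_id" -n "$nsid" -o json | jq -r .nguid)
        # a namespace masked from this host reports an all-zero NGUID
        [[ $nguid != "00000000000000000000000000000000" ]]
    }

Later in the test, Malloc1 is re-added with --no-auto-visible, after which ns_is_visible 0x1 sees the all-zero NGUID and the surrounding NOT wrapper expects exactly that failure.]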
00:10:37.363 22:27:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:10:37.363 22:27:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:10:37.363 22:27:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:10:37.363 [ 0]:0x1 00:10:37.363 22:27:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:10:37.363 22:27:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:10:37.363 22:27:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=5eb63eb57ba4496aa32992ad75e4c362 00:10:37.363 22:27:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 5eb63eb57ba4496aa32992ad75e4c362 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:10:37.363 22:27:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:10:37.363 22:27:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:10:37.363 22:27:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:10:37.363 [ 1]:0x2 00:10:37.363 22:27:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:10:37.363 22:27:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:10:37.363 22:27:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=3bdbe1709f6c43078228935062d894c8 00:10:37.363 22:27:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 3bdbe1709f6c43078228935062d894c8 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:10:37.363 22:27:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:10:37.363 22:27:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:37.363 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:37.363 22:27:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:37.622 22:27:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:10:37.881 22:27:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:10:37.881 22:27:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I acd12c68-c84f-4c12-8c8c-15f6a402b9ad -a 10.0.0.2 -s 4420 -i 4 00:10:38.139 22:27:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:10:38.139 22:27:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:10:38.139 22:27:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:10:38.139 22:27:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 1 ]] 00:10:38.139 22:27:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=1 00:10:38.139 22:27:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:10:40.041 22:27:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:10:40.041 22:27:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:10:40.041 22:27:03 
nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:10:40.041 22:27:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:10:40.041 22:27:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:10:40.041 22:27:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:10:40.041 22:27:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:10:40.041 22:27:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:10:40.041 22:27:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:10:40.041 22:27:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:10:40.041 22:27:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:10:40.041 22:27:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:10:40.041 22:27:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:10:40.041 22:27:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:10:40.041 22:27:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:40.041 22:27:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:10:40.041 22:27:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:40.041 22:27:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:10:40.041 22:27:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:10:40.041 22:27:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:10:40.041 22:27:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:10:40.041 22:27:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:10:40.298 22:27:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:10:40.298 22:27:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:10:40.298 22:27:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:10:40.298 22:27:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:10:40.298 22:27:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:10:40.298 22:27:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:10:40.299 22:27:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:10:40.299 22:27:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:10:40.299 22:27:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:10:40.299 [ 0]:0x2 00:10:40.299 22:27:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:10:40.299 22:27:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:10:40.299 22:27:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=3bdbe1709f6c43078228935062d894c8 00:10:40.299 22:27:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 
3bdbe1709f6c43078228935062d894c8 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:10:40.299 22:27:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:10:40.556 22:27:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:10:40.556 22:27:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:10:40.556 22:27:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:10:40.556 [ 0]:0x1 00:10:40.556 22:27:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:10:40.556 22:27:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:10:40.557 22:27:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=5eb63eb57ba4496aa32992ad75e4c362 00:10:40.557 22:27:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 5eb63eb57ba4496aa32992ad75e4c362 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:10:40.557 22:27:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:10:40.557 22:27:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:10:40.557 22:27:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:10:40.557 [ 1]:0x2 00:10:40.557 22:27:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:10:40.557 22:27:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:10:40.557 22:27:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=3bdbe1709f6c43078228935062d894c8 00:10:40.557 22:27:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 3bdbe1709f6c43078228935062d894c8 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:10:40.557 22:27:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:10:40.815 22:27:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:10:40.815 22:27:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:10:40.815 22:27:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:10:40.815 22:27:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:10:40.815 22:27:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:40.815 22:27:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:10:40.815 22:27:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:40.815 22:27:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:10:40.815 22:27:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:10:40.815 22:27:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:10:40.815 22:27:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:10:40.815 22:27:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:10:40.815 22:27:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # 
nguid=00000000000000000000000000000000 00:10:40.815 22:27:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:10:40.815 22:27:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:10:40.815 22:27:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:10:40.815 22:27:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:10:40.815 22:27:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:10:40.815 22:27:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:10:40.815 22:27:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:10:40.815 22:27:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:10:40.815 [ 0]:0x2 00:10:40.815 22:27:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:10:40.815 22:27:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:10:40.815 22:27:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=3bdbe1709f6c43078228935062d894c8 00:10:40.815 22:27:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 3bdbe1709f6c43078228935062d894c8 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:10:40.815 22:27:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:10:40.815 22:27:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:40.815 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:40.815 22:27:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:10:41.086 22:27:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:10:41.086 22:27:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I acd12c68-c84f-4c12-8c8c-15f6a402b9ad -a 10.0.0.2 -s 4420 -i 4 00:10:41.344 22:27:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:10:41.344 22:27:05 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:10:41.344 22:27:05 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:10:41.344 22:27:05 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:10:41.344 22:27:05 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:10:41.344 22:27:05 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:10:43.245 22:27:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:10:43.245 22:27:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:10:43.245 22:27:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:10:43.245 22:27:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:10:43.245 22:27:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:10:43.245 22:27:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 
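Condensing the masking exercise up to this point: a namespace attached with --no-auto-visible starts hidden from every host, and visibility is then toggled per host NQN with RPCs alone while the initiator stays connected; the change shows up in the very next ns_is_visible probe, presumably via a namespace-attribute-changed async event rather than a reconnect. The three calls involved, in sketch form (rpc.py again abbreviates the full scripts/rpc.py path):

rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible   # attach NSID 1, hidden by default
rpc.py nvmf_ns_add_host    nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1        # expose NSID 1 to host1 only
rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1        # hide it from host1 again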
00:10:43.245 22:27:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:10:43.246 22:27:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:10:43.504 22:27:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:10:43.505 22:27:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:10:43.505 22:27:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:10:43.505 22:27:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:10:43.505 22:27:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:10:43.505 [ 0]:0x1 00:10:43.505 22:27:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:10:43.505 22:27:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:10:43.505 22:27:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=5eb63eb57ba4496aa32992ad75e4c362 00:10:43.505 22:27:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 5eb63eb57ba4496aa32992ad75e4c362 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:10:43.505 22:27:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:10:43.505 22:27:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:10:43.505 22:27:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:10:43.505 [ 1]:0x2 00:10:43.505 22:27:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:10:43.505 22:27:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:10:43.505 22:27:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=3bdbe1709f6c43078228935062d894c8 00:10:43.505 22:27:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 3bdbe1709f6c43078228935062d894c8 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:10:43.505 22:27:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:10:43.764 22:27:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:10:43.764 22:27:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:10:43.764 22:27:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:10:43.764 22:27:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:10:43.764 22:27:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:43.764 22:27:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:10:43.764 22:27:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:43.764 22:27:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:10:43.764 22:27:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:10:43.764 22:27:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:10:43.764 22:27:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:10:43.764 22:27:07 nvmf_tcp.nvmf_ns_masking -- 
target/ns_masking.sh@44 -- # jq -r .nguid 00:10:43.764 22:27:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:10:43.764 22:27:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:10:43.764 22:27:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:10:43.765 22:27:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:10:43.765 22:27:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:10:43.765 22:27:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:10:43.765 22:27:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:10:43.765 22:27:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:10:43.765 22:27:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:10:43.765 [ 0]:0x2 00:10:43.765 22:27:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:10:43.765 22:27:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:10:43.765 22:27:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=3bdbe1709f6c43078228935062d894c8 00:10:43.765 22:27:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 3bdbe1709f6c43078228935062d894c8 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:10:43.765 22:27:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:10:43.765 22:27:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:10:43.765 22:27:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:10:43.765 22:27:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:43.765 22:27:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:43.765 22:27:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:43.765 22:27:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:43.765 22:27:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:44.023 22:27:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:44.023 22:27:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:44.023 22:27:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:10:44.023 22:27:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:10:44.023 [2024-07-15 22:27:07.896238] nvmf_rpc.c:1798:nvmf_rpc_ns_visible_paused: 
*ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:10:44.023 request: 00:10:44.023 { 00:10:44.023 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:10:44.023 "nsid": 2, 00:10:44.023 "host": "nqn.2016-06.io.spdk:host1", 00:10:44.023 "method": "nvmf_ns_remove_host", 00:10:44.023 "req_id": 1 00:10:44.023 } 00:10:44.023 Got JSON-RPC error response 00:10:44.023 response: 00:10:44.023 { 00:10:44.023 "code": -32602, 00:10:44.023 "message": "Invalid parameters" 00:10:44.023 } 00:10:44.023 22:27:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:10:44.023 22:27:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:10:44.023 22:27:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:10:44.023 22:27:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:10:44.023 22:27:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:10:44.023 22:27:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:10:44.023 22:27:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:10:44.023 22:27:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:10:44.023 22:27:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:44.023 22:27:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:10:44.023 22:27:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:44.023 22:27:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:10:44.023 22:27:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:10:44.023 22:27:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:10:44.023 22:27:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:10:44.023 22:27:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:10:44.023 22:27:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:10:44.023 22:27:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:10:44.023 22:27:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:10:44.023 22:27:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:10:44.023 22:27:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:10:44.023 22:27:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:10:44.023 22:27:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:10:44.023 22:27:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:10:44.023 22:27:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:10:44.282 [ 0]:0x2 00:10:44.282 22:27:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:10:44.282 22:27:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:10:44.282 22:27:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=3bdbe1709f6c43078228935062d894c8 00:10:44.282 22:27:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 
3bdbe1709f6c43078228935062d894c8 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:10:44.282 22:27:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:10:44.282 22:27:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:44.282 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:44.282 22:27:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=4112350 00:10:44.282 22:27:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:10:44.282 22:27:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:10:44.282 22:27:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 4112350 /var/tmp/host.sock 00:10:44.282 22:27:08 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@829 -- # '[' -z 4112350 ']' 00:10:44.282 22:27:08 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/host.sock 00:10:44.282 22:27:08 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:44.283 22:27:08 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:10:44.283 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:10:44.283 22:27:08 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:44.283 22:27:08 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:10:44.283 [2024-07-15 22:27:08.250094] Starting SPDK v24.09-pre git sha1 f8598a71f / DPDK 24.03.0 initialization... 
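Two notes on the block above. The -32602 Invalid parameters response is the asserted outcome, not a bug: NSID 2 was attached without --no-auto-visible, so it is always visible and evidently has no per-host visibility list for nvmf_ns_remove_host to edit; the NOT wrapper turns the rejection into a pass. And the spdk_tgt now starting on /var/tmp/host.sock with core mask 0x2 is a second SPDK application that will act as the host side for the bdev_nvme checks that follow. The asserted failure, in sketch form (the echo strings are illustrative only):

rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 \
    && echo "unexpectedly succeeded" \
    || echo "rejected as expected: -32602 Invalid parameters"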
00:10:44.283 [2024-07-15 22:27:08.250141] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4112350 ] 00:10:44.542 EAL: No free 2048 kB hugepages reported on node 1 00:10:44.542 [2024-07-15 22:27:08.304847] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:44.542 [2024-07-15 22:27:08.377933] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:45.111 22:27:09 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:45.111 22:27:09 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@862 -- # return 0 00:10:45.111 22:27:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:45.370 22:27:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:45.628 22:27:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid 2be9922e-7eb0-45b4-830e-66d2f7381d89 00:10:45.628 22:27:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:10:45.628 22:27:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 2BE9922E7EB045B4830E66D2F7381D89 -i 00:10:45.628 22:27:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid e16869b7-7850-48a6-8185-c8386d344382 00:10:45.628 22:27:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:10:45.628 22:27:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g E16869B7785048A68185C8386D344382 -i 00:10:45.888 22:27:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:10:46.147 22:27:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:10:46.407 22:27:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:10:46.407 22:27:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:10:46.666 nvme0n1 00:10:46.666 22:27:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:10:46.666 22:27:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b 
nvme1 00:10:46.925 nvme1n2 00:10:46.925 22:27:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:10:46.925 22:27:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:10:46.925 22:27:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:10:46.925 22:27:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:10:46.925 22:27:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:10:47.184 22:27:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:10:47.184 22:27:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:10:47.184 22:27:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:10:47.184 22:27:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:10:47.442 22:27:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ 2be9922e-7eb0-45b4-830e-66d2f7381d89 == \2\b\e\9\9\2\2\e\-\7\e\b\0\-\4\5\b\4\-\8\3\0\e\-\6\6\d\2\f\7\3\8\1\d\8\9 ]] 00:10:47.442 22:27:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:10:47.442 22:27:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:10:47.442 22:27:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:10:47.442 22:27:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ e16869b7-7850-48a6-8185-c8386d344382 == \e\1\6\8\6\9\b\7\-\7\8\5\0\-\4\8\a\6\-\8\1\8\5\-\c\8\3\8\6\d\3\4\4\3\8\2 ]] 00:10:47.442 22:27:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@138 -- # killprocess 4112350 00:10:47.442 22:27:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@948 -- # '[' -z 4112350 ']' 00:10:47.442 22:27:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@952 -- # kill -0 4112350 00:10:47.442 22:27:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # uname 00:10:47.442 22:27:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:47.442 22:27:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4112350 00:10:47.442 22:27:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:10:47.442 22:27:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:10:47.442 22:27:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4112350' 00:10:47.442 killing process with pid 4112350 00:10:47.442 22:27:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@967 -- # kill 4112350 00:10:47.442 22:27:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@972 -- # wait 4112350 00:10:48.007 22:27:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:48.007 22:27:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@141 -- # trap - SIGINT SIGTERM EXIT 00:10:48.007 22:27:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@142 -- # nvmftestfini 00:10:48.007 22:27:11 
nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:48.007 22:27:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@117 -- # sync 00:10:48.007 22:27:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:48.007 22:27:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@120 -- # set +e 00:10:48.007 22:27:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:48.007 22:27:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:48.007 rmmod nvme_tcp 00:10:48.007 rmmod nvme_fabrics 00:10:48.007 rmmod nvme_keyring 00:10:48.007 22:27:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:48.007 22:27:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@124 -- # set -e 00:10:48.007 22:27:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@125 -- # return 0 00:10:48.007 22:27:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@489 -- # '[' -n 4109979 ']' 00:10:48.007 22:27:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@490 -- # killprocess 4109979 00:10:48.007 22:27:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@948 -- # '[' -z 4109979 ']' 00:10:48.007 22:27:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@952 -- # kill -0 4109979 00:10:48.007 22:27:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # uname 00:10:48.007 22:27:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:48.007 22:27:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4109979 00:10:48.267 22:27:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:10:48.267 22:27:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:10:48.267 22:27:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4109979' 00:10:48.267 killing process with pid 4109979 00:10:48.267 22:27:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@967 -- # kill 4109979 00:10:48.267 22:27:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@972 -- # wait 4109979 00:10:48.267 22:27:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:48.267 22:27:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:48.267 22:27:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:48.267 22:27:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:48.267 22:27:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:48.267 22:27:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:48.267 22:27:12 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:48.267 22:27:12 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:50.834 22:27:14 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:50.834 00:10:50.834 real 0m23.097s 00:10:50.834 user 0m25.151s 00:10:50.834 sys 0m6.053s 00:10:50.834 22:27:14 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:50.834 22:27:14 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:10:50.834 ************************************ 00:10:50.834 END TEST nvmf_ns_masking 00:10:50.834 ************************************ 00:10:50.834 22:27:14 nvmf_tcp -- 
common/autotest_common.sh@1142 -- # return 0 00:10:50.834 22:27:14 nvmf_tcp -- nvmf/nvmf.sh@37 -- # [[ 1 -eq 1 ]] 00:10:50.834 22:27:14 nvmf_tcp -- nvmf/nvmf.sh@38 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:10:50.834 22:27:14 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:50.834 22:27:14 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:50.834 22:27:14 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:50.834 ************************************ 00:10:50.834 START TEST nvmf_nvme_cli 00:10:50.834 ************************************ 00:10:50.834 22:27:14 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:10:50.834 * Looking for test storage... 00:10:50.834 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:50.834 22:27:14 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:50.834 22:27:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 00:10:50.834 22:27:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:50.834 22:27:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:50.834 22:27:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:50.834 22:27:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:50.834 22:27:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:50.834 22:27:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:50.834 22:27:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:50.834 22:27:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:50.834 22:27:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:50.834 22:27:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:50.834 22:27:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:10:50.834 22:27:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:10:50.834 22:27:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:50.834 22:27:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:50.834 22:27:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:50.834 22:27:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:50.834 22:27:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:50.834 22:27:14 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:50.834 22:27:14 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:50.834 22:27:14 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:50.834 22:27:14 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:50.834 22:27:14 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:50.834 22:27:14 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:50.834 22:27:14 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:10:50.834 22:27:14 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:50.834 22:27:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@47 -- # : 0 00:10:50.834 22:27:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:50.834 22:27:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:50.834 22:27:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:50.834 22:27:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:50.834 22:27:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:50.834 22:27:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:50.834 22:27:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:50.834 22:27:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:50.834 22:27:14 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:50.834 22:27:14 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:50.834 22:27:14 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:10:50.834 22:27:14 nvmf_tcp.nvmf_nvme_cli -- 
target/nvme_cli.sh@16 -- # nvmftestinit 00:10:50.834 22:27:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:50.834 22:27:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:50.834 22:27:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:50.834 22:27:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:50.834 22:27:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:50.834 22:27:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:50.834 22:27:14 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:50.834 22:27:14 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:50.834 22:27:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:50.834 22:27:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:50.834 22:27:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@285 -- # xtrace_disable 00:10:50.834 22:27:14 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:10:56.109 22:27:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:56.109 22:27:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@291 -- # pci_devs=() 00:10:56.109 22:27:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:56.109 22:27:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:56.109 22:27:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:56.109 22:27:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:56.109 22:27:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:56.109 22:27:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@295 -- # net_devs=() 00:10:56.109 22:27:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:56.109 22:27:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@296 -- # e810=() 00:10:56.109 22:27:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@296 -- # local -ga e810 00:10:56.109 22:27:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@297 -- # x722=() 00:10:56.109 22:27:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@297 -- # local -ga x722 00:10:56.109 22:27:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@298 -- # mlx=() 00:10:56.109 22:27:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@298 -- # local -ga mlx 00:10:56.109 22:27:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:56.109 22:27:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:56.109 22:27:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:56.109 22:27:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:56.109 22:27:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:56.109 22:27:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:56.109 22:27:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:56.109 22:27:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:56.109 22:27:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@315 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:56.109 22:27:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:56.109 22:27:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:56.109 22:27:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:56.109 22:27:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:56.109 22:27:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:56.109 22:27:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:56.109 22:27:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:56.109 22:27:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:56.110 22:27:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:56.110 22:27:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:10:56.110 Found 0000:86:00.0 (0x8086 - 0x159b) 00:10:56.110 22:27:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:56.110 22:27:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:56.110 22:27:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:56.110 22:27:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:56.110 22:27:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:56.110 22:27:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:56.110 22:27:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:10:56.110 Found 0000:86:00.1 (0x8086 - 0x159b) 00:10:56.110 22:27:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:56.110 22:27:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:56.110 22:27:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:56.110 22:27:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:56.110 22:27:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:56.110 22:27:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:56.110 22:27:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:56.110 22:27:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:56.110 22:27:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:56.110 22:27:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:56.110 22:27:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:56.110 22:27:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:56.110 22:27:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:56.110 22:27:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:56.110 22:27:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:56.110 22:27:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:10:56.110 Found net devices under 0000:86:00.0: cvl_0_0 00:10:56.110 22:27:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:10:56.110 22:27:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:56.110 22:27:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:56.110 22:27:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:56.110 22:27:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:56.110 22:27:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:56.110 22:27:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:56.110 22:27:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:56.110 22:27:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:10:56.110 Found net devices under 0000:86:00.1: cvl_0_1 00:10:56.110 22:27:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:56.110 22:27:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:56.110 22:27:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # is_hw=yes 00:10:56.110 22:27:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:56.110 22:27:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:56.110 22:27:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:56.110 22:27:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:56.110 22:27:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:56.110 22:27:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:56.110 22:27:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:56.110 22:27:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:56.110 22:27:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:56.110 22:27:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:56.110 22:27:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:56.110 22:27:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:56.110 22:27:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:56.110 22:27:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:56.110 22:27:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:56.110 22:27:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:56.110 22:27:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:56.110 22:27:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:56.110 22:27:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:56.110 22:27:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:56.110 22:27:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:56.110 22:27:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:56.110 22:27:19 
nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:56.110 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:56.110 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.273 ms 00:10:56.110 00:10:56.110 --- 10.0.0.2 ping statistics --- 00:10:56.110 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:56.110 rtt min/avg/max/mdev = 0.273/0.273/0.273/0.000 ms 00:10:56.110 22:27:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:56.110 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:56.110 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.253 ms 00:10:56.110 00:10:56.110 --- 10.0.0.1 ping statistics --- 00:10:56.110 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:56.110 rtt min/avg/max/mdev = 0.253/0.253/0.253/0.000 ms 00:10:56.110 22:27:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:56.110 22:27:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@422 -- # return 0 00:10:56.110 22:27:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:56.110 22:27:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:56.110 22:27:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:56.110 22:27:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:56.110 22:27:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:56.110 22:27:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:56.110 22:27:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:56.110 22:27:19 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:10:56.110 22:27:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:56.110 22:27:19 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:56.110 22:27:19 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:10:56.110 22:27:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@481 -- # nvmfpid=4116736 00:10:56.110 22:27:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@482 -- # waitforlisten 4116736 00:10:56.110 22:27:19 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@829 -- # '[' -z 4116736 ']' 00:10:56.110 22:27:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:56.110 22:27:19 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:56.110 22:27:19 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:56.110 22:27:19 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:56.110 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:56.110 22:27:19 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:56.110 22:27:19 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:10:56.110 [2024-07-15 22:27:19.596521] Starting SPDK v24.09-pre git sha1 f8598a71f / DPDK 24.03.0 initialization... 
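Those two clean pings are the payoff of the phy-NIC plumbing traced above. Gathered into one sequence (a sketch; cvl_0_0 and cvl_0_1 are the two e810 ports found during device discovery):

ip netns add cvl_0_0_ns_spdk                                   # target runs in its own network namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # first port moves to the target side
ip addr add 10.0.0.1/24 dev cvl_0_1                            # second port stays in the root namespace as the initiator
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP on the initiator port

The nvmf_tgt being launched next therefore runs inside cvl_0_0_ns_spdk and will listen on 10.0.0.2:4420, with nvme-cli reaching it from 10.0.0.1 in the root namespace.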
00:10:56.110 [2024-07-15 22:27:19.596563] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:56.110 EAL: No free 2048 kB hugepages reported on node 1 00:10:56.110 [2024-07-15 22:27:19.652770] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:56.110 [2024-07-15 22:27:19.734175] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:56.110 [2024-07-15 22:27:19.734208] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:56.110 [2024-07-15 22:27:19.734215] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:56.110 [2024-07-15 22:27:19.734221] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:56.110 [2024-07-15 22:27:19.734229] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:56.110 [2024-07-15 22:27:19.734276] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:56.110 [2024-07-15 22:27:19.734292] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:56.110 [2024-07-15 22:27:19.734310] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:56.110 [2024-07-15 22:27:19.734312] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:56.679 22:27:20 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:56.679 22:27:20 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@862 -- # return 0 00:10:56.679 22:27:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:56.679 22:27:20 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:56.679 22:27:20 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:10:56.679 22:27:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:56.679 22:27:20 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:56.679 22:27:20 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:56.679 22:27:20 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:10:56.679 [2024-07-15 22:27:20.454310] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:56.679 22:27:20 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:56.679 22:27:20 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:56.679 22:27:20 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:56.679 22:27:20 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:10:56.679 Malloc0 00:10:56.679 22:27:20 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:56.679 22:27:20 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:10:56.679 22:27:20 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:56.679 22:27:20 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:10:56.679 Malloc1 00:10:56.679 22:27:20 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:56.679 22:27:20 
nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:10:56.679 22:27:20 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:56.679 22:27:20 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:10:56.679 22:27:20 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:56.679 22:27:20 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:56.679 22:27:20 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:56.679 22:27:20 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:10:56.679 22:27:20 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:56.679 22:27:20 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:56.679 22:27:20 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:56.679 22:27:20 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:10:56.679 22:27:20 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:56.679 22:27:20 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:56.679 22:27:20 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:56.679 22:27:20 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:10:56.679 [2024-07-15 22:27:20.531628] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:56.679 22:27:20 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:56.679 22:27:20 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:56.679 22:27:20 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:56.679 22:27:20 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:10:56.679 22:27:20 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:56.679 22:27:20 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 4420 00:10:56.679 00:10:56.679 Discovery Log Number of Records 2, Generation counter 2 00:10:56.679 =====Discovery Log Entry 0====== 00:10:56.679 trtype: tcp 00:10:56.680 adrfam: ipv4 00:10:56.680 subtype: current discovery subsystem 00:10:56.680 treq: not required 00:10:56.680 portid: 0 00:10:56.680 trsvcid: 4420 00:10:56.680 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:10:56.680 traddr: 10.0.0.2 00:10:56.680 eflags: explicit discovery connections, duplicate discovery information 00:10:56.680 sectype: none 00:10:56.680 =====Discovery Log Entry 1====== 00:10:56.680 trtype: tcp 00:10:56.680 adrfam: ipv4 00:10:56.680 subtype: nvme subsystem 00:10:56.680 treq: not required 00:10:56.680 portid: 0 00:10:56.680 trsvcid: 4420 00:10:56.680 subnqn: nqn.2016-06.io.spdk:cnode1 00:10:56.680 traddr: 10.0.0.2 00:10:56.680 eflags: none 00:10:56.680 sectype: none 00:10:56.680 22:27:20 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:10:56.680 22:27:20 nvmf_tcp.nvmf_nvme_cli -- 
target/nvme_cli.sh@31 -- # get_nvme_devs 00:10:56.680 22:27:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:10:56.680 22:27:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:10:56.680 22:27:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:10:56.680 22:27:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:10:56.680 22:27:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:10:56.680 22:27:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:10:56.680 22:27:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:10:56.680 22:27:20 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:10:56.680 22:27:20 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:58.056 22:27:21 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:10:58.056 22:27:21 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1198 -- # local i=0 00:10:58.056 22:27:21 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:10:58.056 22:27:21 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:10:58.056 22:27:21 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:10:58.056 22:27:21 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # sleep 2 00:11:00.017 22:27:23 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:11:00.017 22:27:23 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:11:00.017 22:27:23 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:11:00.017 22:27:23 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:11:00.017 22:27:23 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:11:00.017 22:27:23 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # return 0 00:11:00.017 22:27:23 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:11:00.017 22:27:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:11:00.017 22:27:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:00.017 22:27:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:11:00.017 22:27:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:11:00.017 22:27:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:00.017 22:27:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:11:00.017 22:27:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:00.017 22:27:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:11:00.017 22:27:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:11:00.017 22:27:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:00.017 22:27:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:11:00.017 22:27:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:11:00.017 22:27:23 
nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:00.017 22:27:23 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n2 00:11:00.017 /dev/nvme0n1 ]] 00:11:00.017 22:27:23 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:11:00.017 22:27:23 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:11:00.017 22:27:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:11:00.017 22:27:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:00.017 22:27:23 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:11:00.276 22:27:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:11:00.276 22:27:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:00.276 22:27:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:11:00.276 22:27:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:00.276 22:27:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:11:00.276 22:27:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:11:00.276 22:27:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:00.276 22:27:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:11:00.276 22:27:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:11:00.276 22:27:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:00.276 22:27:24 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:11:00.276 22:27:24 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:00.536 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:00.536 22:27:24 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:00.536 22:27:24 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1219 -- # local i=0 00:11:00.536 22:27:24 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:11:00.536 22:27:24 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:00.536 22:27:24 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:11:00.536 22:27:24 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:00.536 22:27:24 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # return 0 00:11:00.536 22:27:24 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:11:00.536 22:27:24 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:00.536 22:27:24 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:00.536 22:27:24 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:00.536 22:27:24 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:00.536 22:27:24 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:11:00.536 22:27:24 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:11:00.536 22:27:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:00.536 22:27:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@117 -- # sync 00:11:00.536 22:27:24 nvmf_tcp.nvmf_nvme_cli -- 
nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:00.536 22:27:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@120 -- # set +e 00:11:00.536 22:27:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:00.536 22:27:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:00.536 rmmod nvme_tcp 00:11:00.536 rmmod nvme_fabrics 00:11:00.536 rmmod nvme_keyring 00:11:00.536 22:27:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:00.536 22:27:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set -e 00:11:00.536 22:27:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@125 -- # return 0 00:11:00.536 22:27:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@489 -- # '[' -n 4116736 ']' 00:11:00.536 22:27:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@490 -- # killprocess 4116736 00:11:00.536 22:27:24 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@948 -- # '[' -z 4116736 ']' 00:11:00.536 22:27:24 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@952 -- # kill -0 4116736 00:11:00.536 22:27:24 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@953 -- # uname 00:11:00.536 22:27:24 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:00.536 22:27:24 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4116736 00:11:00.536 22:27:24 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:11:00.536 22:27:24 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:11:00.536 22:27:24 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4116736' 00:11:00.536 killing process with pid 4116736 00:11:00.536 22:27:24 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@967 -- # kill 4116736 00:11:00.536 22:27:24 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@972 -- # wait 4116736 00:11:00.796 22:27:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:00.796 22:27:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:00.796 22:27:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:00.796 22:27:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:00.796 22:27:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:00.796 22:27:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:00.796 22:27:24 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:00.796 22:27:24 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:03.334 22:27:26 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:03.334 00:11:03.334 real 0m12.406s 00:11:03.334 user 0m21.202s 00:11:03.334 sys 0m4.323s 00:11:03.334 22:27:26 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:03.334 22:27:26 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:03.334 ************************************ 00:11:03.335 END TEST nvmf_nvme_cli 00:11:03.335 ************************************ 00:11:03.335 22:27:26 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:11:03.335 22:27:26 nvmf_tcp -- nvmf/nvmf.sh@40 -- # [[ 1 -eq 1 ]] 00:11:03.335 22:27:26 nvmf_tcp -- nvmf/nvmf.sh@41 -- # run_test nvmf_vfio_user 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:11:03.335 22:27:26 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:03.335 22:27:26 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:03.335 22:27:26 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:03.335 ************************************ 00:11:03.335 START TEST nvmf_vfio_user 00:11:03.335 ************************************ 00:11:03.335 22:27:26 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:11:03.335 * Looking for test storage... 00:11:03.335 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:03.335 22:27:26 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:03.335 22:27:26 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@7 -- # uname -s 00:11:03.335 22:27:26 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:03.335 22:27:26 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:03.335 22:27:26 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:03.335 22:27:26 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:03.335 22:27:26 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:03.335 22:27:26 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:03.335 22:27:26 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:03.335 22:27:26 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:03.335 22:27:26 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:03.335 22:27:26 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:03.335 22:27:26 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:11:03.335 22:27:26 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:11:03.335 22:27:26 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:03.335 22:27:26 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:03.335 22:27:26 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:03.335 22:27:26 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:03.335 22:27:26 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:03.335 22:27:26 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:03.335 22:27:26 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:03.335 22:27:26 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:03.335 22:27:26 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:03.335 22:27:26 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:03.335 22:27:26 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:03.335 22:27:26 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:11:03.335 22:27:26 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:03.335 22:27:26 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@47 -- # : 0 00:11:03.335 22:27:26 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:03.335 22:27:26 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:03.335 22:27:26 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:03.335 22:27:26 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:03.335 22:27:26 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:03.335 22:27:26 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:03.335 22:27:26 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:03.335 22:27:26 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:03.335 22:27:26 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:11:03.335 22:27:26 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:11:03.335 22:27:26 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:11:03.335 
22:27:26 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:03.335 22:27:26 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:11:03.335 22:27:26 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:11:03.335 22:27:26 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:11:03.335 22:27:26 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:11:03.335 22:27:26 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:11:03.335 22:27:26 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:11:03.335 22:27:26 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=4118022 00:11:03.335 22:27:26 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 4118022' 00:11:03.335 Process pid: 4118022 00:11:03.335 22:27:26 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:11:03.335 22:27:26 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 4118022 00:11:03.335 22:27:26 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@829 -- # '[' -z 4118022 ']' 00:11:03.335 22:27:26 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:03.335 22:27:26 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:03.335 22:27:26 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:11:03.335 22:27:26 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:03.335 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:03.335 22:27:26 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:03.335 22:27:26 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:11:03.335 [2024-07-15 22:27:26.975302] Starting SPDK v24.09-pre git sha1 f8598a71f / DPDK 24.03.0 initialization... 00:11:03.335 [2024-07-15 22:27:26.975358] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:03.335 EAL: No free 2048 kB hugepages reported on node 1 00:11:03.335 [2024-07-15 22:27:27.031251] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:03.335 [2024-07-15 22:27:27.111674] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:03.335 [2024-07-15 22:27:27.111709] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:03.335 [2024-07-15 22:27:27.111715] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:03.335 [2024-07-15 22:27:27.111721] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:03.335 [2024-07-15 22:27:27.111726] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
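[Aside] The waitforlisten call above (from autotest_common.sh, with rpc_addr=/var/tmp/spdk.sock as the trace shows) simply blocks until the freshly forked nvmf_tgt answers RPCs on its UNIX-domain socket. A minimal stand-in, assuming the default socket path and the stock rpc.py client; the polling interval and liveness check are illustrative, and nvmf_tgt / rpc.py abbreviate the full workspace paths used in the trace:

  nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' &
  nvmfpid=$!
  # probe the RPC socket until the app services it
  until rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      kill -0 "$nvmfpid" 2>/dev/null || exit 1   # bail out if the target died during startup
      sleep 0.5
  done

The EAL banner and the four reactor_run notices that follow are the app-side signal that startup finished; the RPC probe is what actually gates the test script.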
00:11:03.335 [2024-07-15 22:27:27.111764] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:03.335 [2024-07-15 22:27:27.111783] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:03.335 [2024-07-15 22:27:27.111863] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:11:03.335 [2024-07-15 22:27:27.111865] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:03.904 22:27:27 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:03.904 22:27:27 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@862 -- # return 0 00:11:03.904 22:27:27 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:11:04.841 22:27:28 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:11:05.100 22:27:28 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:11:05.100 22:27:28 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:11:05.100 22:27:28 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:11:05.100 22:27:28 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:11:05.100 22:27:28 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:11:05.359 Malloc1 00:11:05.359 22:27:29 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:11:05.619 22:27:29 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:11:05.619 22:27:29 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:11:05.877 22:27:29 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:11:05.877 22:27:29 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:11:05.877 22:27:29 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:11:06.136 Malloc2 00:11:06.136 22:27:29 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:11:06.136 22:27:30 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:11:06.395 22:27:30 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:11:06.656 22:27:30 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:11:06.656 22:27:30 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:11:06.656 22:27:30 
nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:11:06.656 22:27:30 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:11:06.656 22:27:30 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:11:06.656 22:27:30 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:11:06.656 [2024-07-15 22:27:30.510464] Starting SPDK v24.09-pre git sha1 f8598a71f / DPDK 24.03.0 initialization... 00:11:06.656 [2024-07-15 22:27:30.510501] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4118730 ] 00:11:06.656 EAL: No free 2048 kB hugepages reported on node 1 00:11:06.656 [2024-07-15 22:27:30.540748] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:11:06.656 [2024-07-15 22:27:30.550595] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:11:06.656 [2024-07-15 22:27:30.550613] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f9b96e2e000 00:11:06.656 [2024-07-15 22:27:30.551592] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:11:06.656 [2024-07-15 22:27:30.552590] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:11:06.656 [2024-07-15 22:27:30.553597] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:11:06.656 [2024-07-15 22:27:30.554603] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:11:06.656 [2024-07-15 22:27:30.555608] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:11:06.656 [2024-07-15 22:27:30.556616] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:11:06.656 [2024-07-15 22:27:30.557625] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:11:06.656 [2024-07-15 22:27:30.558627] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:11:06.656 [2024-07-15 22:27:30.559638] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:11:06.656 [2024-07-15 22:27:30.559646] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f9b96e23000 00:11:06.656 [2024-07-15 22:27:30.560587] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:11:06.656 [2024-07-15 22:27:30.569195] vfio_user_pci.c: 
386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:11:06.656 [2024-07-15 22:27:30.569216] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to connect adminq (no timeout) 00:11:06.656 [2024-07-15 22:27:30.574730] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:11:06.656 [2024-07-15 22:27:30.574766] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:11:06.656 [2024-07-15 22:27:30.574841] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for connect adminq (no timeout) 00:11:06.656 [2024-07-15 22:27:30.574859] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs (no timeout) 00:11:06.656 [2024-07-15 22:27:30.574864] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs wait for vs (no timeout) 00:11:06.656 [2024-07-15 22:27:30.575730] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:11:06.656 [2024-07-15 22:27:30.575738] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap (no timeout) 00:11:06.656 [2024-07-15 22:27:30.575745] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap wait for cap (no timeout) 00:11:06.656 [2024-07-15 22:27:30.576734] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:11:06.656 [2024-07-15 22:27:30.576742] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en (no timeout) 00:11:06.656 [2024-07-15 22:27:30.576748] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en wait for cc (timeout 15000 ms) 00:11:06.656 [2024-07-15 22:27:30.577741] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:11:06.656 [2024-07-15 22:27:30.577749] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:11:06.656 [2024-07-15 22:27:30.578741] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:11:06.656 [2024-07-15 22:27:30.578749] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 0 && CSTS.RDY = 0 00:11:06.656 [2024-07-15 22:27:30.578753] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to controller is disabled (timeout 15000 ms) 00:11:06.656 [2024-07-15 22:27:30.578759] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:11:06.656 [2024-07-15 22:27:30.578864] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Setting CC.EN = 1 00:11:06.656 [2024-07-15 22:27:30.578868] 
nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:11:06.656 [2024-07-15 22:27:30.578875] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:11:06.656 [2024-07-15 22:27:30.579749] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:11:06.656 [2024-07-15 22:27:30.580752] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:11:06.656 [2024-07-15 22:27:30.581758] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:11:06.656 [2024-07-15 22:27:30.582756] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:11:06.656 [2024-07-15 22:27:30.582818] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:11:06.656 [2024-07-15 22:27:30.583768] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:11:06.656 [2024-07-15 22:27:30.583775] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:11:06.656 [2024-07-15 22:27:30.583780] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to reset admin queue (timeout 30000 ms) 00:11:06.656 [2024-07-15 22:27:30.583796] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller (no timeout) 00:11:06.656 [2024-07-15 22:27:30.583804] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify controller (timeout 30000 ms) 00:11:06.656 [2024-07-15 22:27:30.583818] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:11:06.656 [2024-07-15 22:27:30.583823] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:11:06.656 [2024-07-15 22:27:30.583836] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:11:06.656 [2024-07-15 22:27:30.583871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:11:06.656 [2024-07-15 22:27:30.583880] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_xfer_size 131072 00:11:06.656 [2024-07-15 22:27:30.583886] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] MDTS max_xfer_size 131072 00:11:06.656 [2024-07-15 22:27:30.583890] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CNTLID 0x0001 00:11:06.656 [2024-07-15 22:27:30.583895] nvme_ctrlr.c:2071:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:11:06.656 [2024-07-15 22:27:30.583899] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user1/1] transport max_sges 1 00:11:06.656 [2024-07-15 22:27:30.583903] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] fuses compare and write: 1 00:11:06.656 [2024-07-15 22:27:30.583907] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to configure AER (timeout 30000 ms) 00:11:06.657 [2024-07-15 22:27:30.583914] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for configure aer (timeout 30000 ms) 00:11:06.657 [2024-07-15 22:27:30.583923] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:11:06.657 [2024-07-15 22:27:30.583937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:11:06.657 [2024-07-15 22:27:30.583951] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:06.657 [2024-07-15 22:27:30.583960] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:06.657 [2024-07-15 22:27:30.583968] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:06.657 [2024-07-15 22:27:30.583975] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:06.657 [2024-07-15 22:27:30.583979] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set keep alive timeout (timeout 30000 ms) 00:11:06.657 [2024-07-15 22:27:30.583987] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:11:06.657 [2024-07-15 22:27:30.583995] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:11:06.657 [2024-07-15 22:27:30.584001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:11:06.657 [2024-07-15 22:27:30.584006] nvme_ctrlr.c:3010:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Controller adjusted keep alive timeout to 0 ms 00:11:06.657 [2024-07-15 22:27:30.584011] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller iocs specific (timeout 30000 ms) 00:11:06.657 [2024-07-15 22:27:30.584017] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set number of queues (timeout 30000 ms) 00:11:06.657 [2024-07-15 22:27:30.584022] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set number of queues (timeout 30000 ms) 00:11:06.657 [2024-07-15 22:27:30.584030] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:11:06.657 [2024-07-15 22:27:30.584044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:11:06.657 [2024-07-15 22:27:30.584093] 
nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify active ns (timeout 30000 ms) 00:11:06.657 [2024-07-15 22:27:30.584100] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify active ns (timeout 30000 ms) 00:11:06.657 [2024-07-15 22:27:30.584107] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:11:06.657 [2024-07-15 22:27:30.584111] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:11:06.657 [2024-07-15 22:27:30.584117] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:11:06.657 [2024-07-15 22:27:30.584128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:11:06.657 [2024-07-15 22:27:30.584136] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Namespace 1 was added 00:11:06.657 [2024-07-15 22:27:30.584149] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns (timeout 30000 ms) 00:11:06.657 [2024-07-15 22:27:30.584156] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify ns (timeout 30000 ms) 00:11:06.657 [2024-07-15 22:27:30.584162] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:11:06.657 [2024-07-15 22:27:30.584166] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:11:06.657 [2024-07-15 22:27:30.584171] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:11:06.657 [2024-07-15 22:27:30.584190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:11:06.657 [2024-07-15 22:27:30.584203] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:11:06.657 [2024-07-15 22:27:30.584210] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:11:06.657 [2024-07-15 22:27:30.584216] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:11:06.657 [2024-07-15 22:27:30.584220] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:11:06.657 [2024-07-15 22:27:30.584230] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:11:06.657 [2024-07-15 22:27:30.584241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:11:06.657 [2024-07-15 22:27:30.584249] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns iocs specific (timeout 30000 ms) 00:11:06.657 [2024-07-15 22:27:30.584255] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported log pages (timeout 30000 ms) 
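[Aside] For orientation while reading this controller-init state machine: the controller being identified here is the one wired up earlier by setup_nvmf_vfio_user. Per endpoint, that setup reduces to the RPC sequence below, with paths and names exactly as they appear earlier in the trace (rpc.py abbreviates /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py):

  rpc.py nvmf_create_transport -t VFIOUSER
  mkdir -p /var/run/vfio-user/domain/vfio-user1/1
  rpc.py bdev_malloc_create 64 512 -b Malloc1
  rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1
  rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1
  rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0

The listener's -a argument is a filesystem path rather than an IP address because vfio-user controllers are exposed as socket files under that directory; spdk_nvme_identify then attaches with trtype:VFIOUSER and that path as traddr, which is exactly the -r string shown at the start of this identify run.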
00:11:06.657 [2024-07-15 22:27:30.584262] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported features (timeout 30000 ms) 00:11:06.657 [2024-07-15 22:27:30.584267] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host behavior support feature (timeout 30000 ms) 00:11:06.657 [2024-07-15 22:27:30.584271] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set doorbell buffer config (timeout 30000 ms) 00:11:06.657 [2024-07-15 22:27:30.584276] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host ID (timeout 30000 ms) 00:11:06.657 [2024-07-15 22:27:30.584280] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] NVMe-oF transport - not sending Set Features - Host ID 00:11:06.657 [2024-07-15 22:27:30.584284] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to transport ready (timeout 30000 ms) 00:11:06.657 [2024-07-15 22:27:30.584288] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to ready (no timeout) 00:11:06.657 [2024-07-15 22:27:30.584307] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:11:06.657 [2024-07-15 22:27:30.584318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:11:06.657 [2024-07-15 22:27:30.584328] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:11:06.657 [2024-07-15 22:27:30.584334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:11:06.657 [2024-07-15 22:27:30.584343] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:11:06.657 [2024-07-15 22:27:30.584357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:11:06.657 [2024-07-15 22:27:30.584367] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:11:06.657 [2024-07-15 22:27:30.584376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:11:06.657 [2024-07-15 22:27:30.584389] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:11:06.657 [2024-07-15 22:27:30.584394] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:11:06.657 [2024-07-15 22:27:30.584397] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:11:06.657 [2024-07-15 22:27:30.584401] nvme_pcie_common.c:1254:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:11:06.657 [2024-07-15 22:27:30.584406] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:11:06.657 [2024-07-15 22:27:30.584412] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:11:06.657 
[2024-07-15 22:27:30.584416] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:11:06.657 [2024-07-15 22:27:30.584421] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:11:06.657 [2024-07-15 22:27:30.584427] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:11:06.657 [2024-07-15 22:27:30.584431] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:11:06.657 [2024-07-15 22:27:30.584437] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:11:06.657 [2024-07-15 22:27:30.584443] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:11:06.657 [2024-07-15 22:27:30.584447] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:11:06.658 [2024-07-15 22:27:30.584452] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:11:06.658 [2024-07-15 22:27:30.584458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:11:06.658 [2024-07-15 22:27:30.584469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:11:06.658 [2024-07-15 22:27:30.584478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:11:06.658 [2024-07-15 22:27:30.584485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:11:06.658 ===================================================== 00:11:06.658 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:11:06.658 ===================================================== 00:11:06.658 Controller Capabilities/Features 00:11:06.658 ================================ 00:11:06.658 Vendor ID: 4e58 00:11:06.658 Subsystem Vendor ID: 4e58 00:11:06.658 Serial Number: SPDK1 00:11:06.658 Model Number: SPDK bdev Controller 00:11:06.658 Firmware Version: 24.09 00:11:06.658 Recommended Arb Burst: 6 00:11:06.658 IEEE OUI Identifier: 8d 6b 50 00:11:06.658 Multi-path I/O 00:11:06.658 May have multiple subsystem ports: Yes 00:11:06.658 May have multiple controllers: Yes 00:11:06.658 Associated with SR-IOV VF: No 00:11:06.658 Max Data Transfer Size: 131072 00:11:06.658 Max Number of Namespaces: 32 00:11:06.658 Max Number of I/O Queues: 127 00:11:06.658 NVMe Specification Version (VS): 1.3 00:11:06.658 NVMe Specification Version (Identify): 1.3 00:11:06.658 Maximum Queue Entries: 256 00:11:06.658 Contiguous Queues Required: Yes 00:11:06.658 Arbitration Mechanisms Supported 00:11:06.658 Weighted Round Robin: Not Supported 00:11:06.658 Vendor Specific: Not Supported 00:11:06.658 Reset Timeout: 15000 ms 00:11:06.658 Doorbell Stride: 4 bytes 00:11:06.658 NVM Subsystem Reset: Not Supported 00:11:06.658 Command Sets Supported 00:11:06.658 NVM Command Set: Supported 00:11:06.658 Boot Partition: Not Supported 00:11:06.658 Memory Page Size Minimum: 4096 bytes 00:11:06.658 Memory Page Size Maximum: 4096 bytes 00:11:06.658 Persistent Memory Region: Not Supported 
00:11:06.658 Optional Asynchronous Events Supported 00:11:06.658 Namespace Attribute Notices: Supported 00:11:06.658 Firmware Activation Notices: Not Supported 00:11:06.658 ANA Change Notices: Not Supported 00:11:06.658 PLE Aggregate Log Change Notices: Not Supported 00:11:06.658 LBA Status Info Alert Notices: Not Supported 00:11:06.658 EGE Aggregate Log Change Notices: Not Supported 00:11:06.658 Normal NVM Subsystem Shutdown event: Not Supported 00:11:06.658 Zone Descriptor Change Notices: Not Supported 00:11:06.658 Discovery Log Change Notices: Not Supported 00:11:06.658 Controller Attributes 00:11:06.658 128-bit Host Identifier: Supported 00:11:06.658 Non-Operational Permissive Mode: Not Supported 00:11:06.658 NVM Sets: Not Supported 00:11:06.658 Read Recovery Levels: Not Supported 00:11:06.658 Endurance Groups: Not Supported 00:11:06.658 Predictable Latency Mode: Not Supported 00:11:06.658 Traffic Based Keep ALive: Not Supported 00:11:06.658 Namespace Granularity: Not Supported 00:11:06.658 SQ Associations: Not Supported 00:11:06.658 UUID List: Not Supported 00:11:06.658 Multi-Domain Subsystem: Not Supported 00:11:06.658 Fixed Capacity Management: Not Supported 00:11:06.658 Variable Capacity Management: Not Supported 00:11:06.658 Delete Endurance Group: Not Supported 00:11:06.658 Delete NVM Set: Not Supported 00:11:06.658 Extended LBA Formats Supported: Not Supported 00:11:06.658 Flexible Data Placement Supported: Not Supported 00:11:06.658 00:11:06.658 Controller Memory Buffer Support 00:11:06.658 ================================ 00:11:06.658 Supported: No 00:11:06.658 00:11:06.658 Persistent Memory Region Support 00:11:06.658 ================================ 00:11:06.658 Supported: No 00:11:06.658 00:11:06.658 Admin Command Set Attributes 00:11:06.658 ============================ 00:11:06.658 Security Send/Receive: Not Supported 00:11:06.658 Format NVM: Not Supported 00:11:06.658 Firmware Activate/Download: Not Supported 00:11:06.658 Namespace Management: Not Supported 00:11:06.658 Device Self-Test: Not Supported 00:11:06.658 Directives: Not Supported 00:11:06.658 NVMe-MI: Not Supported 00:11:06.658 Virtualization Management: Not Supported 00:11:06.658 Doorbell Buffer Config: Not Supported 00:11:06.658 Get LBA Status Capability: Not Supported 00:11:06.658 Command & Feature Lockdown Capability: Not Supported 00:11:06.658 Abort Command Limit: 4 00:11:06.658 Async Event Request Limit: 4 00:11:06.658 Number of Firmware Slots: N/A 00:11:06.658 Firmware Slot 1 Read-Only: N/A 00:11:06.658 Firmware Activation Without Reset: N/A 00:11:06.658 Multiple Update Detection Support: N/A 00:11:06.658 Firmware Update Granularity: No Information Provided 00:11:06.658 Per-Namespace SMART Log: No 00:11:06.658 Asymmetric Namespace Access Log Page: Not Supported 00:11:06.658 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:11:06.658 Command Effects Log Page: Supported 00:11:06.658 Get Log Page Extended Data: Supported 00:11:06.658 Telemetry Log Pages: Not Supported 00:11:06.658 Persistent Event Log Pages: Not Supported 00:11:06.658 Supported Log Pages Log Page: May Support 00:11:06.658 Commands Supported & Effects Log Page: Not Supported 00:11:06.658 Feature Identifiers & Effects Log Page:May Support 00:11:06.658 NVMe-MI Commands & Effects Log Page: May Support 00:11:06.658 Data Area 4 for Telemetry Log: Not Supported 00:11:06.658 Error Log Page Entries Supported: 128 00:11:06.658 Keep Alive: Supported 00:11:06.658 Keep Alive Granularity: 10000 ms 00:11:06.658 00:11:06.658 NVM Command Set Attributes 
00:11:06.658 ========================== 00:11:06.658 Submission Queue Entry Size 00:11:06.658 Max: 64 00:11:06.658 Min: 64 00:11:06.658 Completion Queue Entry Size 00:11:06.658 Max: 16 00:11:06.658 Min: 16 00:11:06.658 Number of Namespaces: 32 00:11:06.658 Compare Command: Supported 00:11:06.658 Write Uncorrectable Command: Not Supported 00:11:06.658 Dataset Management Command: Supported 00:11:06.658 Write Zeroes Command: Supported 00:11:06.658 Set Features Save Field: Not Supported 00:11:06.658 Reservations: Not Supported 00:11:06.658 Timestamp: Not Supported 00:11:06.658 Copy: Supported 00:11:06.658 Volatile Write Cache: Present 00:11:06.658 Atomic Write Unit (Normal): 1 00:11:06.658 Atomic Write Unit (PFail): 1 00:11:06.658 Atomic Compare & Write Unit: 1 00:11:06.658 Fused Compare & Write: Supported 00:11:06.658 Scatter-Gather List 00:11:06.658 SGL Command Set: Supported (Dword aligned) 00:11:06.658 SGL Keyed: Not Supported 00:11:06.658 SGL Bit Bucket Descriptor: Not Supported 00:11:06.658 SGL Metadata Pointer: Not Supported 00:11:06.658 Oversized SGL: Not Supported 00:11:06.658 SGL Metadata Address: Not Supported 00:11:06.658 SGL Offset: Not Supported 00:11:06.658 Transport SGL Data Block: Not Supported 00:11:06.658 Replay Protected Memory Block: Not Supported 00:11:06.658 00:11:06.658 Firmware Slot Information 00:11:06.659 ========================= 00:11:06.659 Active slot: 1 00:11:06.659 Slot 1 Firmware Revision: 24.09 00:11:06.659 00:11:06.659 00:11:06.659 Commands Supported and Effects 00:11:06.659 ============================== 00:11:06.659 Admin Commands 00:11:06.659 -------------- 00:11:06.659 Get Log Page (02h): Supported 00:11:06.659 Identify (06h): Supported 00:11:06.659 Abort (08h): Supported 00:11:06.659 Set Features (09h): Supported 00:11:06.659 Get Features (0Ah): Supported 00:11:06.659 Asynchronous Event Request (0Ch): Supported 00:11:06.659 Keep Alive (18h): Supported 00:11:06.659 I/O Commands 00:11:06.659 ------------ 00:11:06.659 Flush (00h): Supported LBA-Change 00:11:06.659 Write (01h): Supported LBA-Change 00:11:06.659 Read (02h): Supported 00:11:06.659 Compare (05h): Supported 00:11:06.659 Write Zeroes (08h): Supported LBA-Change 00:11:06.659 Dataset Management (09h): Supported LBA-Change 00:11:06.659 Copy (19h): Supported LBA-Change 00:11:06.659 00:11:06.659 Error Log 00:11:06.659 ========= 00:11:06.659 00:11:06.659 Arbitration 00:11:06.659 =========== 00:11:06.659 Arbitration Burst: 1 00:11:06.659 00:11:06.659 Power Management 00:11:06.659 ================ 00:11:06.659 Number of Power States: 1 00:11:06.659 Current Power State: Power State #0 00:11:06.659 Power State #0: 00:11:06.659 Max Power: 0.00 W 00:11:06.659 Non-Operational State: Operational 00:11:06.659 Entry Latency: Not Reported 00:11:06.659 Exit Latency: Not Reported 00:11:06.659 Relative Read Throughput: 0 00:11:06.659 Relative Read Latency: 0 00:11:06.659 Relative Write Throughput: 0 00:11:06.659 Relative Write Latency: 0 00:11:06.659 Idle Power: Not Reported 00:11:06.659 Active Power: Not Reported 00:11:06.659 Non-Operational Permissive Mode: Not Supported 00:11:06.659 00:11:06.659 Health Information 00:11:06.659 ================== 00:11:06.659 Critical Warnings: 00:11:06.659 Available Spare Space: OK 00:11:06.659 Temperature: OK 00:11:06.659 Device Reliability: OK 00:11:06.659 Read Only: No 00:11:06.659 Volatile Memory Backup: OK 00:11:06.659 Current Temperature: 0 Kelvin (-273 Celsius) 00:11:06.659 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:11:06.659 Available Spare: 0% 00:11:06.659 
[2024-07-15 22:27:30.584575] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:11:06.659 [2024-07-15 22:27:30.584584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:11:06.659 [2024-07-15 22:27:30.584612] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Prepare to destruct SSD 00:11:06.659 [2024-07-15 22:27:30.584620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:06.659 [2024-07-15 22:27:30.584626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:06.659 [2024-07-15 22:27:30.584631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:06.659 [2024-07-15 22:27:30.584637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:06.659 [2024-07-15 22:27:30.588235] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:11:06.659 [2024-07-15 22:27:30.588246] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:11:06.659 [2024-07-15 22:27:30.588791] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:11:06.659 [2024-07-15 22:27:30.588836] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] RTD3E = 0 us 00:11:06.659 [2024-07-15 22:27:30.588844] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown timeout = 10000 ms 00:11:06.659 [2024-07-15 22:27:30.589803] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:11:06.659 [2024-07-15 22:27:30.589814] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown complete in 0 milliseconds 00:11:06.659 [2024-07-15 22:27:30.589861] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:11:06.659 [2024-07-15 22:27:30.591835] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:11:06.919 Available Spare Threshold: 0% 00:11:06.919 Life Percentage Used: 0% 00:11:06.919 Data Units Read: 0 00:11:06.919 Data Units Written: 0 00:11:06.919 Host Read Commands: 0 00:11:06.919 Host Write Commands: 0 00:11:06.919 Controller Busy Time: 0 minutes 00:11:06.919 Power Cycles: 0 00:11:06.919 Power On Hours: 0 hours 00:11:06.919 Unsafe Shutdowns: 0 00:11:06.919 Unrecoverable Media Errors: 0 00:11:06.919 Lifetime Error Log Entries: 0 00:11:06.919 Warning Temperature Time: 0 minutes 00:11:06.919 Critical Temperature Time: 0 minutes 00:11:06.919 00:11:06.919 Number of Queues 00:11:06.919 ================ 00:11:06.919 Number of I/O Submission Queues: 127 00:11:06.919 Number of I/O Completion Queues: 127 00:11:06.919 00:11:06.919 Active Namespaces 00:11:06.919 ================= 00:11:06.919 Namespace ID:1 00:11:06.919 Error Recovery Timeout: Unlimited 00:11:06.919 Command
Set Identifier: NVM (00h) 00:11:06.919 Deallocate: Supported 00:11:06.919 Deallocated/Unwritten Error: Not Supported 00:11:06.919 Deallocated Read Value: Unknown 00:11:06.919 Deallocate in Write Zeroes: Not Supported 00:11:06.919 Deallocated Guard Field: 0xFFFF 00:11:06.919 Flush: Supported 00:11:06.919 Reservation: Supported 00:11:06.919 Namespace Sharing Capabilities: Multiple Controllers 00:11:06.919 Size (in LBAs): 131072 (0GiB) 00:11:06.919 Capacity (in LBAs): 131072 (0GiB) 00:11:06.919 Utilization (in LBAs): 131072 (0GiB) 00:11:06.919 NGUID: 60DAAA6A2D2C4992B7AAB62D8023977B 00:11:06.919 UUID: 60daaa6a-2d2c-4992-b7aa-b62d8023977b 00:11:06.919 Thin Provisioning: Not Supported 00:11:06.919 Per-NS Atomic Units: Yes 00:11:06.919 Atomic Boundary Size (Normal): 0 00:11:06.919 Atomic Boundary Size (PFail): 0 00:11:06.919 Atomic Boundary Offset: 0 00:11:06.919 Maximum Single Source Range Length: 65535 00:11:06.919 Maximum Copy Length: 65535 00:11:06.919 Maximum Source Range Count: 1 00:11:06.919 NGUID/EUI64 Never Reused: No 00:11:06.919 Namespace Write Protected: No 00:11:06.919 Number of LBA Formats: 1 00:11:06.919 Current LBA Format: LBA Format #00 00:11:06.919 LBA Format #00: Data Size: 512 Metadata Size: 0 00:11:06.919 00:11:06.919 22:27:30 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:11:06.919 EAL: No free 2048 kB hugepages reported on node 1 00:11:06.919 [2024-07-15 22:27:30.808031] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:11:12.201 Initializing NVMe Controllers 00:11:12.201 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:11:12.201 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:11:12.201 Initialization complete. Launching workers. 00:11:12.201 ======================================================== 00:11:12.201 Latency(us) 00:11:12.201 Device Information : IOPS MiB/s Average min max 00:11:12.201 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 39944.86 156.03 3204.23 962.34 6620.85 00:11:12.201 ======================================================== 00:11:12.201 Total : 39944.86 156.03 3204.23 962.34 6620.85 00:11:12.201 00:11:12.201 [2024-07-15 22:27:35.827963] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:11:12.202 22:27:35 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:11:12.202 EAL: No free 2048 kB hugepages reported on node 1 00:11:12.202 [2024-07-15 22:27:36.046975] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:11:17.476 Initializing NVMe Controllers 00:11:17.476 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:11:17.476 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:11:17.476 Initialization complete. Launching workers. 
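The two spdk_nvme_perf runs here differ only in the -w workload argument; the transport string, queue depth, I/O size, run time, and core mask are shared. A minimal sketch of the invocation pattern, assuming the workspace layout and subsystem NQN used by this job (not general defaults):

  #!/usr/bin/env bash
  # Sketch only: the SPDK path and NQN below are taken from this job's workspace.
  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  TRID='trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1'
  for wl in read write; do
      # -s 256: hugepage memory in MB, -g: single hugetlbfs file segments,
      # -q 128: queue depth, -o 4096: I/O size in bytes, -t 5: run time in seconds,
      # -c 0x2: run the I/O workload on core 1 only.
      "$SPDK/build/bin/spdk_nvme_perf" -r "$TRID" -s 256 -g -q 128 -o 4096 -w "$wl" -t 5 -c 0x2
  done

Against the same Malloc-backed namespace, the 4 KiB read run above reports roughly 2.5x the IOPS of the write run whose results follow (about 39.9k vs 16.0k).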
00:11:17.476 ======================================================== 00:11:17.476 Latency(us) 00:11:17.476 Device Information : IOPS MiB/s Average min max 00:11:17.476 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16039.28 62.65 7979.73 5979.22 9978.85 00:11:17.476 ======================================================== 00:11:17.476 Total : 16039.28 62.65 7979.73 5979.22 9978.85 00:11:17.476 00:11:17.476 [2024-07-15 22:27:41.080812] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:11:17.476 22:27:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:11:17.476 EAL: No free 2048 kB hugepages reported on node 1 00:11:17.476 [2024-07-15 22:27:41.268765] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:11:22.839 [2024-07-15 22:27:46.362595] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:11:22.839 Initializing NVMe Controllers 00:11:22.839 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:11:22.839 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:11:22.839 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:11:22.839 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:11:22.839 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:11:22.839 Initialization complete. Launching workers. 00:11:22.839 Starting thread on core 2 00:11:22.839 Starting thread on core 3 00:11:22.839 Starting thread on core 1 00:11:22.839 22:27:46 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:11:22.839 EAL: No free 2048 kB hugepages reported on node 1 00:11:22.839 [2024-07-15 22:27:46.644597] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:11:26.159 [2024-07-15 22:27:49.712328] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:11:26.159 Initializing NVMe Controllers 00:11:26.159 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:11:26.159 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:11:26.159 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:11:26.159 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:11:26.159 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:11:26.159 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:11:26.159 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:11:26.159 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:11:26.159 Initialization complete. Launching workers. 
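The reconnect run above passes -c 0xE, and the log shows worker threads starting on cores 1, 2 and 3: each set bit of the hexadecimal core mask selects one core (0xE = binary 1110, so bits 1-3). A small, hypothetical helper for decoding such a mask:

  mask=0xE   # the core mask passed to the reconnect example above
  for bit in $(seq 0 31); do
      # each set bit in the mask enables one core for the SPDK app
      if (( (mask >> bit) & 1 )); then
          echo "core $bit enabled"
      fi
  done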
00:11:26.159 Starting thread on core 1 with urgent priority queue 00:11:26.159 Starting thread on core 2 with urgent priority queue 00:11:26.159 Starting thread on core 3 with urgent priority queue 00:11:26.159 Starting thread on core 0 with urgent priority queue 00:11:26.159 SPDK bdev Controller (SPDK1 ) core 0: 8145.33 IO/s 12.28 secs/100000 ios 00:11:26.159 SPDK bdev Controller (SPDK1 ) core 1: 7448.00 IO/s 13.43 secs/100000 ios 00:11:26.159 SPDK bdev Controller (SPDK1 ) core 2: 9769.33 IO/s 10.24 secs/100000 ios 00:11:26.159 SPDK bdev Controller (SPDK1 ) core 3: 7893.33 IO/s 12.67 secs/100000 ios 00:11:26.159 ======================================================== 00:11:26.159 00:11:26.159 22:27:49 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:11:26.159 EAL: No free 2048 kB hugepages reported on node 1 00:11:26.159 [2024-07-15 22:27:49.983687] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:11:26.159 Initializing NVMe Controllers 00:11:26.159 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:11:26.159 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:11:26.159 Namespace ID: 1 size: 0GB 00:11:26.159 Initialization complete. 00:11:26.159 INFO: using host memory buffer for IO 00:11:26.159 Hello world! 00:11:26.159 [2024-07-15 22:27:50.019925] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:11:26.159 22:27:50 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:11:26.159 EAL: No free 2048 kB hugepages reported on node 1 00:11:26.418 [2024-07-15 22:27:50.290664] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:11:27.355 Initializing NVMe Controllers 00:11:27.355 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:11:27.355 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:11:27.355 Initialization complete. Launching workers. 
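In the arbitration summary above, the secs/100000 ios column is simply 100000 divided by the IO/s column (core 0: 100000 / 8145.33 gives 12.28). A quick re-derivation of all four rows, using the IO/s figures from the table:

  awk 'BEGIN {
      # IO/s per core 0-3, copied from the arbitration table above
      n = split("8145.33 7448.00 9769.33 7893.33", iops, " ")
      for (i = 1; i <= n; i++)
          printf "core %d: %.2f secs/100000 ios\n", i - 1, 100000 / iops[i]
  }'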
00:11:27.355 submit (in ns) avg, min, max = 6191.3, 3300.9, 3999584.3 00:11:27.355 complete (in ns) avg, min, max = 20193.3, 1804.3, 3999556.5 00:11:27.355 00:11:27.355 Submit histogram 00:11:27.355 ================ 00:11:27.355 Range in us Cumulative Count 00:11:27.355 3.297 - 3.311: 0.0306% ( 5) 00:11:27.355 3.311 - 3.325: 0.1467% ( 19) 00:11:27.355 3.325 - 3.339: 0.4950% ( 57) 00:11:27.355 3.339 - 3.353: 2.2979% ( 295) 00:11:27.355 3.353 - 3.367: 6.2886% ( 653) 00:11:27.355 3.367 - 3.381: 12.3266% ( 988) 00:11:27.355 3.381 - 3.395: 18.4441% ( 1001) 00:11:27.355 3.395 - 3.409: 25.0382% ( 1079) 00:11:27.355 3.409 - 3.423: 30.5384% ( 900) 00:11:27.355 3.423 - 3.437: 35.7392% ( 851) 00:11:27.355 3.437 - 3.450: 40.9644% ( 855) 00:11:27.355 3.450 - 3.464: 45.9696% ( 819) 00:11:27.355 3.464 - 3.478: 50.6753% ( 770) 00:11:27.355 3.478 - 3.492: 55.7905% ( 837) 00:11:27.355 3.492 - 3.506: 62.3358% ( 1071) 00:11:27.355 3.506 - 3.520: 67.7810% ( 891) 00:11:27.355 3.520 - 3.534: 72.3584% ( 749) 00:11:27.355 3.534 - 3.548: 77.7241% ( 878) 00:11:27.355 3.548 - 3.562: 81.6232% ( 638) 00:11:27.355 3.562 - 3.590: 86.0539% ( 725) 00:11:27.355 3.590 - 3.617: 87.5328% ( 242) 00:11:27.355 3.617 - 3.645: 88.1073% ( 94) 00:11:27.355 3.645 - 3.673: 89.5679% ( 239) 00:11:27.355 3.673 - 3.701: 91.2302% ( 272) 00:11:27.355 3.701 - 3.729: 93.0147% ( 292) 00:11:27.355 3.729 - 3.757: 94.6709% ( 271) 00:11:27.355 3.757 - 3.784: 96.4126% ( 285) 00:11:27.355 3.784 - 3.812: 97.7694% ( 222) 00:11:27.355 3.812 - 3.840: 98.5272% ( 124) 00:11:27.355 3.840 - 3.868: 98.9550% ( 70) 00:11:27.355 3.868 - 3.896: 99.2972% ( 56) 00:11:27.355 3.896 - 3.923: 99.4928% ( 32) 00:11:27.355 3.923 - 3.951: 99.5355% ( 7) 00:11:27.355 3.951 - 3.979: 99.5600% ( 4) 00:11:27.355 3.979 - 4.007: 99.5661% ( 1) 00:11:27.355 4.007 - 4.035: 99.5844% ( 3) 00:11:27.355 4.035 - 4.063: 99.5905% ( 1) 00:11:27.355 4.090 - 4.118: 99.5967% ( 1) 00:11:27.355 4.146 - 4.174: 99.6028% ( 1) 00:11:27.355 4.285 - 4.313: 99.6089% ( 1) 00:11:27.355 4.313 - 4.341: 99.6150% ( 1) 00:11:27.355 4.397 - 4.424: 99.6211% ( 1) 00:11:27.355 5.148 - 5.176: 99.6272% ( 1) 00:11:27.355 5.231 - 5.259: 99.6394% ( 2) 00:11:27.355 5.259 - 5.287: 99.6455% ( 1) 00:11:27.355 5.343 - 5.370: 99.6517% ( 1) 00:11:27.355 5.370 - 5.398: 99.6578% ( 1) 00:11:27.355 5.482 - 5.510: 99.6639% ( 1) 00:11:27.355 5.510 - 5.537: 99.6700% ( 1) 00:11:27.355 5.565 - 5.593: 99.6761% ( 1) 00:11:27.355 5.621 - 5.649: 99.6883% ( 2) 00:11:27.355 5.649 - 5.677: 99.7005% ( 2) 00:11:27.355 5.677 - 5.704: 99.7128% ( 2) 00:11:27.355 5.760 - 5.788: 99.7189% ( 1) 00:11:27.355 5.983 - 6.010: 99.7311% ( 2) 00:11:27.355 6.038 - 6.066: 99.7372% ( 1) 00:11:27.355 6.066 - 6.094: 99.7494% ( 2) 00:11:27.355 6.094 - 6.122: 99.7617% ( 2) 00:11:27.355 6.150 - 6.177: 99.7678% ( 1) 00:11:27.355 6.261 - 6.289: 99.7739% ( 1) 00:11:27.355 6.344 - 6.372: 99.7800% ( 1) 00:11:27.355 6.428 - 6.456: 99.7861% ( 1) 00:11:27.355 6.595 - 6.623: 99.7922% ( 1) 00:11:27.355 6.623 - 6.650: 99.7983% ( 1) 00:11:27.355 6.650 - 6.678: 99.8044% ( 1) 00:11:27.355 6.734 - 6.762: 99.8105% ( 1) 00:11:27.355 6.929 - 6.957: 99.8228% ( 2) 00:11:27.355 7.068 - 7.096: 99.8289% ( 1) 00:11:27.355 7.123 - 7.179: 99.8350% ( 1) 00:11:27.355 7.179 - 7.235: 99.8472% ( 2) 00:11:27.355 7.346 - 7.402: 99.8533% ( 1) 00:11:27.355 7.402 - 7.457: 99.8594% ( 1) 00:11:27.355 7.457 - 7.513: 99.8656% ( 1) 00:11:27.355 7.513 - 7.569: 99.8717% ( 1) 00:11:27.355 7.569 - 7.624: 99.8778% ( 1) 00:11:27.355 7.624 - 7.680: 99.8839% ( 1) 00:11:27.355 7.680 - 7.736: 99.8900% ( 1) 
00:11:27.355 7.736 - 7.791: 99.8961% ( 1) 00:11:27.355 7.791 - 7.847: 99.9022% ( 1) 00:11:27.355 [2024-07-15 22:27:51.320519] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:11:27.614 7.958 - 8.014: 99.9083% ( 1) 00:11:27.614 8.292 - 8.348: 99.9144% ( 1) 00:11:27.614 8.515 - 8.570: 99.9206% ( 1) 00:11:27.614 11.798 - 11.854: 99.9267% ( 1) 00:11:27.614 41.183 - 41.405: 99.9328% ( 1) 00:11:27.614 3989.148 - 4017.642: 100.0000% ( 11) 00:11:27.614 00:11:27.614 Complete histogram 00:11:27.614 ================== 00:11:27.614 Range in us Cumulative Count 00:11:27.614 1.795 - 1.809: 0.0428% ( 7) 00:11:27.615 1.809 - 1.823: 2.2856% ( 367) 00:11:27.615 1.823 - 1.837: 6.8203% ( 742) 00:11:27.615 1.837 - 1.850: 8.5987% ( 291) 00:11:27.615 1.850 - 1.864: 26.9999% ( 3011) 00:11:27.615 1.864 - 1.878: 77.1680% ( 8209) 00:11:27.615 1.878 - 1.892: 91.7436% ( 2385) 00:11:27.615 1.892 - 1.906: 95.0559% ( 542) 00:11:27.615 1.906 - 1.920: 96.1010% ( 171) 00:11:27.615 1.920 - 1.934: 96.6693% ( 93) 00:11:27.615 1.934 - 1.948: 97.9160% ( 204) 00:11:27.615 1.948 - 1.962: 98.8755% ( 157) 00:11:27.615 1.962 - 1.976: 99.1505% ( 45) 00:11:27.615 1.976 - 1.990: 99.2300% ( 13) 00:11:27.615 1.990 - 2.003: 99.2422% ( 2) 00:11:27.615 2.003 - 2.017: 99.2605% ( 3) 00:11:27.615 2.017 - 2.031: 99.2789% ( 3) 00:11:27.615 2.031 - 2.045: 99.2850% ( 1) 00:11:27.615 2.045 - 2.059: 99.3033% ( 3) 00:11:27.615 2.059 - 2.073: 99.3216% ( 3) 00:11:27.615 2.073 - 2.087: 99.3461% ( 4) 00:11:27.615 2.087 - 2.101: 99.3522% ( 1) 00:11:27.615 2.101 - 2.115: 99.3583% ( 1) 00:11:27.615 2.115 - 2.129: 99.3705% ( 2) 00:11:27.615 2.170 - 2.184: 99.3766% ( 1) 00:11:27.615 2.184 - 2.198: 99.3828% ( 1) 00:11:27.615 2.254 - 2.268: 99.3889% ( 1) 00:11:27.615 2.379 - 2.393: 99.4011% ( 2) 00:11:27.615 2.393 - 2.407: 99.4072% ( 1) 00:11:27.615 3.562 - 3.590: 99.4133% ( 1) 00:11:27.615 3.590 - 3.617: 99.4194% ( 1) 00:11:27.615 3.617 - 3.645: 99.4255% ( 1) 00:11:27.615 3.645 - 3.673: 99.4316% ( 1) 00:11:27.615 4.035 - 4.063: 99.4378% ( 1) 00:11:27.615 4.230 - 4.257: 99.4439% ( 1) 00:11:27.615 4.341 - 4.369: 99.4500% ( 1) 00:11:27.615 4.369 - 4.397: 99.4561% ( 1) 00:11:27.615 4.397 - 4.424: 99.4683% ( 2) 00:11:27.615 4.563 - 4.591: 99.4744% ( 1) 00:11:27.615 4.786 - 4.814: 99.4805% ( 1) 00:11:27.615 5.037 - 5.064: 99.4866% ( 1) 00:11:27.615 5.537 - 5.565: 99.4928% ( 1) 00:11:27.615 5.649 - 5.677: 99.5050% ( 2) 00:11:27.615 5.871 - 5.899: 99.5111% ( 1) 00:11:27.615 6.177 - 6.205: 99.5172% ( 1) 00:11:27.615 6.372 - 6.400: 99.5233% ( 1) 00:11:27.615 6.790 - 6.817: 99.5294% ( 1) 00:11:27.615 12.188 - 12.243: 99.5355% ( 1) 00:11:27.615 165.621 - 166.511: 99.5416% ( 1) 00:11:27.615 3989.148 - 4017.642: 100.0000% ( 75) 00:11:27.615 00:11:27.615 22:27:51 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:11:27.615 22:27:51 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:11:27.615 22:27:51 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:11:27.615 22:27:51 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:11:27.615 22:27:51 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:11:27.615 [ 00:11:27.615 { 00:11:27.615 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 
00:11:27.615 "subtype": "Discovery", 00:11:27.615 "listen_addresses": [], 00:11:27.615 "allow_any_host": true, 00:11:27.615 "hosts": [] 00:11:27.615 }, 00:11:27.615 { 00:11:27.615 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:11:27.615 "subtype": "NVMe", 00:11:27.615 "listen_addresses": [ 00:11:27.615 { 00:11:27.615 "trtype": "VFIOUSER", 00:11:27.615 "adrfam": "IPv4", 00:11:27.615 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:11:27.615 "trsvcid": "0" 00:11:27.615 } 00:11:27.615 ], 00:11:27.615 "allow_any_host": true, 00:11:27.615 "hosts": [], 00:11:27.615 "serial_number": "SPDK1", 00:11:27.615 "model_number": "SPDK bdev Controller", 00:11:27.615 "max_namespaces": 32, 00:11:27.615 "min_cntlid": 1, 00:11:27.615 "max_cntlid": 65519, 00:11:27.615 "namespaces": [ 00:11:27.615 { 00:11:27.615 "nsid": 1, 00:11:27.615 "bdev_name": "Malloc1", 00:11:27.615 "name": "Malloc1", 00:11:27.615 "nguid": "60DAAA6A2D2C4992B7AAB62D8023977B", 00:11:27.615 "uuid": "60daaa6a-2d2c-4992-b7aa-b62d8023977b" 00:11:27.615 } 00:11:27.615 ] 00:11:27.615 }, 00:11:27.615 { 00:11:27.615 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:11:27.615 "subtype": "NVMe", 00:11:27.615 "listen_addresses": [ 00:11:27.615 { 00:11:27.615 "trtype": "VFIOUSER", 00:11:27.615 "adrfam": "IPv4", 00:11:27.615 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:11:27.615 "trsvcid": "0" 00:11:27.615 } 00:11:27.615 ], 00:11:27.615 "allow_any_host": true, 00:11:27.615 "hosts": [], 00:11:27.615 "serial_number": "SPDK2", 00:11:27.615 "model_number": "SPDK bdev Controller", 00:11:27.615 "max_namespaces": 32, 00:11:27.615 "min_cntlid": 1, 00:11:27.615 "max_cntlid": 65519, 00:11:27.615 "namespaces": [ 00:11:27.615 { 00:11:27.615 "nsid": 1, 00:11:27.615 "bdev_name": "Malloc2", 00:11:27.615 "name": "Malloc2", 00:11:27.615 "nguid": "A61FA4CE46E44446A343D1C22B92EB02", 00:11:27.615 "uuid": "a61fa4ce-46e4-4446-a343-d1c22b92eb02" 00:11:27.615 } 00:11:27.615 ] 00:11:27.615 } 00:11:27.615 ] 00:11:27.615 22:27:51 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:11:27.615 22:27:51 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=4122181 00:11:27.615 22:27:51 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:11:27.615 22:27:51 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:11:27.615 22:27:51 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # local i=0 00:11:27.615 22:27:51 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:11:27.615 22:27:51 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:11:27.615 22:27:51 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0 00:11:27.615 22:27:51 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:11:27.615 22:27:51 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:11:27.874 EAL: No free 2048 kB hugepages reported on node 1 00:11:27.874 [2024-07-15 22:27:51.694679] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:11:27.874 Malloc3 00:11:27.874 22:27:51 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:11:28.134 [2024-07-15 22:27:51.915410] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:11:28.134 22:27:51 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:11:28.134 Asynchronous Event Request test 00:11:28.134 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:11:28.134 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:11:28.134 Registering asynchronous event callbacks... 00:11:28.134 Starting namespace attribute notice tests for all controllers... 00:11:28.134 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:11:28.134 aer_cb - Changed Namespace 00:11:28.134 Cleaning up... 00:11:28.134 [ 00:11:28.134 { 00:11:28.134 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:11:28.134 "subtype": "Discovery", 00:11:28.134 "listen_addresses": [], 00:11:28.134 "allow_any_host": true, 00:11:28.134 "hosts": [] 00:11:28.134 }, 00:11:28.134 { 00:11:28.134 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:11:28.134 "subtype": "NVMe", 00:11:28.134 "listen_addresses": [ 00:11:28.134 { 00:11:28.134 "trtype": "VFIOUSER", 00:11:28.134 "adrfam": "IPv4", 00:11:28.134 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:11:28.134 "trsvcid": "0" 00:11:28.134 } 00:11:28.134 ], 00:11:28.134 "allow_any_host": true, 00:11:28.134 "hosts": [], 00:11:28.134 "serial_number": "SPDK1", 00:11:28.134 "model_number": "SPDK bdev Controller", 00:11:28.134 "max_namespaces": 32, 00:11:28.134 "min_cntlid": 1, 00:11:28.134 "max_cntlid": 65519, 00:11:28.134 "namespaces": [ 00:11:28.134 { 00:11:28.134 "nsid": 1, 00:11:28.134 "bdev_name": "Malloc1", 00:11:28.134 "name": "Malloc1", 00:11:28.134 "nguid": "60DAAA6A2D2C4992B7AAB62D8023977B", 00:11:28.134 "uuid": "60daaa6a-2d2c-4992-b7aa-b62d8023977b" 00:11:28.134 }, 00:11:28.134 { 00:11:28.134 "nsid": 2, 00:11:28.134 "bdev_name": "Malloc3", 00:11:28.134 "name": "Malloc3", 00:11:28.134 "nguid": "C2ECE10E07474BE6BDB29DB5F5EA94C1", 00:11:28.134 "uuid": "c2ece10e-0747-4be6-bdb2-9db5f5ea94c1" 00:11:28.134 } 00:11:28.134 ] 00:11:28.134 }, 00:11:28.134 { 00:11:28.134 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:11:28.134 "subtype": "NVMe", 00:11:28.134 "listen_addresses": [ 00:11:28.134 { 00:11:28.134 "trtype": "VFIOUSER", 00:11:28.134 "adrfam": "IPv4", 00:11:28.134 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:11:28.134 "trsvcid": "0" 00:11:28.134 } 00:11:28.134 ], 00:11:28.134 "allow_any_host": true, 00:11:28.134 "hosts": [], 00:11:28.134 "serial_number": "SPDK2", 00:11:28.134 "model_number": "SPDK bdev Controller", 00:11:28.134 
"max_namespaces": 32, 00:11:28.134 "min_cntlid": 1, 00:11:28.134 "max_cntlid": 65519, 00:11:28.134 "namespaces": [ 00:11:28.134 { 00:11:28.134 "nsid": 1, 00:11:28.134 "bdev_name": "Malloc2", 00:11:28.134 "name": "Malloc2", 00:11:28.134 "nguid": "A61FA4CE46E44446A343D1C22B92EB02", 00:11:28.134 "uuid": "a61fa4ce-46e4-4446-a343-d1c22b92eb02" 00:11:28.134 } 00:11:28.134 ] 00:11:28.134 } 00:11:28.134 ] 00:11:28.395 22:27:52 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 4122181 00:11:28.395 22:27:52 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:11:28.395 22:27:52 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:11:28.395 22:27:52 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:11:28.396 22:27:52 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:11:28.396 [2024-07-15 22:27:52.148881] Starting SPDK v24.09-pre git sha1 f8598a71f / DPDK 24.03.0 initialization... 00:11:28.396 [2024-07-15 22:27:52.148913] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4122196 ] 00:11:28.396 EAL: No free 2048 kB hugepages reported on node 1 00:11:28.396 [2024-07-15 22:27:52.176634] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:11:28.396 [2024-07-15 22:27:52.180093] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:11:28.396 [2024-07-15 22:27:52.180114] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f11edc12000 00:11:28.396 [2024-07-15 22:27:52.181093] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:11:28.396 [2024-07-15 22:27:52.182100] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:11:28.396 [2024-07-15 22:27:52.183110] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:11:28.396 [2024-07-15 22:27:52.184114] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:11:28.396 [2024-07-15 22:27:52.185122] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:11:28.396 [2024-07-15 22:27:52.186130] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:11:28.396 [2024-07-15 22:27:52.187144] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:11:28.396 [2024-07-15 22:27:52.188147] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:11:28.396 [2024-07-15 22:27:52.189161] vfio_user_pci.c: 
304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:11:28.396 [2024-07-15 22:27:52.189171] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f11edc07000 00:11:28.396 [2024-07-15 22:27:52.190110] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:11:28.396 [2024-07-15 22:27:52.201629] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:11:28.396 [2024-07-15 22:27:52.201657] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to connect adminq (no timeout) 00:11:28.396 [2024-07-15 22:27:52.206743] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:11:28.396 [2024-07-15 22:27:52.206779] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:11:28.396 [2024-07-15 22:27:52.206849] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for connect adminq (no timeout) 00:11:28.396 [2024-07-15 22:27:52.206865] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs (no timeout) 00:11:28.396 [2024-07-15 22:27:52.206870] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs wait for vs (no timeout) 00:11:28.396 [2024-07-15 22:27:52.207750] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:11:28.396 [2024-07-15 22:27:52.207760] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap (no timeout) 00:11:28.396 [2024-07-15 22:27:52.207767] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap wait for cap (no timeout) 00:11:28.396 [2024-07-15 22:27:52.208761] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:11:28.396 [2024-07-15 22:27:52.208770] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en (no timeout) 00:11:28.396 [2024-07-15 22:27:52.208776] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en wait for cc (timeout 15000 ms) 00:11:28.396 [2024-07-15 22:27:52.209764] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:11:28.396 [2024-07-15 22:27:52.209773] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:11:28.396 [2024-07-15 22:27:52.210772] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:11:28.396 [2024-07-15 22:27:52.210780] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 0 && CSTS.RDY = 0 00:11:28.396 [2024-07-15 22:27:52.210785] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2] setting state to controller is disabled (timeout 15000 ms) 00:11:28.396 [2024-07-15 22:27:52.210791] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:11:28.396 [2024-07-15 22:27:52.210895] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Setting CC.EN = 1 00:11:28.396 [2024-07-15 22:27:52.210900] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:11:28.396 [2024-07-15 22:27:52.210904] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:11:28.396 [2024-07-15 22:27:52.211781] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:11:28.396 [2024-07-15 22:27:52.212781] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:11:28.396 [2024-07-15 22:27:52.213796] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:11:28.396 [2024-07-15 22:27:52.214796] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:11:28.396 [2024-07-15 22:27:52.214832] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:11:28.396 [2024-07-15 22:27:52.215810] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:11:28.396 [2024-07-15 22:27:52.215818] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:11:28.396 [2024-07-15 22:27:52.215822] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to reset admin queue (timeout 30000 ms) 00:11:28.396 [2024-07-15 22:27:52.215839] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller (no timeout) 00:11:28.396 [2024-07-15 22:27:52.215845] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify controller (timeout 30000 ms) 00:11:28.396 [2024-07-15 22:27:52.215856] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:11:28.396 [2024-07-15 22:27:52.215861] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:11:28.396 [2024-07-15 22:27:52.215872] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:11:28.396 [2024-07-15 22:27:52.223232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:11:28.396 [2024-07-15 22:27:52.223242] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_xfer_size 131072 00:11:28.396 [2024-07-15 22:27:52.223251] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2] MDTS max_xfer_size 131072 00:11:28.396 [2024-07-15 22:27:52.223255] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CNTLID 0x0001 00:11:28.396 [2024-07-15 22:27:52.223259] nvme_ctrlr.c:2071:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:11:28.396 [2024-07-15 22:27:52.223264] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_sges 1 00:11:28.396 [2024-07-15 22:27:52.223268] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] fuses compare and write: 1 00:11:28.396 [2024-07-15 22:27:52.223272] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to configure AER (timeout 30000 ms) 00:11:28.396 [2024-07-15 22:27:52.223279] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for configure aer (timeout 30000 ms) 00:11:28.396 [2024-07-15 22:27:52.223288] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:11:28.396 [2024-07-15 22:27:52.231231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:11:28.396 [2024-07-15 22:27:52.231245] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:28.396 [2024-07-15 22:27:52.231252] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:28.396 [2024-07-15 22:27:52.231260] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:28.396 [2024-07-15 22:27:52.231267] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:28.396 [2024-07-15 22:27:52.231271] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set keep alive timeout (timeout 30000 ms) 00:11:28.396 [2024-07-15 22:27:52.231279] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:11:28.396 [2024-07-15 22:27:52.231288] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:11:28.396 [2024-07-15 22:27:52.239229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:11:28.396 [2024-07-15 22:27:52.239237] nvme_ctrlr.c:3010:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Controller adjusted keep alive timeout to 0 ms 00:11:28.396 [2024-07-15 22:27:52.239241] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller iocs specific (timeout 30000 ms) 00:11:28.396 [2024-07-15 22:27:52.239247] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set number of queues (timeout 30000 ms) 00:11:28.396 [2024-07-15 22:27:52.239252] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: 
*DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set number of queues (timeout 30000 ms) 00:11:28.396 [2024-07-15 22:27:52.239260] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:11:28.396 [2024-07-15 22:27:52.247228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:11:28.396 [2024-07-15 22:27:52.247279] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify active ns (timeout 30000 ms) 00:11:28.396 [2024-07-15 22:27:52.247286] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify active ns (timeout 30000 ms) 00:11:28.396 [2024-07-15 22:27:52.247296] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:11:28.396 [2024-07-15 22:27:52.247300] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:11:28.396 [2024-07-15 22:27:52.247306] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:11:28.396 [2024-07-15 22:27:52.255230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:11:28.396 [2024-07-15 22:27:52.255240] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Namespace 1 was added 00:11:28.397 [2024-07-15 22:27:52.255252] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns (timeout 30000 ms) 00:11:28.397 [2024-07-15 22:27:52.255258] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify ns (timeout 30000 ms) 00:11:28.397 [2024-07-15 22:27:52.255264] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:11:28.397 [2024-07-15 22:27:52.255268] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:11:28.397 [2024-07-15 22:27:52.255274] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:11:28.397 [2024-07-15 22:27:52.263233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:11:28.397 [2024-07-15 22:27:52.263246] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify namespace id descriptors (timeout 30000 ms) 00:11:28.397 [2024-07-15 22:27:52.263253] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:11:28.397 [2024-07-15 22:27:52.263260] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:11:28.397 [2024-07-15 22:27:52.263263] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:11:28.397 [2024-07-15 22:27:52.263269] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:11:28.397 [2024-07-15 22:27:52.271230] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:11:28.397 [2024-07-15 22:27:52.271239] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns iocs specific (timeout 30000 ms) 00:11:28.397 [2024-07-15 22:27:52.271245] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported log pages (timeout 30000 ms) 00:11:28.397 [2024-07-15 22:27:52.271253] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported features (timeout 30000 ms) 00:11:28.397 [2024-07-15 22:27:52.271259] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host behavior support feature (timeout 30000 ms) 00:11:28.397 [2024-07-15 22:27:52.271263] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set doorbell buffer config (timeout 30000 ms) 00:11:28.397 [2024-07-15 22:27:52.271268] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host ID (timeout 30000 ms) 00:11:28.397 [2024-07-15 22:27:52.271272] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] NVMe-oF transport - not sending Set Features - Host ID 00:11:28.397 [2024-07-15 22:27:52.271276] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to transport ready (timeout 30000 ms) 00:11:28.397 [2024-07-15 22:27:52.271284] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to ready (no timeout) 00:11:28.397 [2024-07-15 22:27:52.271299] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:11:28.397 [2024-07-15 22:27:52.279230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:11:28.397 [2024-07-15 22:27:52.279243] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:11:28.397 [2024-07-15 22:27:52.287230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:11:28.397 [2024-07-15 22:27:52.287242] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:11:28.397 [2024-07-15 22:27:52.295229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:11:28.397 [2024-07-15 22:27:52.295241] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:11:28.397 [2024-07-15 22:27:52.303232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:11:28.397 [2024-07-15 22:27:52.303246] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:11:28.397 [2024-07-15 22:27:52.303251] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:11:28.397 [2024-07-15 22:27:52.303254] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 
00:11:28.397 [2024-07-15 22:27:52.303257] nvme_pcie_common.c:1254:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:11:28.397 [2024-07-15 22:27:52.303263] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:11:28.397 [2024-07-15 22:27:52.303270] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:11:28.397 [2024-07-15 22:27:52.303274] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:11:28.397 [2024-07-15 22:27:52.303279] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:11:28.397 [2024-07-15 22:27:52.303285] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:11:28.397 [2024-07-15 22:27:52.303289] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:11:28.397 [2024-07-15 22:27:52.303294] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:11:28.397 [2024-07-15 22:27:52.303301] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:11:28.397 [2024-07-15 22:27:52.303305] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:11:28.397 [2024-07-15 22:27:52.303310] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:11:28.397 [2024-07-15 22:27:52.311231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:11:28.397 [2024-07-15 22:27:52.311247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:11:28.397 [2024-07-15 22:27:52.311256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:11:28.397 [2024-07-15 22:27:52.311262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:11:28.397 ===================================================== 00:11:28.397 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:11:28.397 ===================================================== 00:11:28.397 Controller Capabilities/Features 00:11:28.397 ================================ 00:11:28.397 Vendor ID: 4e58 00:11:28.397 Subsystem Vendor ID: 4e58 00:11:28.397 Serial Number: SPDK2 00:11:28.397 Model Number: SPDK bdev Controller 00:11:28.397 Firmware Version: 24.09 00:11:28.397 Recommended Arb Burst: 6 00:11:28.397 IEEE OUI Identifier: 8d 6b 50 00:11:28.397 Multi-path I/O 00:11:28.397 May have multiple subsystem ports: Yes 00:11:28.397 May have multiple controllers: Yes 00:11:28.397 Associated with SR-IOV VF: No 00:11:28.397 Max Data Transfer Size: 131072 00:11:28.397 Max Number of Namespaces: 32 00:11:28.397 Max Number of I/O Queues: 127 00:11:28.397 NVMe Specification Version (VS): 1.3 00:11:28.397 NVMe Specification Version (Identify): 1.3 00:11:28.397 Maximum Queue Entries: 256 00:11:28.397 Contiguous Queues Required: Yes 00:11:28.397 Arbitration Mechanisms 
Supported 00:11:28.397 Weighted Round Robin: Not Supported 00:11:28.397 Vendor Specific: Not Supported 00:11:28.397 Reset Timeout: 15000 ms 00:11:28.397 Doorbell Stride: 4 bytes 00:11:28.397 NVM Subsystem Reset: Not Supported 00:11:28.397 Command Sets Supported 00:11:28.397 NVM Command Set: Supported 00:11:28.397 Boot Partition: Not Supported 00:11:28.397 Memory Page Size Minimum: 4096 bytes 00:11:28.397 Memory Page Size Maximum: 4096 bytes 00:11:28.397 Persistent Memory Region: Not Supported 00:11:28.397 Optional Asynchronous Events Supported 00:11:28.397 Namespace Attribute Notices: Supported 00:11:28.397 Firmware Activation Notices: Not Supported 00:11:28.397 ANA Change Notices: Not Supported 00:11:28.397 PLE Aggregate Log Change Notices: Not Supported 00:11:28.397 LBA Status Info Alert Notices: Not Supported 00:11:28.397 EGE Aggregate Log Change Notices: Not Supported 00:11:28.397 Normal NVM Subsystem Shutdown event: Not Supported 00:11:28.397 Zone Descriptor Change Notices: Not Supported 00:11:28.397 Discovery Log Change Notices: Not Supported 00:11:28.397 Controller Attributes 00:11:28.397 128-bit Host Identifier: Supported 00:11:28.397 Non-Operational Permissive Mode: Not Supported 00:11:28.397 NVM Sets: Not Supported 00:11:28.397 Read Recovery Levels: Not Supported 00:11:28.397 Endurance Groups: Not Supported 00:11:28.397 Predictable Latency Mode: Not Supported 00:11:28.397 Traffic Based Keep ALive: Not Supported 00:11:28.397 Namespace Granularity: Not Supported 00:11:28.397 SQ Associations: Not Supported 00:11:28.397 UUID List: Not Supported 00:11:28.397 Multi-Domain Subsystem: Not Supported 00:11:28.397 Fixed Capacity Management: Not Supported 00:11:28.397 Variable Capacity Management: Not Supported 00:11:28.397 Delete Endurance Group: Not Supported 00:11:28.397 Delete NVM Set: Not Supported 00:11:28.397 Extended LBA Formats Supported: Not Supported 00:11:28.397 Flexible Data Placement Supported: Not Supported 00:11:28.397 00:11:28.397 Controller Memory Buffer Support 00:11:28.397 ================================ 00:11:28.397 Supported: No 00:11:28.397 00:11:28.397 Persistent Memory Region Support 00:11:28.397 ================================ 00:11:28.397 Supported: No 00:11:28.397 00:11:28.397 Admin Command Set Attributes 00:11:28.397 ============================ 00:11:28.397 Security Send/Receive: Not Supported 00:11:28.397 Format NVM: Not Supported 00:11:28.397 Firmware Activate/Download: Not Supported 00:11:28.397 Namespace Management: Not Supported 00:11:28.397 Device Self-Test: Not Supported 00:11:28.397 Directives: Not Supported 00:11:28.397 NVMe-MI: Not Supported 00:11:28.397 Virtualization Management: Not Supported 00:11:28.397 Doorbell Buffer Config: Not Supported 00:11:28.397 Get LBA Status Capability: Not Supported 00:11:28.397 Command & Feature Lockdown Capability: Not Supported 00:11:28.397 Abort Command Limit: 4 00:11:28.397 Async Event Request Limit: 4 00:11:28.397 Number of Firmware Slots: N/A 00:11:28.397 Firmware Slot 1 Read-Only: N/A 00:11:28.397 Firmware Activation Without Reset: N/A 00:11:28.397 Multiple Update Detection Support: N/A 00:11:28.397 Firmware Update Granularity: No Information Provided 00:11:28.397 Per-Namespace SMART Log: No 00:11:28.397 Asymmetric Namespace Access Log Page: Not Supported 00:11:28.397 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:11:28.398 Command Effects Log Page: Supported 00:11:28.398 Get Log Page Extended Data: Supported 00:11:28.398 Telemetry Log Pages: Not Supported 00:11:28.398 Persistent Event Log Pages: Not Supported 
00:11:28.398 Supported Log Pages Log Page: May Support 00:11:28.398 Commands Supported & Effects Log Page: Not Supported 00:11:28.398 Feature Identifiers & Effects Log Page:May Support 00:11:28.398 NVMe-MI Commands & Effects Log Page: May Support 00:11:28.398 Data Area 4 for Telemetry Log: Not Supported 00:11:28.398 Error Log Page Entries Supported: 128 00:11:28.398 Keep Alive: Supported 00:11:28.398 Keep Alive Granularity: 10000 ms 00:11:28.398 00:11:28.398 NVM Command Set Attributes 00:11:28.398 ========================== 00:11:28.398 Submission Queue Entry Size 00:11:28.398 Max: 64 00:11:28.398 Min: 64 00:11:28.398 Completion Queue Entry Size 00:11:28.398 Max: 16 00:11:28.398 Min: 16 00:11:28.398 Number of Namespaces: 32 00:11:28.398 Compare Command: Supported 00:11:28.398 Write Uncorrectable Command: Not Supported 00:11:28.398 Dataset Management Command: Supported 00:11:28.398 Write Zeroes Command: Supported 00:11:28.398 Set Features Save Field: Not Supported 00:11:28.398 Reservations: Not Supported 00:11:28.398 Timestamp: Not Supported 00:11:28.398 Copy: Supported 00:11:28.398 Volatile Write Cache: Present 00:11:28.398 Atomic Write Unit (Normal): 1 00:11:28.398 Atomic Write Unit (PFail): 1 00:11:28.398 Atomic Compare & Write Unit: 1 00:11:28.398 Fused Compare & Write: Supported 00:11:28.398 Scatter-Gather List 00:11:28.398 SGL Command Set: Supported (Dword aligned) 00:11:28.398 SGL Keyed: Not Supported 00:11:28.398 SGL Bit Bucket Descriptor: Not Supported 00:11:28.398 SGL Metadata Pointer: Not Supported 00:11:28.398 Oversized SGL: Not Supported 00:11:28.398 SGL Metadata Address: Not Supported 00:11:28.398 SGL Offset: Not Supported 00:11:28.398 Transport SGL Data Block: Not Supported 00:11:28.398 Replay Protected Memory Block: Not Supported 00:11:28.398 00:11:28.398 Firmware Slot Information 00:11:28.398 ========================= 00:11:28.398 Active slot: 1 00:11:28.398 Slot 1 Firmware Revision: 24.09 00:11:28.398 00:11:28.398 00:11:28.398 Commands Supported and Effects 00:11:28.398 ============================== 00:11:28.398 Admin Commands 00:11:28.398 -------------- 00:11:28.398 Get Log Page (02h): Supported 00:11:28.398 Identify (06h): Supported 00:11:28.398 Abort (08h): Supported 00:11:28.398 Set Features (09h): Supported 00:11:28.398 Get Features (0Ah): Supported 00:11:28.398 Asynchronous Event Request (0Ch): Supported 00:11:28.398 Keep Alive (18h): Supported 00:11:28.398 I/O Commands 00:11:28.398 ------------ 00:11:28.398 Flush (00h): Supported LBA-Change 00:11:28.398 Write (01h): Supported LBA-Change 00:11:28.398 Read (02h): Supported 00:11:28.398 Compare (05h): Supported 00:11:28.398 Write Zeroes (08h): Supported LBA-Change 00:11:28.398 Dataset Management (09h): Supported LBA-Change 00:11:28.398 Copy (19h): Supported LBA-Change 00:11:28.398 00:11:28.398 Error Log 00:11:28.398 ========= 00:11:28.398 00:11:28.398 Arbitration 00:11:28.398 =========== 00:11:28.398 Arbitration Burst: 1 00:11:28.398 00:11:28.398 Power Management 00:11:28.398 ================ 00:11:28.398 Number of Power States: 1 00:11:28.398 Current Power State: Power State #0 00:11:28.398 Power State #0: 00:11:28.398 Max Power: 0.00 W 00:11:28.398 Non-Operational State: Operational 00:11:28.398 Entry Latency: Not Reported 00:11:28.398 Exit Latency: Not Reported 00:11:28.398 Relative Read Throughput: 0 00:11:28.398 Relative Read Latency: 0 00:11:28.398 Relative Write Throughput: 0 00:11:28.398 Relative Write Latency: 0 00:11:28.398 Idle Power: Not Reported 00:11:28.398 Active Power: Not Reported 00:11:28.398 
Non-Operational Permissive Mode: Not Supported 00:11:28.398 00:11:28.398 Health Information 00:11:28.398 ================== 00:11:28.398 Critical Warnings: 00:11:28.398 Available Spare Space: OK 00:11:28.398 Temperature: OK 00:11:28.398 Device Reliability: OK 00:11:28.398 Read Only: No 00:11:28.398 Volatile Memory Backup: OK 00:11:28.398 Current Temperature: 0 Kelvin (-273 Celsius) 00:11:28.398 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:11:28.398 Available Spare: 0% 00:11:28.398 Available Spare Threshold: 0% 00:11:28.398 [2024-07-15 22:27:52.311352] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:11:28.398 [2024-07-15 22:27:52.319231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:11:28.398 [2024-07-15 22:27:52.319261] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Prepare to destruct SSD 00:11:28.398 [2024-07-15 22:27:52.319269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:28.398 [2024-07-15 22:27:52.319275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:28.398 [2024-07-15 22:27:52.319281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:28.398 [2024-07-15 22:27:52.319286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:28.398 [2024-07-15 22:27:52.319329] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:11:28.398 [2024-07-15 22:27:52.319338] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:11:28.398 [2024-07-15 22:27:52.320331] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:11:28.398 [2024-07-15 22:27:52.320373] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] RTD3E = 0 us 00:11:28.398 [2024-07-15 22:27:52.320379] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown timeout = 10000 ms 00:11:28.398 [2024-07-15 22:27:52.321339] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:11:28.398 [2024-07-15 22:27:52.321350] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown complete in 0 milliseconds 00:11:28.398 [2024-07-15 22:27:52.321395] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:11:28.398 [2024-07-15 22:27:52.322375] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:11:28.398 Life Percentage Used: 0% 00:11:28.398 Data Units Read: 0 00:11:28.398 Data Units Written: 0 00:11:28.398 Host Read Commands: 0 00:11:28.398 Host Write Commands: 0 00:11:28.398 Controller Busy Time: 0 minutes 00:11:28.398 Power Cycles: 0 00:11:28.398 Power On Hours: 0 hours 00:11:28.398 Unsafe Shutdowns: 0 00:11:28.398 Unrecoverable Media 
Errors: 0 00:11:28.398 Lifetime Error Log Entries: 0 00:11:28.398 Warning Temperature Time: 0 minutes 00:11:28.398 Critical Temperature Time: 0 minutes 00:11:28.398 00:11:28.398 Number of Queues 00:11:28.398 ================ 00:11:28.398 Number of I/O Submission Queues: 127 00:11:28.398 Number of I/O Completion Queues: 127 00:11:28.398 00:11:28.398 Active Namespaces 00:11:28.398 ================= 00:11:28.398 Namespace ID:1 00:11:28.398 Error Recovery Timeout: Unlimited 00:11:28.398 Command Set Identifier: NVM (00h) 00:11:28.398 Deallocate: Supported 00:11:28.398 Deallocated/Unwritten Error: Not Supported 00:11:28.398 Deallocated Read Value: Unknown 00:11:28.398 Deallocate in Write Zeroes: Not Supported 00:11:28.398 Deallocated Guard Field: 0xFFFF 00:11:28.398 Flush: Supported 00:11:28.398 Reservation: Supported 00:11:28.398 Namespace Sharing Capabilities: Multiple Controllers 00:11:28.398 Size (in LBAs): 131072 (0GiB) 00:11:28.398 Capacity (in LBAs): 131072 (0GiB) 00:11:28.398 Utilization (in LBAs): 131072 (0GiB) 00:11:28.398 NGUID: A61FA4CE46E44446A343D1C22B92EB02 00:11:28.398 UUID: a61fa4ce-46e4-4446-a343-d1c22b92eb02 00:11:28.398 Thin Provisioning: Not Supported 00:11:28.398 Per-NS Atomic Units: Yes 00:11:28.398 Atomic Boundary Size (Normal): 0 00:11:28.398 Atomic Boundary Size (PFail): 0 00:11:28.398 Atomic Boundary Offset: 0 00:11:28.398 Maximum Single Source Range Length: 65535 00:11:28.398 Maximum Copy Length: 65535 00:11:28.398 Maximum Source Range Count: 1 00:11:28.398 NGUID/EUI64 Never Reused: No 00:11:28.398 Namespace Write Protected: No 00:11:28.398 Number of LBA Formats: 1 00:11:28.398 Current LBA Format: LBA Format #00 00:11:28.398 LBA Format #00: Data Size: 512 Metadata Size: 0 00:11:28.398 00:11:28.398 22:27:52 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:11:28.658 EAL: No free 2048 kB hugepages reported on node 1 00:11:28.658 [2024-07-15 22:27:52.538574] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:11:33.931 Initializing NVMe Controllers 00:11:33.931 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:11:33.931 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:11:33.931 Initialization complete. Launching workers. 
00:11:33.931 ======================================================== 00:11:33.931 Latency(us) 00:11:33.931 Device Information : IOPS MiB/s Average min max 00:11:33.931 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39951.28 156.06 3203.73 960.94 7600.65 00:11:33.931 ======================================================== 00:11:33.931 Total : 39951.28 156.06 3203.73 960.94 7600.65 00:11:33.931 00:11:33.931 [2024-07-15 22:27:57.644472] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:11:33.931 22:27:57 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:11:33.931 EAL: No free 2048 kB hugepages reported on node 1 00:11:33.931 [2024-07-15 22:27:57.859087] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:11:39.225 Initializing NVMe Controllers 00:11:39.225 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:11:39.225 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:11:39.225 Initialization complete. Launching workers. 00:11:39.225 ======================================================== 00:11:39.225 Latency(us) 00:11:39.226 Device Information : IOPS MiB/s Average min max 00:11:39.226 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39916.40 155.92 3206.50 963.59 7467.48 00:11:39.226 ======================================================== 00:11:39.226 Total : 39916.40 155.92 3206.50 963.59 7467.48 00:11:39.226 00:11:39.226 [2024-07-15 22:28:02.881897] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:11:39.226 22:28:02 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:11:39.226 EAL: No free 2048 kB hugepages reported on node 1 00:11:39.226 [2024-07-15 22:28:03.067300] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:11:44.497 [2024-07-15 22:28:08.213320] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:11:44.497 Initializing NVMe Controllers 00:11:44.497 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:11:44.497 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:11:44.497 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:11:44.497 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:11:44.497 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:11:44.497 Initialization complete. Launching workers. 
00:11:44.497 Starting thread on core 2 00:11:44.497 Starting thread on core 3 00:11:44.497 Starting thread on core 1 00:11:44.497 22:28:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:11:44.497 EAL: No free 2048 kB hugepages reported on node 1 00:11:44.755 [2024-07-15 22:28:08.491674] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:11:48.043 [2024-07-15 22:28:11.551435] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:11:48.043 Initializing NVMe Controllers 00:11:48.043 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:11:48.043 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:11:48.043 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:11:48.043 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:11:48.043 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:11:48.043 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:11:48.043 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:11:48.043 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:11:48.043 Initialization complete. Launching workers. 00:11:48.043 Starting thread on core 1 with urgent priority queue 00:11:48.043 Starting thread on core 2 with urgent priority queue 00:11:48.043 Starting thread on core 3 with urgent priority queue 00:11:48.043 Starting thread on core 0 with urgent priority queue 00:11:48.043 SPDK bdev Controller (SPDK2 ) core 0: 8163.67 IO/s 12.25 secs/100000 ios 00:11:48.043 SPDK bdev Controller (SPDK2 ) core 1: 9639.00 IO/s 10.37 secs/100000 ios 00:11:48.043 SPDK bdev Controller (SPDK2 ) core 2: 9906.00 IO/s 10.09 secs/100000 ios 00:11:48.043 SPDK bdev Controller (SPDK2 ) core 3: 8687.00 IO/s 11.51 secs/100000 ios 00:11:48.043 ======================================================== 00:11:48.043 00:11:48.043 22:28:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:11:48.043 EAL: No free 2048 kB hugepages reported on node 1 00:11:48.043 [2024-07-15 22:28:11.820500] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:11:48.043 Initializing NVMe Controllers 00:11:48.043 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:11:48.043 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:11:48.043 Namespace ID: 1 size: 0GB 00:11:48.043 Initialization complete. 00:11:48.043 INFO: using host memory buffer for IO 00:11:48.043 Hello world! 
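All of the client tools exercised above (spdk_nvme_perf, reconnect, arbitration, hello_world) address the controller through the same -r transport ID string. A minimal sketch of invoking one of them by hand, with paths exactly as in this workspace; the TRID variable is illustrative and not part of the test script, and -s/-g are simply carried over from the logged runs:

  # vfio-user transport ID: trtype picks the transport, traddr is the
  # controller's socket directory, subnqn is the target subsystem NQN.
  TRID='trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2'

  # Same shape as the read run above: 4096-byte I/O (-o), queue depth 128 (-q),
  # 5 seconds (-t), core mask 0x2 (-c, i.e. core 1 only).
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf \
      -r "$TRID" -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2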
00:11:48.043 [2024-07-15 22:28:11.833581] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:11:48.043 22:28:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:11:48.043 EAL: No free 2048 kB hugepages reported on node 1 00:11:48.302 [2024-07-15 22:28:12.100155] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:11:49.240 Initializing NVMe Controllers 00:11:49.240 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:11:49.240 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:11:49.240 Initialization complete. Launching workers. 00:11:49.240 submit (in ns) avg, min, max = 5697.8, 3277.4, 4001251.3 00:11:49.240 complete (in ns) avg, min, max = 18587.9, 1809.6, 4002342.6 00:11:49.240 00:11:49.240 Submit histogram 00:11:49.240 ================ 00:11:49.240 Range in us Cumulative Count 00:11:49.240 3.270 - 3.283: 0.0246% ( 4) 00:11:49.240 3.283 - 3.297: 0.0800% ( 9) 00:11:49.240 3.297 - 3.311: 0.1968% ( 19) 00:11:49.240 3.311 - 3.325: 0.8180% ( 101) 00:11:49.240 3.325 - 3.339: 3.3210% ( 407) 00:11:49.240 3.339 - 3.353: 8.2226% ( 797) 00:11:49.240 3.353 - 3.367: 14.0037% ( 940) 00:11:49.240 3.367 - 3.381: 20.1353% ( 997) 00:11:49.240 3.381 - 3.395: 26.3592% ( 1012) 00:11:49.240 3.395 - 3.409: 31.9004% ( 901) 00:11:49.240 3.409 - 3.423: 36.8020% ( 797) 00:11:49.240 3.423 - 3.437: 42.5646% ( 937) 00:11:49.240 3.437 - 3.450: 47.3985% ( 786) 00:11:49.240 3.450 - 3.464: 51.0271% ( 590) 00:11:49.240 3.464 - 3.478: 55.1784% ( 675) 00:11:49.240 3.478 - 3.492: 62.3616% ( 1168) 00:11:49.240 3.492 - 3.506: 68.4932% ( 997) 00:11:49.240 3.506 - 3.520: 72.1341% ( 592) 00:11:49.240 3.520 - 3.534: 77.2448% ( 831) 00:11:49.240 3.534 - 3.548: 82.1095% ( 791) 00:11:49.240 3.548 - 3.562: 84.6617% ( 415) 00:11:49.240 3.562 - 3.590: 86.8819% ( 361) 00:11:49.240 3.590 - 3.617: 87.6076% ( 118) 00:11:49.240 3.617 - 3.645: 88.6531% ( 170) 00:11:49.240 3.645 - 3.673: 90.4059% ( 285) 00:11:49.240 3.673 - 3.701: 92.2079% ( 293) 00:11:49.240 3.701 - 3.729: 93.6900% ( 241) 00:11:49.240 3.729 - 3.757: 95.5412% ( 301) 00:11:49.240 3.757 - 3.784: 97.0787% ( 250) 00:11:49.240 3.784 - 3.812: 98.1119% ( 168) 00:11:49.240 3.812 - 3.840: 98.7392% ( 102) 00:11:49.240 3.840 - 3.868: 99.2251% ( 79) 00:11:49.240 3.868 - 3.896: 99.4895% ( 43) 00:11:49.240 3.896 - 3.923: 99.5818% ( 15) 00:11:49.240 3.923 - 3.951: 99.5941% ( 2) 00:11:49.240 3.951 - 3.979: 99.6125% ( 3) 00:11:49.240 3.979 - 4.007: 99.6187% ( 1) 00:11:49.240 4.118 - 4.146: 99.6248% ( 1) 00:11:49.241 4.786 - 4.814: 99.6310% ( 1) 00:11:49.241 5.064 - 5.092: 99.6371% ( 1) 00:11:49.241 5.120 - 5.148: 99.6556% ( 3) 00:11:49.241 5.176 - 5.203: 99.6617% ( 1) 00:11:49.241 5.203 - 5.231: 99.6679% ( 1) 00:11:49.241 5.231 - 5.259: 99.6740% ( 1) 00:11:49.241 5.315 - 5.343: 99.6802% ( 1) 00:11:49.241 5.343 - 5.370: 99.6863% ( 1) 00:11:49.241 5.370 - 5.398: 99.6925% ( 1) 00:11:49.241 5.398 - 5.426: 99.7048% ( 2) 00:11:49.241 5.454 - 5.482: 99.7109% ( 1) 00:11:49.241 5.537 - 5.565: 99.7232% ( 2) 00:11:49.241 5.593 - 5.621: 99.7294% ( 1) 00:11:49.241 5.704 - 5.732: 99.7355% ( 1) 00:11:49.241 5.732 - 5.760: 99.7417% ( 1) 00:11:49.241 5.760 - 5.788: 99.7601% ( 3) 00:11:49.241 5.788 - 5.816: 99.7663% ( 1) 00:11:49.241 5.816 - 5.843: 99.7724% ( 1) 00:11:49.241 5.871 - 
5.899: 99.7786% ( 1) 00:11:49.241 5.927 - 5.955: 99.7909% ( 2) 00:11:49.241 5.955 - 5.983: 99.7970% ( 1) 00:11:49.241 5.983 - 6.010: 99.8093% ( 2) 00:11:49.241 6.038 - 6.066: 99.8155% ( 1) 00:11:49.241 6.094 - 6.122: 99.8278% ( 2) 00:11:49.241 6.289 - 6.317: 99.8339% ( 1) 00:11:49.241 6.317 - 6.344: 99.8401% ( 1) 00:11:49.241 6.483 - 6.511: 99.8462% ( 1) 00:11:49.241 6.539 - 6.567: 99.8524% ( 1) 00:11:49.241 6.567 - 6.595: 99.8585% ( 1) 00:11:49.241 6.623 - 6.650: 99.8647% ( 1) 00:11:49.241 6.650 - 6.678: 99.8831% ( 3) 00:11:49.241 6.678 - 6.706: 99.8893% ( 1) 00:11:49.241 6.873 - 6.901: 99.8954% ( 1) 00:11:49.241 6.901 - 6.929: 99.9016% ( 1) 00:11:49.241 6.984 - 7.012: 99.9077% ( 1) 00:11:49.241 7.040 - 7.068: 99.9139% ( 1) 00:11:49.241 7.096 - 7.123: 99.9200% ( 1) 00:11:49.241 7.235 - 7.290: 99.9262% ( 1) 00:11:49.241 7.457 - 7.513: 99.9323% ( 1) 00:11:49.241 7.791 - 7.847: 99.9446% ( 2) 00:11:49.241 3989.148 - 4017.642: 100.0000% ( 9) 00:11:49.241 00:11:49.241 [2024-07-15 22:28:13.195283] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:11:49.503 Complete histogram 00:11:49.503 ================== 00:11:49.503 Range in us Cumulative Count 00:11:49.503 1.809 - 1.823: 0.2030% ( 33) 00:11:49.503 1.823 - 1.837: 1.5375% ( 217) 00:11:49.503 1.837 - 1.850: 3.3149% ( 289) 00:11:49.503 1.850 - 1.864: 9.7909% ( 1053) 00:11:49.503 1.864 - 1.878: 59.7724% ( 8127) 00:11:49.503 1.878 - 1.892: 90.5289% ( 5001) 00:11:49.503 1.892 - 1.906: 94.4957% ( 645) 00:11:49.503 1.906 - 1.920: 96.0332% ( 250) 00:11:49.503 1.920 - 1.934: 96.5252% ( 80) 00:11:49.503 1.934 - 1.948: 97.4723% ( 154) 00:11:49.503 1.948 - 1.962: 98.6593% ( 193) 00:11:49.503 1.962 - 1.976: 99.2251% ( 92) 00:11:49.503 1.976 - 1.990: 99.3296% ( 17) 00:11:49.503 1.990 - 2.003: 99.3850% ( 9) 00:11:49.503 2.003 - 2.017: 99.4034% ( 3) 00:11:49.503 2.017 - 2.031: 99.4219% ( 3) 00:11:49.504 2.031 - 2.045: 99.4280% ( 1) 00:11:49.504 2.059 - 2.073: 99.4342% ( 1) 00:11:49.504 2.087 - 2.101: 99.4403% ( 1) 00:11:49.504 2.101 - 2.115: 99.4465% ( 1) 00:11:49.504 3.617 - 3.645: 99.4526% ( 1) 00:11:49.504 3.729 - 3.757: 99.4588% ( 1) 00:11:49.504 3.868 - 3.896: 99.4649% ( 1) 00:11:49.504 3.896 - 3.923: 99.4711% ( 1) 00:11:49.504 3.979 - 4.007: 99.4772% ( 1) 00:11:49.504 4.146 - 4.174: 99.4834% ( 1) 00:11:49.504 4.174 - 4.202: 99.4895% ( 1) 00:11:49.504 4.202 - 4.230: 99.4957% ( 1) 00:11:49.504 4.257 - 4.285: 99.5018% ( 1) 00:11:49.504 4.397 - 4.424: 99.5080% ( 1) 00:11:49.504 4.508 - 4.536: 99.5141% ( 1) 00:11:49.504 4.536 - 4.563: 99.5203% ( 1) 00:11:49.504 4.591 - 4.619: 99.5264% ( 1) 00:11:49.504 4.814 - 4.842: 99.5326% ( 1) 00:11:49.504 4.870 - 4.897: 99.5387% ( 1) 00:11:49.504 4.953 - 4.981: 99.5449% ( 1) 00:11:49.504 5.064 - 5.092: 99.5510% ( 1) 00:11:49.504 5.287 - 5.315: 99.5572% ( 1) 00:11:49.504 5.370 - 5.398: 99.5633% ( 1) 00:11:49.504 5.565 - 5.593: 99.5695% ( 1) 00:11:49.504 6.817 - 6.845: 99.5756% ( 1) 00:11:49.504 39.179 - 39.402: 99.5818% ( 1) 00:11:49.504 3989.148 - 4017.642: 100.0000% ( 68) 00:11:49.504 00:11:49.504 22:28:13 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:11:49.504 22:28:13 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:11:49.504 22:28:13 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:11:49.504 22:28:13 nvmf_tcp.nvmf_vfio_user -- 
target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:11:49.504 22:28:13 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:11:49.504 [ 00:11:49.504 { 00:11:49.504 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:11:49.504 "subtype": "Discovery", 00:11:49.504 "listen_addresses": [], 00:11:49.504 "allow_any_host": true, 00:11:49.504 "hosts": [] 00:11:49.504 }, 00:11:49.504 { 00:11:49.504 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:11:49.504 "subtype": "NVMe", 00:11:49.504 "listen_addresses": [ 00:11:49.504 { 00:11:49.504 "trtype": "VFIOUSER", 00:11:49.504 "adrfam": "IPv4", 00:11:49.504 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:11:49.504 "trsvcid": "0" 00:11:49.504 } 00:11:49.504 ], 00:11:49.504 "allow_any_host": true, 00:11:49.504 "hosts": [], 00:11:49.504 "serial_number": "SPDK1", 00:11:49.504 "model_number": "SPDK bdev Controller", 00:11:49.504 "max_namespaces": 32, 00:11:49.504 "min_cntlid": 1, 00:11:49.504 "max_cntlid": 65519, 00:11:49.504 "namespaces": [ 00:11:49.504 { 00:11:49.504 "nsid": 1, 00:11:49.504 "bdev_name": "Malloc1", 00:11:49.504 "name": "Malloc1", 00:11:49.504 "nguid": "60DAAA6A2D2C4992B7AAB62D8023977B", 00:11:49.504 "uuid": "60daaa6a-2d2c-4992-b7aa-b62d8023977b" 00:11:49.504 }, 00:11:49.504 { 00:11:49.504 "nsid": 2, 00:11:49.504 "bdev_name": "Malloc3", 00:11:49.504 "name": "Malloc3", 00:11:49.504 "nguid": "C2ECE10E07474BE6BDB29DB5F5EA94C1", 00:11:49.504 "uuid": "c2ece10e-0747-4be6-bdb2-9db5f5ea94c1" 00:11:49.504 } 00:11:49.504 ] 00:11:49.504 }, 00:11:49.504 { 00:11:49.504 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:11:49.504 "subtype": "NVMe", 00:11:49.504 "listen_addresses": [ 00:11:49.504 { 00:11:49.504 "trtype": "VFIOUSER", 00:11:49.504 "adrfam": "IPv4", 00:11:49.504 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:11:49.504 "trsvcid": "0" 00:11:49.504 } 00:11:49.504 ], 00:11:49.504 "allow_any_host": true, 00:11:49.504 "hosts": [], 00:11:49.504 "serial_number": "SPDK2", 00:11:49.504 "model_number": "SPDK bdev Controller", 00:11:49.504 "max_namespaces": 32, 00:11:49.504 "min_cntlid": 1, 00:11:49.504 "max_cntlid": 65519, 00:11:49.504 "namespaces": [ 00:11:49.504 { 00:11:49.504 "nsid": 1, 00:11:49.504 "bdev_name": "Malloc2", 00:11:49.504 "name": "Malloc2", 00:11:49.504 "nguid": "A61FA4CE46E44446A343D1C22B92EB02", 00:11:49.504 "uuid": "a61fa4ce-46e4-4446-a343-d1c22b92eb02" 00:11:49.504 } 00:11:49.504 ] 00:11:49.504 } 00:11:49.504 ] 00:11:49.504 22:28:13 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:11:49.504 22:28:13 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=4125736 00:11:49.504 22:28:13 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:11:49.504 22:28:13 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:11:49.504 22:28:13 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # local i=0 00:11:49.504 22:28:13 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:11:49.504 22:28:13 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:11:49.504 22:28:13 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0 00:11:49.504 22:28:13 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:11:49.504 22:28:13 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:11:49.798 EAL: No free 2048 kB hugepages reported on node 1 00:11:49.798 [2024-07-15 22:28:13.569559] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:11:49.798 Malloc4 00:11:49.798 22:28:13 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:11:50.057 [2024-07-15 22:28:13.795229] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:11:50.057 22:28:13 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:11:50.057 Asynchronous Event Request test 00:11:50.057 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:11:50.057 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:11:50.057 Registering asynchronous event callbacks... 00:11:50.057 Starting namespace attribute notice tests for all controllers... 00:11:50.057 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:11:50.057 aer_cb - Changed Namespace 00:11:50.057 Cleaning up... 00:11:50.057 [ 00:11:50.057 { 00:11:50.057 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:11:50.057 "subtype": "Discovery", 00:11:50.057 "listen_addresses": [], 00:11:50.057 "allow_any_host": true, 00:11:50.057 "hosts": [] 00:11:50.057 }, 00:11:50.057 { 00:11:50.057 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:11:50.057 "subtype": "NVMe", 00:11:50.057 "listen_addresses": [ 00:11:50.057 { 00:11:50.057 "trtype": "VFIOUSER", 00:11:50.057 "adrfam": "IPv4", 00:11:50.057 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:11:50.057 "trsvcid": "0" 00:11:50.057 } 00:11:50.057 ], 00:11:50.057 "allow_any_host": true, 00:11:50.057 "hosts": [], 00:11:50.057 "serial_number": "SPDK1", 00:11:50.057 "model_number": "SPDK bdev Controller", 00:11:50.057 "max_namespaces": 32, 00:11:50.057 "min_cntlid": 1, 00:11:50.057 "max_cntlid": 65519, 00:11:50.057 "namespaces": [ 00:11:50.057 { 00:11:50.057 "nsid": 1, 00:11:50.057 "bdev_name": "Malloc1", 00:11:50.057 "name": "Malloc1", 00:11:50.057 "nguid": "60DAAA6A2D2C4992B7AAB62D8023977B", 00:11:50.057 "uuid": "60daaa6a-2d2c-4992-b7aa-b62d8023977b" 00:11:50.057 }, 00:11:50.057 { 00:11:50.057 "nsid": 2, 00:11:50.057 "bdev_name": "Malloc3", 00:11:50.057 "name": "Malloc3", 00:11:50.057 "nguid": "C2ECE10E07474BE6BDB29DB5F5EA94C1", 00:11:50.057 "uuid": "c2ece10e-0747-4be6-bdb2-9db5f5ea94c1" 00:11:50.057 } 00:11:50.057 ] 00:11:50.057 }, 00:11:50.057 { 00:11:50.057 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:11:50.057 "subtype": "NVMe", 00:11:50.057 "listen_addresses": [ 00:11:50.057 { 00:11:50.057 "trtype": "VFIOUSER", 00:11:50.057 "adrfam": "IPv4", 00:11:50.057 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:11:50.057 "trsvcid": "0" 00:11:50.057 } 00:11:50.057 ], 00:11:50.057 "allow_any_host": true, 00:11:50.057 "hosts": [], 00:11:50.057 "serial_number": "SPDK2", 00:11:50.057 "model_number": "SPDK bdev Controller", 00:11:50.057 
"max_namespaces": 32, 00:11:50.057 "min_cntlid": 1, 00:11:50.057 "max_cntlid": 65519, 00:11:50.057 "namespaces": [ 00:11:50.057 { 00:11:50.057 "nsid": 1, 00:11:50.057 "bdev_name": "Malloc2", 00:11:50.057 "name": "Malloc2", 00:11:50.057 "nguid": "A61FA4CE46E44446A343D1C22B92EB02", 00:11:50.057 "uuid": "a61fa4ce-46e4-4446-a343-d1c22b92eb02" 00:11:50.057 }, 00:11:50.057 { 00:11:50.057 "nsid": 2, 00:11:50.057 "bdev_name": "Malloc4", 00:11:50.057 "name": "Malloc4", 00:11:50.057 "nguid": "224D62E01804462AA2BE66E3C08A241E", 00:11:50.057 "uuid": "224d62e0-1804-462a-a2be-66e3c08a241e" 00:11:50.057 } 00:11:50.057 ] 00:11:50.057 } 00:11:50.057 ] 00:11:50.057 22:28:13 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 4125736 00:11:50.057 22:28:13 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:11:50.057 22:28:13 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 4118022 00:11:50.057 22:28:13 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@948 -- # '[' -z 4118022 ']' 00:11:50.057 22:28:13 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # kill -0 4118022 00:11:50.057 22:28:13 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # uname 00:11:50.057 22:28:13 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:50.057 22:28:13 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4118022 00:11:50.316 22:28:14 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:11:50.316 22:28:14 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:11:50.316 22:28:14 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4118022' 00:11:50.316 killing process with pid 4118022 00:11:50.316 22:28:14 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@967 -- # kill 4118022 00:11:50.316 22:28:14 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@972 -- # wait 4118022 00:11:50.575 22:28:14 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:11:50.575 22:28:14 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:11:50.575 22:28:14 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:11:50.575 22:28:14 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:11:50.575 22:28:14 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:11:50.575 22:28:14 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=4125889 00:11:50.575 22:28:14 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 4125889' 00:11:50.575 Process pid: 4125889 00:11:50.575 22:28:14 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:11:50.575 22:28:14 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:11:50.575 22:28:14 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 4125889 00:11:50.575 22:28:14 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@829 -- # '[' -z 4125889 ']' 00:11:50.575 22:28:14 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:50.575 22:28:14 
nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:50.575 22:28:14 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:50.575 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:50.575 22:28:14 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:50.575 22:28:14 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:11:50.575 [2024-07-15 22:28:14.343505] thread.c:2948:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:11:50.575 [2024-07-15 22:28:14.344374] Starting SPDK v24.09-pre git sha1 f8598a71f / DPDK 24.03.0 initialization... 00:11:50.575 [2024-07-15 22:28:14.344412] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:50.575 EAL: No free 2048 kB hugepages reported on node 1 00:11:50.575 [2024-07-15 22:28:14.396726] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:50.575 [2024-07-15 22:28:14.464719] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:50.575 [2024-07-15 22:28:14.464761] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:50.575 [2024-07-15 22:28:14.464767] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:50.575 [2024-07-15 22:28:14.464777] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:50.575 [2024-07-15 22:28:14.464782] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:50.575 [2024-07-15 22:28:14.464856] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:50.575 [2024-07-15 22:28:14.464881] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:50.575 [2024-07-15 22:28:14.464965] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:11:50.575 [2024-07-15 22:28:14.464966] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:50.575 [2024-07-15 22:28:14.544887] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:11:50.575 [2024-07-15 22:28:14.545031] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:11:50.576 [2024-07-15 22:28:14.545267] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:11:50.576 [2024-07-15 22:28:14.545572] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:11:50.576 [2024-07-15 22:28:14.545803] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
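The relaunch above starts the target with --interrupt-mode, so the reactors and the nvmf poll-group threads run interrupt-driven rather than polling, which is what the thread.c notices record. Restated as a stand-alone sketch; the binary path and flags are exactly as echoed in the log, while the wait loop is a simplified stand-in for the script's waitforlisten helper:

  # App instance 0, all tracepoint groups (-e 0xFFFF), cores 0-3, interrupt mode.
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
      -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode &
  nvmfpid=$!

  # Block until the default RPC socket exists before issuing any rpc.py calls.
  until [ -S /var/tmp/spdk.sock ]; do sleep 0.1; done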
00:11:51.509 22:28:15 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:51.509 22:28:15 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@862 -- # return 0 00:11:51.509 22:28:15 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:11:52.445 22:28:16 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:11:52.445 22:28:16 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:11:52.445 22:28:16 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:11:52.445 22:28:16 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:11:52.445 22:28:16 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:11:52.445 22:28:16 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:11:52.703 Malloc1 00:11:52.703 22:28:16 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:11:52.962 22:28:16 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:11:52.962 22:28:16 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:11:53.220 22:28:17 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:11:53.220 22:28:17 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:11:53.220 22:28:17 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:11:53.478 Malloc2 00:11:53.478 22:28:17 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:11:53.478 22:28:17 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:11:53.737 22:28:17 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:11:53.995 22:28:17 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:11:53.995 22:28:17 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 4125889 00:11:53.995 22:28:17 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@948 -- # '[' -z 4125889 ']' 00:11:53.995 22:28:17 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # kill -0 4125889 00:11:53.995 22:28:17 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # uname 00:11:53.995 22:28:17 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:53.995 22:28:17 
nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4125889 00:11:53.995 22:28:17 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:11:53.995 22:28:17 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:11:53.995 22:28:17 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4125889' 00:11:53.995 killing process with pid 4125889 00:11:53.995 22:28:17 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@967 -- # kill 4125889 00:11:53.995 22:28:17 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@972 -- # wait 4125889 00:11:54.253 22:28:18 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:11:54.253 22:28:18 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:11:54.253 00:11:54.253 real 0m51.242s 00:11:54.253 user 3m23.026s 00:11:54.253 sys 0m3.549s 00:11:54.253 22:28:18 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:54.253 22:28:18 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:11:54.253 ************************************ 00:11:54.253 END TEST nvmf_vfio_user 00:11:54.253 ************************************ 00:11:54.253 22:28:18 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:11:54.253 22:28:18 nvmf_tcp -- nvmf/nvmf.sh@42 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:11:54.253 22:28:18 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:54.253 22:28:18 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:54.253 22:28:18 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:54.253 ************************************ 00:11:54.253 START TEST nvmf_vfio_user_nvme_compliance 00:11:54.253 ************************************ 00:11:54.253 22:28:18 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:11:54.533 * Looking for test storage... 
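Before the compliance suite begins, it helps to see in one place the per-controller setup that both phases of nvmf_vfio_user.sh drove over rpc.py, scattered through the xtrace output above. The commands and sizes are exactly as logged; the loop is an illustrative condensation (the script itself unrolls it with seq), and the interrupt-mode phase additionally passed -M -I to nvmf_create_transport:

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

  # One-time: create the vfio-user transport.
  $RPC nvmf_create_transport -t VFIOUSER

  for i in 1 2; do
      mkdir -p /var/run/vfio-user/domain/vfio-user$i/$i
      # 64 MiB malloc bdev, 512-byte blocks, becomes namespace 1 of cnode$i.
      $RPC bdev_malloc_create 64 512 -b Malloc$i
      $RPC nvmf_create_subsystem nqn.2019-07.io.spdk:cnode$i -a -s SPDK$i
      $RPC nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode$i Malloc$i
      $RPC nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode$i \
          -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user$i/$i -s 0
  done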
00:11:54.533 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:11:54.533 22:28:18 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:54.533 22:28:18 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:11:54.533 22:28:18 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:54.533 22:28:18 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:54.533 22:28:18 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:54.533 22:28:18 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:54.533 22:28:18 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:54.533 22:28:18 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:54.533 22:28:18 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:54.533 22:28:18 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:54.533 22:28:18 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:54.533 22:28:18 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:54.533 22:28:18 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:11:54.533 22:28:18 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:11:54.533 22:28:18 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:54.533 22:28:18 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:54.533 22:28:18 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:54.533 22:28:18 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:54.533 22:28:18 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:54.533 22:28:18 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:54.533 22:28:18 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:54.533 22:28:18 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:54.533 22:28:18 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:54.533 22:28:18 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:54.533 22:28:18 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:54.533 22:28:18 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:11:54.533 22:28:18 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:54.533 22:28:18 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@47 -- # : 0 00:11:54.533 22:28:18 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:54.533 22:28:18 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:54.533 22:28:18 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:54.533 22:28:18 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:54.533 22:28:18 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:54.533 22:28:18 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:54.533 22:28:18 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:54.533 22:28:18 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:54.533 22:28:18 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:54.533 22:28:18 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:54.533 22:28:18 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:11:54.533 22:28:18 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:11:54.533 22:28:18 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:11:54.533 22:28:18 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- 
compliance/compliance.sh@20 -- # nvmfpid=4126651 00:11:54.534 22:28:18 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 4126651' 00:11:54.534 Process pid: 4126651 00:11:54.534 22:28:18 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:11:54.534 22:28:18 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:11:54.534 22:28:18 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 4126651 00:11:54.534 22:28:18 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@829 -- # '[' -z 4126651 ']' 00:11:54.534 22:28:18 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:54.534 22:28:18 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:54.534 22:28:18 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:54.534 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:54.534 22:28:18 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:54.534 22:28:18 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:11:54.534 [2024-07-15 22:28:18.316313] Starting SPDK v24.09-pre git sha1 f8598a71f / DPDK 24.03.0 initialization... 00:11:54.534 [2024-07-15 22:28:18.316365] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:54.534 EAL: No free 2048 kB hugepages reported on node 1 00:11:54.534 [2024-07-15 22:28:18.370901] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:54.534 [2024-07-15 22:28:18.450553] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:54.534 [2024-07-15 22:28:18.450589] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:54.534 [2024-07-15 22:28:18.450596] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:54.534 [2024-07-15 22:28:18.450602] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:54.534 [2024-07-15 22:28:18.450607] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:11:54.534 [2024-07-15 22:28:18.450642] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:54.534 [2024-07-15 22:28:18.450675] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:54.534 [2024-07-15 22:28:18.450676] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:55.469 22:28:19 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:55.469 22:28:19 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@862 -- # return 0 00:11:55.469 22:28:19 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:11:56.404 22:28:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:11:56.404 22:28:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:11:56.404 22:28:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:11:56.404 22:28:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:56.404 22:28:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:11:56.404 22:28:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:56.404 22:28:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:11:56.404 22:28:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:11:56.404 22:28:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:56.404 22:28:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:11:56.404 malloc0 00:11:56.404 22:28:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:56.404 22:28:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:11:56.404 22:28:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:56.404 22:28:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:11:56.404 22:28:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:56.404 22:28:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:11:56.404 22:28:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:56.405 22:28:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:11:56.405 22:28:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:56.405 22:28:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:11:56.405 22:28:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:56.405 22:28:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:11:56.405 22:28:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:56.405 
22:28:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:11:56.405 EAL: No free 2048 kB hugepages reported on node 1 00:11:56.405 00:11:56.405 00:11:56.405 CUnit - A unit testing framework for C - Version 2.1-3 00:11:56.405 http://cunit.sourceforge.net/ 00:11:56.405 00:11:56.405 00:11:56.405 Suite: nvme_compliance 00:11:56.405 Test: admin_identify_ctrlr_verify_dptr ...[2024-07-15 22:28:20.356621] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:11:56.405 [2024-07-15 22:28:20.357958] vfio_user.c: 804:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:11:56.405 [2024-07-15 22:28:20.357972] vfio_user.c:5514:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:11:56.405 [2024-07-15 22:28:20.357979] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:11:56.405 [2024-07-15 22:28:20.359636] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:11:56.663 passed 00:11:56.663 Test: admin_identify_ctrlr_verify_fused ...[2024-07-15 22:28:20.437193] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:11:56.663 [2024-07-15 22:28:20.442233] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:11:56.663 passed 00:11:56.664 Test: admin_identify_ns ...[2024-07-15 22:28:20.520687] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:11:56.664 [2024-07-15 22:28:20.580240] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:11:56.664 [2024-07-15 22:28:20.588243] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:11:56.664 [2024-07-15 22:28:20.612334] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:11:56.922 passed 00:11:56.922 Test: admin_get_features_mandatory_features ...[2024-07-15 22:28:20.685559] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:11:56.922 [2024-07-15 22:28:20.689588] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:11:56.922 passed 00:11:56.922 Test: admin_get_features_optional_features ...[2024-07-15 22:28:20.768060] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:11:56.922 [2024-07-15 22:28:20.771079] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:11:56.922 passed 00:11:56.922 Test: admin_set_features_number_of_queues ...[2024-07-15 22:28:20.849155] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:11:57.182 [2024-07-15 22:28:20.954315] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:11:57.182 passed 00:11:57.182 Test: admin_get_log_page_mandatory_logs ...[2024-07-15 22:28:21.028560] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:11:57.182 [2024-07-15 22:28:21.031581] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:11:57.182 passed 00:11:57.182 Test: admin_get_log_page_with_lpo ...[2024-07-15 22:28:21.112707] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:11:57.442 [2024-07-15 22:28:21.181236] 
ctrlr.c:2677:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:11:57.442 [2024-07-15 22:28:21.194294] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:11:57.442 passed 00:11:57.442 Test: fabric_property_get ...[2024-07-15 22:28:21.267437] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:11:57.442 [2024-07-15 22:28:21.268668] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x7f failed 00:11:57.442 [2024-07-15 22:28:21.272471] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:11:57.442 passed 00:11:57.442 Test: admin_delete_io_sq_use_admin_qid ...[2024-07-15 22:28:21.347950] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:11:57.442 [2024-07-15 22:28:21.349185] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:11:57.442 [2024-07-15 22:28:21.352982] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:11:57.442 passed 00:11:57.701 Test: admin_delete_io_sq_delete_sq_twice ...[2024-07-15 22:28:21.429763] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:11:57.701 [2024-07-15 22:28:21.513234] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:11:57.701 [2024-07-15 22:28:21.529240] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:11:57.701 [2024-07-15 22:28:21.534330] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:11:57.701 passed 00:11:57.701 Test: admin_delete_io_cq_use_admin_qid ...[2024-07-15 22:28:21.612328] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:11:57.701 [2024-07-15 22:28:21.613558] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:11:57.701 [2024-07-15 22:28:21.615347] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:11:57.701 passed 00:11:57.960 Test: admin_delete_io_cq_delete_cq_first ...[2024-07-15 22:28:21.693745] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:11:57.960 [2024-07-15 22:28:21.769234] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:11:57.960 [2024-07-15 22:28:21.792407] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:11:57.960 [2024-07-15 22:28:21.797451] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:11:57.960 passed 00:11:57.960 Test: admin_create_io_cq_verify_iv_pc ...[2024-07-15 22:28:21.874302] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:11:57.960 [2024-07-15 22:28:21.875548] vfio_user.c:2158:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:11:57.960 [2024-07-15 22:28:21.875575] vfio_user.c:2152:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:11:57.960 [2024-07-15 22:28:21.877324] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:11:57.960 passed 00:11:58.219 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-07-15 22:28:21.954815] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:11:58.219 [2024-07-15 22:28:22.047231] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: 
invalid I/O queue size 1 00:11:58.219 [2024-07-15 22:28:22.055235] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:11:58.219 [2024-07-15 22:28:22.063234] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:11:58.219 [2024-07-15 22:28:22.071233] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:11:58.219 [2024-07-15 22:28:22.100314] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:11:58.219 passed 00:11:58.219 Test: admin_create_io_sq_verify_pc ...[2024-07-15 22:28:22.180318] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:11:58.478 [2024-07-15 22:28:22.194241] vfio_user.c:2051:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:11:58.478 [2024-07-15 22:28:22.211491] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:11:58.478 passed 00:11:58.478 Test: admin_create_io_qp_max_qps ...[2024-07-15 22:28:22.292053] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:11:59.857 [2024-07-15 22:28:23.398238] nvme_ctrlr.c:5465:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user] No free I/O queue IDs 00:11:59.857 [2024-07-15 22:28:23.784371] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:11:59.857 passed 00:12:00.117 Test: admin_create_io_sq_shared_cq ...[2024-07-15 22:28:23.861605] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:00.117 [2024-07-15 22:28:23.994232] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:12:00.117 [2024-07-15 22:28:24.031288] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:00.117 passed 00:12:00.117 00:12:00.117 Run Summary: Type Total Ran Passed Failed Inactive 00:12:00.117 suites 1 1 n/a 0 0 00:12:00.117 tests 18 18 18 0 0 00:12:00.117 asserts 360 360 360 0 n/a 00:12:00.117 00:12:00.117 Elapsed time = 1.512 seconds 00:12:00.117 22:28:24 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 4126651 00:12:00.117 22:28:24 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@948 -- # '[' -z 4126651 ']' 00:12:00.117 22:28:24 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@952 -- # kill -0 4126651 00:12:00.117 22:28:24 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@953 -- # uname 00:12:00.117 22:28:24 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:00.117 22:28:24 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4126651 00:12:00.376 22:28:24 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:12:00.376 22:28:24 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:12:00.376 22:28:24 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4126651' 00:12:00.376 killing process with pid 4126651 00:12:00.376 22:28:24 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@967 -- # kill 4126651 00:12:00.376 22:28:24 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@972 -- # wait 4126651 00:12:00.376 22:28:24 
nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:12:00.376 22:28:24 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:12:00.376 00:12:00.376 real 0m6.179s 00:12:00.376 user 0m17.641s 00:12:00.376 sys 0m0.456s 00:12:00.376 22:28:24 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:00.376 22:28:24 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:12:00.376 ************************************ 00:12:00.376 END TEST nvmf_vfio_user_nvme_compliance 00:12:00.376 ************************************ 00:12:00.636 22:28:24 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:12:00.636 22:28:24 nvmf_tcp -- nvmf/nvmf.sh@43 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:12:00.636 22:28:24 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:00.636 22:28:24 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:00.636 22:28:24 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:00.636 ************************************ 00:12:00.636 START TEST nvmf_vfio_user_fuzz 00:12:00.636 ************************************ 00:12:00.636 22:28:24 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:12:00.636 * Looking for test storage... 00:12:00.636 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:00.636 22:28:24 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:00.636 22:28:24 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:12:00.636 22:28:24 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:00.636 22:28:24 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:00.636 22:28:24 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:00.636 22:28:24 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:00.636 22:28:24 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:00.636 22:28:24 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:00.636 22:28:24 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:00.636 22:28:24 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:00.636 22:28:24 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:00.636 22:28:24 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:00.636 22:28:24 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:12:00.636 22:28:24 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:12:00.636 22:28:24 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:00.636 22:28:24 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:00.636 22:28:24 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 
00:12:00.636 22:28:24 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:00.636 22:28:24 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:00.636 22:28:24 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:00.636 22:28:24 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:00.636 22:28:24 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:00.636 22:28:24 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:00.636 22:28:24 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:00.636 22:28:24 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:00.636 22:28:24 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:12:00.636 22:28:24 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:00.636 22:28:24 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@47 -- # : 0 00:12:00.636 22:28:24 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:00.636 22:28:24 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:00.636 22:28:24 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:00.636 22:28:24 
nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:00.636 22:28:24 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:00.636 22:28:24 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:00.636 22:28:24 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:00.636 22:28:24 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:00.636 22:28:24 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:12:00.636 22:28:24 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:12:00.636 22:28:24 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:12:00.636 22:28:24 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:12:00.636 22:28:24 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:12:00.636 22:28:24 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:12:00.636 22:28:24 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:12:00.637 22:28:24 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=4127692 00:12:00.637 22:28:24 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 4127692' 00:12:00.637 Process pid: 4127692 00:12:00.637 22:28:24 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:12:00.637 22:28:24 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:12:00.637 22:28:24 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 4127692 00:12:00.637 22:28:24 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@829 -- # '[' -z 4127692 ']' 00:12:00.637 22:28:24 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:00.637 22:28:24 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:00.637 22:28:24 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:00.637 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
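For orientation before the trace continues: the launch-and-wait pattern here, plus the vfio-user target the script assembles once the RPC server answers, reduce to a short shell sequence. A minimal sketch, assuming nvmf_tgt and rpc.py from an SPDK build are on PATH (the real waitforlisten helper is more defensive than this simple poll):

    # start the target on core 0, all tracepoint groups enabled (mirrors -i 0 -e 0xFFFF -m 0x1)
    nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
    nvmfpid=$!
    trap 'kill $nvmfpid; exit 1' SIGINT SIGTERM EXIT

    # what waitforlisten boils down to: poll until the RPC socket accepts commands
    until rpc.py -s /var/tmp/spdk.sock framework_wait_init >/dev/null 2>&1; do
        sleep 1
    done

    # the RPC sequence the trace shows for the compliance target above and the fuzz target below
    rpc.py nvmf_create_transport -t VFIOUSER
    mkdir -p /var/run/vfio-user
    rpc.py bdev_malloc_create 64 512 -b malloc0
    rpc.py nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk
    rpc.py nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0
    rpc.py nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0

Note that the listener address is a directory rather than an IP: vfio-user carries the NVMe queues over sockets created under /var/run/vfio-user, which is why both tests mkdir it before adding the listener and rm -rf it on teardown.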
00:12:00.637 22:28:24 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:00.637 22:28:24 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:12:01.574 22:28:25 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:01.574 22:28:25 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@862 -- # return 0 00:12:01.574 22:28:25 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:12:02.511 22:28:26 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:12:02.511 22:28:26 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:02.511 22:28:26 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:12:02.511 22:28:26 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:02.511 22:28:26 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:12:02.511 22:28:26 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:12:02.511 22:28:26 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:02.511 22:28:26 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:12:02.511 malloc0 00:12:02.511 22:28:26 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:02.511 22:28:26 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:12:02.511 22:28:26 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:02.511 22:28:26 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:12:02.511 22:28:26 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:02.511 22:28:26 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:12:02.511 22:28:26 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:02.511 22:28:26 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:12:02.511 22:28:26 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:02.511 22:28:26 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:12:02.512 22:28:26 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:02.512 22:28:26 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:12:02.512 22:28:26 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:02.512 22:28:26 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 00:12:02.512 22:28:26 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:12:34.625 Fuzzing completed. 
Shutting down the fuzz application 00:12:34.625 00:12:34.625 Dumping successful admin opcodes: 00:12:34.625 8, 9, 10, 24, 00:12:34.625 Dumping successful io opcodes: 00:12:34.625 0, 00:12:34.625 NS: 0x200003a1ef00 I/O qp, Total commands completed: 1052269, total successful commands: 4161, random_seed: 1155042176 00:12:34.625 NS: 0x200003a1ef00 admin qp, Total commands completed: 258070, total successful commands: 2083, random_seed: 3716054656 00:12:34.625 22:28:56 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:12:34.625 22:28:56 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:34.625 22:28:56 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:12:34.625 22:28:56 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:34.625 22:28:56 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 4127692 00:12:34.625 22:28:56 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@948 -- # '[' -z 4127692 ']' 00:12:34.625 22:28:56 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@952 -- # kill -0 4127692 00:12:34.625 22:28:56 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@953 -- # uname 00:12:34.625 22:28:56 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:34.625 22:28:56 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4127692 00:12:34.625 22:28:56 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:12:34.625 22:28:56 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:12:34.625 22:28:56 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4127692' 00:12:34.625 killing process with pid 4127692 00:12:34.625 22:28:56 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@967 -- # kill 4127692 00:12:34.625 22:28:56 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@972 -- # wait 4127692 00:12:34.625 22:28:57 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:12:34.625 22:28:57 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:12:34.625 00:12:34.625 real 0m32.747s 00:12:34.625 user 0m31.381s 00:12:34.625 sys 0m30.805s 00:12:34.625 22:28:57 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:34.625 22:28:57 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:12:34.625 ************************************ 00:12:34.625 END TEST nvmf_vfio_user_fuzz 00:12:34.625 ************************************ 00:12:34.625 22:28:57 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:12:34.625 22:28:57 nvmf_tcp -- nvmf/nvmf.sh@47 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:12:34.625 22:28:57 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:34.625 22:28:57 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:34.625 22:28:57 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:34.625 ************************************ 
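Two notes on reading the dump above. The decimal opcodes map to Abort (0x08), Set Features (0x09), Get Features (0x0A) and Keep Alive (0x18) on the admin queue, and Flush (0x00) on the I/O queue; those were the only commands the random 30-second stream landed legally. The completion counters also give the fuzzer's hit rate; a throwaway sketch for extracting it, assuming the two NS summary lines were saved to fuzz.log (a hypothetical file name, the script does not write one):

    awk -F'[:,]' '/Total commands completed/ {
        total = $4 + 0; ok = $6 + 0; sub(/^ +/, "", $2)
        printf "%s: %.2f%% of fuzzed commands succeeded\n", $2, 100 * ok / total
    }' fuzz.log

which works out to roughly 0.40% on the I/O queue pair (4161 of 1052269) and 0.81% on the admin queue pair (2083 of 258070).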
00:12:34.625 START TEST nvmf_host_management 00:12:34.625 ************************************ 00:12:34.625 22:28:57 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:12:34.625 * Looking for test storage... 00:12:34.625 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:34.625 22:28:57 nvmf_tcp.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:34.625 22:28:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:12:34.625 22:28:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:34.625 22:28:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:34.625 22:28:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:34.625 22:28:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:34.625 22:28:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:34.625 22:28:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:34.625 22:28:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:34.625 22:28:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:34.625 22:28:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:34.625 22:28:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:34.625 22:28:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:12:34.625 22:28:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:12:34.625 22:28:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:34.625 22:28:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:34.625 22:28:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:34.625 22:28:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:34.625 22:28:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:34.625 22:28:57 nvmf_tcp.nvmf_host_management -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:34.625 22:28:57 nvmf_tcp.nvmf_host_management -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:34.625 22:28:57 nvmf_tcp.nvmf_host_management -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:34.625 22:28:57 nvmf_tcp.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:34.625 
22:28:57 nvmf_tcp.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:34.625 22:28:57 nvmf_tcp.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:34.625 22:28:57 nvmf_tcp.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:12:34.625 22:28:57 nvmf_tcp.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:34.625 22:28:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@47 -- # : 0 00:12:34.625 22:28:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:34.625 22:28:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:34.625 22:28:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:34.625 22:28:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:34.625 22:28:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:34.625 22:28:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:34.625 22:28:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:34.625 22:28:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:34.625 22:28:57 nvmf_tcp.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:34.626 22:28:57 nvmf_tcp.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:34.626 22:28:57 nvmf_tcp.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:12:34.626 22:28:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:34.626 22:28:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:34.626 22:28:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:34.626 22:28:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:34.626 22:28:57 
nvmf_tcp.nvmf_host_management -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:34.626 22:28:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:34.626 22:28:57 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:34.626 22:28:57 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:34.626 22:28:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:34.626 22:28:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:34.626 22:28:57 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@285 -- # xtrace_disable 00:12:34.626 22:28:57 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:38.823 22:29:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:38.823 22:29:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@291 -- # pci_devs=() 00:12:38.823 22:29:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:38.823 22:29:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:38.823 22:29:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:38.823 22:29:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:38.823 22:29:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:38.823 22:29:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@295 -- # net_devs=() 00:12:38.823 22:29:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:38.823 22:29:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@296 -- # e810=() 00:12:38.823 22:29:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@296 -- # local -ga e810 00:12:38.823 22:29:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@297 -- # x722=() 00:12:38.823 22:29:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@297 -- # local -ga x722 00:12:38.823 22:29:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@298 -- # mlx=() 00:12:38.823 22:29:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@298 -- # local -ga mlx 00:12:38.823 22:29:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:38.823 22:29:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:38.823 22:29:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:38.823 22:29:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:38.823 22:29:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:38.823 22:29:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:38.823 22:29:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:38.823 22:29:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:38.823 22:29:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:38.823 22:29:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:38.823 22:29:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@318 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:38.823 22:29:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:38.823 22:29:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:38.823 22:29:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:38.823 22:29:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:38.823 22:29:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:38.823 22:29:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:38.823 22:29:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:38.823 22:29:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:12:38.823 Found 0000:86:00.0 (0x8086 - 0x159b) 00:12:38.823 22:29:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:38.823 22:29:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:38.823 22:29:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:38.823 22:29:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:38.823 22:29:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:38.823 22:29:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:38.823 22:29:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:12:38.823 Found 0000:86:00.1 (0x8086 - 0x159b) 00:12:38.823 22:29:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:38.823 22:29:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:38.823 22:29:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:38.823 22:29:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:38.823 22:29:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:38.823 22:29:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:38.823 22:29:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:38.823 22:29:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:38.823 22:29:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:38.823 22:29:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:38.823 22:29:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:38.823 22:29:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:38.823 22:29:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:38.823 22:29:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:38.823 22:29:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:38.823 22:29:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:12:38.823 Found net devices under 0000:86:00.0: cvl_0_0 00:12:38.824 22:29:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:12:38.824 22:29:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:38.824 22:29:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:38.824 22:29:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:38.824 22:29:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:38.824 22:29:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:38.824 22:29:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:38.824 22:29:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:38.824 22:29:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:12:38.824 Found net devices under 0000:86:00.1: cvl_0_1 00:12:38.824 22:29:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:38.824 22:29:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:38.824 22:29:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # is_hw=yes 00:12:38.824 22:29:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:38.824 22:29:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:38.824 22:29:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:38.824 22:29:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:38.824 22:29:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:38.824 22:29:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:38.824 22:29:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:38.824 22:29:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:38.824 22:29:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:38.824 22:29:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:38.824 22:29:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:38.824 22:29:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:38.824 22:29:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:38.824 22:29:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:38.824 22:29:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:38.824 22:29:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:38.824 22:29:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:38.824 22:29:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:38.824 22:29:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:38.824 22:29:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:38.824 22:29:02 nvmf_tcp.nvmf_host_management -- 
nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:38.824 22:29:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:38.824 22:29:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:38.824 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:38.824 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.175 ms 00:12:38.824 00:12:38.824 --- 10.0.0.2 ping statistics --- 00:12:38.824 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:38.824 rtt min/avg/max/mdev = 0.175/0.175/0.175/0.000 ms 00:12:38.824 22:29:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:38.824 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:38.824 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.207 ms 00:12:38.824 00:12:38.824 --- 10.0.0.1 ping statistics --- 00:12:38.824 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:38.824 rtt min/avg/max/mdev = 0.207/0.207/0.207/0.000 ms 00:12:38.824 22:29:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:38.824 22:29:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@422 -- # return 0 00:12:38.824 22:29:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:38.824 22:29:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:38.824 22:29:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:38.824 22:29:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:38.824 22:29:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:38.824 22:29:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:38.824 22:29:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:38.824 22:29:02 nvmf_tcp.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:12:38.824 22:29:02 nvmf_tcp.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:12:38.824 22:29:02 nvmf_tcp.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:12:38.824 22:29:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:38.824 22:29:02 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:38.824 22:29:02 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:38.824 22:29:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@481 -- # nvmfpid=4136147 00:12:38.824 22:29:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@482 -- # waitforlisten 4136147 00:12:38.824 22:29:02 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@829 -- # '[' -z 4136147 ']' 00:12:38.824 22:29:02 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:38.824 22:29:02 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:38.824 22:29:02 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:38.824 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
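The ping exchange just above is the tail end of nvmftestinit: the two ice ports found earlier are split across a network namespace so that target and initiator traffic crosses the physical link. Condensed from the trace into one runnable block (device names as detected on this node; the real helper also flushes addresses and handles cleanup):

    ip netns add cvl_0_0_ns_spdk                                 # target side lives in its own ns
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                          # initiator port stays in the root ns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT # open TCP/4420 on the test interface
    ping -c 1 10.0.0.2                                           # root ns -> target ns
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1             # target ns -> root ns

Every target invocation from here on is wrapped in ip netns exec cvl_0_0_ns_spdk, which is why the nvmf_tgt command in the trace below carries that prefix.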
00:12:38.824 22:29:02 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:38.824 22:29:02 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:38.824 22:29:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:12:38.824 [2024-07-15 22:29:02.675646] Starting SPDK v24.09-pre git sha1 f8598a71f / DPDK 24.03.0 initialization... 00:12:38.824 [2024-07-15 22:29:02.675688] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:38.824 EAL: No free 2048 kB hugepages reported on node 1 00:12:38.824 [2024-07-15 22:29:02.733088] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:39.084 [2024-07-15 22:29:02.814665] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:39.084 [2024-07-15 22:29:02.814700] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:39.084 [2024-07-15 22:29:02.814707] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:39.084 [2024-07-15 22:29:02.814714] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:39.084 [2024-07-15 22:29:02.814719] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:39.084 [2024-07-15 22:29:02.814757] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:39.084 [2024-07-15 22:29:02.814839] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:39.084 [2024-07-15 22:29:02.814947] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:39.084 [2024-07-15 22:29:02.814948] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:12:39.652 22:29:03 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:39.652 22:29:03 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@862 -- # return 0 00:12:39.652 22:29:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:39.652 22:29:03 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:39.652 22:29:03 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:39.652 22:29:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:39.652 22:29:03 nvmf_tcp.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:39.652 22:29:03 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:39.652 22:29:03 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:39.652 [2024-07-15 22:29:03.533113] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:39.652 22:29:03 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:39.652 22:29:03 nvmf_tcp.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:12:39.652 22:29:03 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:39.652 22:29:03 
nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:39.652 22:29:03 nvmf_tcp.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:12:39.653 22:29:03 nvmf_tcp.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:12:39.653 22:29:03 nvmf_tcp.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:12:39.653 22:29:03 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:39.653 22:29:03 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:39.653 Malloc0 00:12:39.653 [2024-07-15 22:29:03.593007] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:39.653 22:29:03 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:39.653 22:29:03 nvmf_tcp.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:12:39.653 22:29:03 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:39.653 22:29:03 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:39.911 22:29:03 nvmf_tcp.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=4136414 00:12:39.911 22:29:03 nvmf_tcp.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 4136414 /var/tmp/bdevperf.sock 00:12:39.911 22:29:03 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@829 -- # '[' -z 4136414 ']' 00:12:39.911 22:29:03 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:12:39.911 22:29:03 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:12:39.911 22:29:03 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:12:39.911 22:29:03 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:39.911 22:29:03 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:12:39.911 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:12:39.911 22:29:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:12:39.911 22:29:03 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:39.912 22:29:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:12:39.912 22:29:03 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:39.912 22:29:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:12:39.912 22:29:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:12:39.912 { 00:12:39.912 "params": { 00:12:39.912 "name": "Nvme$subsystem", 00:12:39.912 "trtype": "$TEST_TRANSPORT", 00:12:39.912 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:39.912 "adrfam": "ipv4", 00:12:39.912 "trsvcid": "$NVMF_PORT", 00:12:39.912 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:39.912 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:39.912 "hdgst": ${hdgst:-false}, 00:12:39.912 "ddgst": ${ddgst:-false} 00:12:39.912 }, 00:12:39.912 "method": "bdev_nvme_attach_controller" 00:12:39.912 } 00:12:39.912 EOF 00:12:39.912 )") 00:12:39.912 22:29:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:12:39.912 22:29:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:12:39.912 22:29:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:12:39.912 22:29:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:12:39.912 "params": { 00:12:39.912 "name": "Nvme0", 00:12:39.912 "trtype": "tcp", 00:12:39.912 "traddr": "10.0.0.2", 00:12:39.912 "adrfam": "ipv4", 00:12:39.912 "trsvcid": "4420", 00:12:39.912 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:12:39.912 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:12:39.912 "hdgst": false, 00:12:39.912 "ddgst": false 00:12:39.912 }, 00:12:39.912 "method": "bdev_nvme_attach_controller" 00:12:39.912 }' 00:12:39.912 [2024-07-15 22:29:03.684948] Starting SPDK v24.09-pre git sha1 f8598a71f / DPDK 24.03.0 initialization... 00:12:39.912 [2024-07-15 22:29:03.684991] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4136414 ] 00:12:39.912 EAL: No free 2048 kB hugepages reported on node 1 00:12:39.912 [2024-07-15 22:29:03.739411] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:39.912 [2024-07-15 22:29:03.813294] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:40.170 Running I/O for 10 seconds... 
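Decoding the bdevperf invocation that just started: -q 64 is the queue depth, -o 65536 the I/O size in bytes, -w verify a write-then-read-back workload that checks data integrity, -t 10 the run time in seconds, and -r the RPC socket the test will poll. The --json /dev/fd/63 argument is bash process substitution handing bdevperf the bdev_nvme_attach_controller config printed above; the call reduces to (gen_nvmf_target_json being the helper traced above, shown here by name only):

    bdevperf -r /var/tmp/bdevperf.sock \
             --json <(gen_nvmf_target_json 0) \
             -q 64 -o 65536 -w verify -t 10

With the controller attached from the JSON at startup, the Nvme0n1 bdev exists inside bdevperf immediately, with no separate attach step over RPC.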
00:12:40.740 22:29:04 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:40.740 22:29:04 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@862 -- # return 0 00:12:40.740 22:29:04 nvmf_tcp.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:12:40.740 22:29:04 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:40.740 22:29:04 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:40.740 22:29:04 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:40.740 22:29:04 nvmf_tcp.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:12:40.740 22:29:04 nvmf_tcp.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:12:40.740 22:29:04 nvmf_tcp.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:12:40.740 22:29:04 nvmf_tcp.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:12:40.740 22:29:04 nvmf_tcp.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:12:40.740 22:29:04 nvmf_tcp.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:12:40.740 22:29:04 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:12:40.740 22:29:04 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:12:40.740 22:29:04 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:12:40.740 22:29:04 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:40.740 22:29:04 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:40.740 22:29:04 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:12:40.740 22:29:04 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:40.740 22:29:04 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=781 00:12:40.740 22:29:04 nvmf_tcp.nvmf_host_management -- target/host_management.sh@58 -- # '[' 781 -ge 100 ']' 00:12:40.740 22:29:04 nvmf_tcp.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:12:40.740 22:29:04 nvmf_tcp.nvmf_host_management -- target/host_management.sh@60 -- # break 00:12:40.740 22:29:04 nvmf_tcp.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:12:40.740 22:29:04 nvmf_tcp.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:12:40.740 22:29:04 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:40.740 22:29:04 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:40.740 [2024-07-15 22:29:04.560214] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe43460 is same with the state(5) to be set 00:12:40.740 [2024-07-15 22:29:04.560261] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe43460 is same with the state(5) to be set 00:12:40.740 [2024-07-15 22:29:04.560269] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe43460 is same with the state(5) to be set 
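The tcp.c:1621 recv-state notices that begin just above coincide with the point of this test: once bdevperf has done real I/O, the script revokes the host's access to the subsystem while the connection is live, exercising the disconnect path. The gate it uses reduces to a polling loop like this (rpc.py and jq standing in for the rpc_cmd wrapper in the trace; the retry sleep is assumed, since the trace breaks out on its first pass):

    # waitforio: up to 10 checks for at least 100 completed reads (the trace reads 781)
    for ((i = 10; i > 0; i--)); do
        reads=$(rpc.py -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 \
                | jq -r '.bdevs[0].num_read_ops')
        [ "$reads" -ge 100 ] && break
        sleep 1
    done
    # then pull the host out from under the live connection
    rpc.py nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0

The repeated notices below track the qpair's receive-state machine while this plays out.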
00:12:40.741 22:29:04 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:12:40.741 22:29:04 nvmf_tcp.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
00:12:40.741 22:29:04 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable
00:12:40.741 22:29:04 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:12:40.741 22:29:04 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:12:40.741 22:29:04 nvmf_tcp.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1
[2024-07-15 22:29:04.575666] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
[2024-07-15 22:29:04.575700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[log collapsed: the same ASYNC EVENT REQUEST / ABORTED - SQ DELETION pair repeats for cid:1, cid:2 and cid:3 at 22:29:04.575709-575747]
[2024-07-15 22:29:04.575758] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11b2980 is same with the state(5) to be set
[2024-07-15 22:29:04.575838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:114432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
[2024-07-15 22:29:04.575849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[log collapsed: matching command/completion pairs follow for the remaining 63 in-flight I/Os of the 64-deep queue - READ cid:63 lba:114560, then WRITE cid:0 through cid:61 covering lba:114688-122496 in len:128 steps, every one completed ABORTED - SQ DELETION (00/08), at 22:29:04.575864-576998]
[2024-07-15 22:29:04.577059] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x15c3b20 was disconnected and freed. reset controller.
[2024-07-15 22:29:04.577959] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller
00:12:40.743 task offset: 114432 on job bdev=Nvme0n1 fails
00:12:40.743
00:12:40.743                                                             Latency(us)
00:12:40.743 Device Information                                          : runtime(s)     IOPS    MiB/s   Fail/s    TO/s    Average        min        max
00:12:40.743 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:12:40.743 Job: Nvme0n1 ended in about 0.57 seconds with error
00:12:40.743   Verification LBA range: start 0x0 length 0x400
00:12:40.743   Nvme0n1                                           :       0.57  1572.57    98.29   112.58     0.00   37212.24    1652.65   33508.84
00:12:40.743 ===================================================================================================================
00:12:40.743 Total                                               :             1572.57    98.29   112.58     0.00   37212.24    1652.65   33508.84
[2024-07-15 22:29:04.579537] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
[2024-07-15 22:29:04.579552] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11b2980 (9): Bad file descriptor
[2024-07-15 22:29:04.588643] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
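The waitforio trace above is the gate for this fault injection: host_management.sh polls bdevperf's RPC socket until the target bdev shows at least 100 completed reads (read_io_count=781 here), and only then removes the host with nvmf_subsystem_remove_host, which is what triggers the SQ-deletion abort storm. A minimal sketch of that polling loop, reconstructed from the trace - the retry budget of 10 comes from the '(( i = 10 ))' line, while the one-second pause between polls is an assumption (the trace breaks out on its first pass, so the real pacing is not visible here):

  waitforio() {
      local rpc_sock=$1 bdev_name=$2
      [ -z "$rpc_sock" ] && return 1     # host_management.sh@45 in the trace
      [ -z "$bdev_name" ] && return 1    # host_management.sh@49
      local ret=1 i read_io_count
      for ((i = 10; i != 0; i--)); do
          # Ask the running bdevperf app for per-bdev statistics and pull
          # out the completed-read counter (host_management.sh@55).
          read_io_count=$(rpc_cmd -s "$rpc_sock" bdev_get_iostat -b "$bdev_name" \
                          | jq -r '.bdevs[0].num_read_ops')
          if [ "$read_io_count" -ge 100 ]; then   # threshold from host_management.sh@58
              ret=0
              break
          fi
          sleep 1    # assumption: delay between polls is not visible in this trace
      done
      return $ret
  }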
00:12:41.681 22:29:05 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 4136414
00:12:41.681 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (4136414) - No such process
00:12:41.681 22:29:05 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # true
00:12:41.681 22:29:05 nvmf_tcp.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004
00:12:41.681 22:29:05 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1
00:12:41.681 22:29:05 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0
00:12:41.681 22:29:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=()
00:12:41.681 22:29:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config
00:12:41.681 22:29:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}"
00:12:41.681 22:29:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF
00:12:41.681 {
00:12:41.681 "params": {
00:12:41.681 "name": "Nvme$subsystem",
00:12:41.681 "trtype": "$TEST_TRANSPORT",
00:12:41.681 "traddr": "$NVMF_FIRST_TARGET_IP",
00:12:41.681 "adrfam": "ipv4",
00:12:41.681 "trsvcid": "$NVMF_PORT",
00:12:41.681 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:12:41.681 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:12:41.681 "hdgst": ${hdgst:-false},
00:12:41.681 "ddgst": ${ddgst:-false}
00:12:41.681 },
00:12:41.681 "method": "bdev_nvme_attach_controller"
00:12:41.681 }
00:12:41.681 EOF
00:12:41.681 )")
00:12:41.681 22:29:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat
00:12:41.681 22:29:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq .
00:12:41.681 22:29:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=,
00:12:41.681 22:29:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{
00:12:41.681 "params": {
00:12:41.681 "name": "Nvme0",
00:12:41.681 "trtype": "tcp",
00:12:41.681 "traddr": "10.0.0.2",
00:12:41.681 "adrfam": "ipv4",
00:12:41.681 "trsvcid": "4420",
00:12:41.681 "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:12:41.681 "hostnqn": "nqn.2016-06.io.spdk:host0",
00:12:41.681 "hdgst": false,
00:12:41.681 "ddgst": false
00:12:41.681 },
00:12:41.681 "method": "bdev_nvme_attach_controller"
00:12:41.681 }'
[2024-07-15 22:29:05.625765] Starting SPDK v24.09-pre git sha1 f8598a71f / DPDK 24.03.0 initialization...
[2024-07-15 22:29:05.625813] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4136668 ]
EAL: No free 2048 kB hugepages reported on node 1
[2024-07-15 22:29:05.679399] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
[2024-07-15 22:29:05.749946] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
Running I/O for 1 seconds...
00:12:43.593
00:12:43.593                                                             Latency(us)
00:12:43.593 Device Information                                          : runtime(s)     IOPS    MiB/s   Fail/s    TO/s    Average        min        max
00:12:43.593 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:12:43.593   Verification LBA range: start 0x0 length 0x400
00:12:43.593   Nvme0n1                                           :       1.01  1793.32   112.08     0.00     0.00   35099.65    1495.93   33736.79
00:12:43.593 ===================================================================================================================
00:12:43.593 Total                                               :             1793.32   112.08     0.00     0.00   35099.65    1495.93   33736.79
00:12:43.593 22:29:07 nvmf_tcp.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget
00:12:43.593 22:29:07 nvmf_tcp.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state
00:12:43.593 22:29:07 nvmf_tcp.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:12:43.593 22:29:07 nvmf_tcp.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:12:43.593 22:29:07 nvmf_tcp.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini
00:12:43.593 22:29:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@488 -- # nvmfcleanup
00:12:43.593 22:29:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@117 -- # sync
00:12:43.593 22:29:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:12:43.593 22:29:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@120 -- # set +e
00:12:43.593 22:29:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@121 -- # for i in {1..20}
00:12:43.593 22:29:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:12:43.593 rmmod nvme_tcp
00:12:43.593 rmmod nvme_fabrics
00:12:43.593 rmmod nvme_keyring
00:12:43.593 22:29:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:12:43.593 22:29:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@124 -- # set -e
00:12:43.593 22:29:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@125 -- # return 0
00:12:43.593 22:29:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@489 -- # '[' -n 4136147 ']'
00:12:43.593 22:29:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@490 -- # killprocess 4136147
00:12:43.593 22:29:07 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@948 -- # '[' -z 4136147 ']'
00:12:43.593 22:29:07 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@952 -- # kill -0 4136147
00:12:43.593 22:29:07 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@953 -- # uname
00:12:43.593 22:29:07 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:12:43.593 22:29:07 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4136147
00:12:43.593 22:29:07 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:12:43.593 22:29:07 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:12:43.593 22:29:07 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4136147'
00:12:43.593 killing process with pid 4136147
00:12:43.593 22:29:07 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@967 -- # kill 4136147
00:12:43.593 22:29:07 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@972 -- # wait 4136147
00:12:43.853 [2024-07-15 22:29:07.591854] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2
00:12:43.853 22:29:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:12:43.853 22:29:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:12:43.853 22:29:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:12:43.853 22:29:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:12:43.853 22:29:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@278 -- # remove_spdk_ns
00:12:43.853 22:29:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:12:43.853 22:29:07 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:12:43.853 22:29:07 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:12:45.756 22:29:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:12:45.756 22:29:09 nvmf_tcp.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT
00:12:45.756
00:12:45.756 real 0m12.479s
00:12:45.756 user 0m23.136s
00:12:45.756 sys 0m5.172s
00:12:45.756 22:29:09 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1124 -- # xtrace_disable
00:12:45.756 22:29:09 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:12:45.756 ************************************
00:12:45.756 END TEST nvmf_host_management
00:12:45.756 ************************************
00:12:45.756 22:29:09 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0
00:12:45.756 22:29:09 nvmf_tcp -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp
00:12:45.756 22:29:09 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']'
00:12:45.756 22:29:09 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable
00:12:45.756 22:29:09 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:12:46.015 ************************************
00:12:46.015 START TEST nvmf_lvol
00:12:46.015 ************************************
00:12:46.015 22:29:09 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp
00:12:46.015 * Looking for test storage...
00:12:46.015 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:46.015 22:29:09 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:46.015 22:29:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:12:46.015 22:29:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:46.015 22:29:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:46.015 22:29:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:46.015 22:29:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:46.015 22:29:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:46.015 22:29:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:46.015 22:29:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:46.015 22:29:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:46.015 22:29:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:46.015 22:29:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:46.015 22:29:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:12:46.015 22:29:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:12:46.015 22:29:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:46.015 22:29:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:46.015 22:29:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:46.015 22:29:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:46.015 22:29:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:46.015 22:29:09 nvmf_tcp.nvmf_lvol -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:46.015 22:29:09 nvmf_tcp.nvmf_lvol -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:46.015 22:29:09 nvmf_tcp.nvmf_lvol -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:46.015 22:29:09 nvmf_tcp.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:46.015 22:29:09 nvmf_tcp.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:46.015 22:29:09 
nvmf_tcp.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:46.015 22:29:09 nvmf_tcp.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:12:46.015 22:29:09 nvmf_tcp.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:46.015 22:29:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@47 -- # : 0 00:12:46.015 22:29:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:46.015 22:29:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:46.015 22:29:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:46.015 22:29:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:46.015 22:29:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:46.015 22:29:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:46.015 22:29:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:46.015 22:29:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:46.015 22:29:09 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:46.015 22:29:09 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:46.015 22:29:09 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:12:46.015 22:29:09 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:12:46.015 22:29:09 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:46.015 22:29:09 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:12:46.015 22:29:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:46.015 22:29:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:46.015 22:29:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:46.015 22:29:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:46.015 22:29:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:46.015 22:29:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:46.015 22:29:09 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:46.015 22:29:09 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:46.015 22:29:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:46.015 22:29:09 nvmf_tcp.nvmf_lvol -- 
nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:46.015 22:29:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@285 -- # xtrace_disable 00:12:46.015 22:29:09 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:12:51.288 22:29:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:51.288 22:29:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@291 -- # pci_devs=() 00:12:51.288 22:29:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:51.288 22:29:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:51.288 22:29:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:51.288 22:29:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:51.288 22:29:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:51.288 22:29:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@295 -- # net_devs=() 00:12:51.288 22:29:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:51.288 22:29:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@296 -- # e810=() 00:12:51.288 22:29:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@296 -- # local -ga e810 00:12:51.288 22:29:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@297 -- # x722=() 00:12:51.288 22:29:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@297 -- # local -ga x722 00:12:51.288 22:29:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@298 -- # mlx=() 00:12:51.288 22:29:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@298 -- # local -ga mlx 00:12:51.288 22:29:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:51.288 22:29:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:51.288 22:29:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:51.288 22:29:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:51.288 22:29:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:51.288 22:29:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:51.288 22:29:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:51.288 22:29:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:51.288 22:29:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:51.288 22:29:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:51.288 22:29:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:51.288 22:29:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:51.288 22:29:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:51.288 22:29:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:51.288 22:29:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:51.288 22:29:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:51.288 22:29:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:51.288 22:29:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:51.288 22:29:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:12:51.288 Found 0000:86:00.0 (0x8086 - 0x159b) 00:12:51.288 22:29:14 nvmf_tcp.nvmf_lvol -- 
nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:51.288 22:29:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:51.288 22:29:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:51.288 22:29:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:51.289 22:29:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:51.289 22:29:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:51.289 22:29:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:12:51.289 Found 0000:86:00.1 (0x8086 - 0x159b) 00:12:51.289 22:29:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:51.289 22:29:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:51.289 22:29:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:51.289 22:29:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:51.289 22:29:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:51.289 22:29:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:51.289 22:29:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:51.289 22:29:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:51.289 22:29:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:51.289 22:29:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:51.289 22:29:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:51.289 22:29:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:51.289 22:29:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:51.289 22:29:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:51.289 22:29:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:51.289 22:29:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:12:51.289 Found net devices under 0000:86:00.0: cvl_0_0 00:12:51.289 22:29:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:51.289 22:29:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:51.289 22:29:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:51.289 22:29:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:51.289 22:29:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:51.289 22:29:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:51.289 22:29:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:51.289 22:29:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:51.289 22:29:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:12:51.289 Found net devices under 0000:86:00.1: cvl_0_1 00:12:51.289 22:29:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:51.289 22:29:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:51.289 22:29:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # is_hw=yes 00:12:51.289 22:29:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:51.289 
22:29:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:51.289 22:29:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:51.289 22:29:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:51.289 22:29:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:51.289 22:29:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:51.289 22:29:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:51.289 22:29:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:51.289 22:29:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:51.289 22:29:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:51.289 22:29:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:51.289 22:29:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:51.289 22:29:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:51.289 22:29:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:51.289 22:29:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:51.289 22:29:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:51.289 22:29:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:51.289 22:29:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:51.289 22:29:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:51.289 22:29:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:51.289 22:29:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:51.289 22:29:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:51.289 22:29:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:51.289 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:51.289 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.180 ms 00:12:51.289 00:12:51.289 --- 10.0.0.2 ping statistics --- 00:12:51.289 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:51.289 rtt min/avg/max/mdev = 0.180/0.180/0.180/0.000 ms 00:12:51.289 22:29:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:51.289 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:51.289 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.277 ms 00:12:51.289 00:12:51.289 --- 10.0.0.1 ping statistics --- 00:12:51.289 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:51.289 rtt min/avg/max/mdev = 0.277/0.277/0.277/0.000 ms 00:12:51.289 22:29:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:51.289 22:29:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@422 -- # return 0 00:12:51.289 22:29:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:51.289 22:29:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:51.289 22:29:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:51.289 22:29:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:51.289 22:29:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:51.289 22:29:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:51.289 22:29:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:51.289 22:29:14 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:12:51.289 22:29:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:51.289 22:29:14 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:51.289 22:29:14 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:12:51.289 22:29:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@481 -- # nvmfpid=4140420 00:12:51.289 22:29:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@482 -- # waitforlisten 4140420 00:12:51.289 22:29:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:12:51.289 22:29:14 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@829 -- # '[' -z 4140420 ']' 00:12:51.289 22:29:14 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:51.289 22:29:14 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:51.289 22:29:14 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:51.289 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:51.289 22:29:14 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:51.289 22:29:14 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:12:51.289 [2024-07-15 22:29:14.993815] Starting SPDK v24.09-pre git sha1 f8598a71f / DPDK 24.03.0 initialization... 00:12:51.289 [2024-07-15 22:29:14.993858] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:51.289 EAL: No free 2048 kB hugepages reported on node 1 00:12:51.289 [2024-07-15 22:29:15.050584] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:51.289 [2024-07-15 22:29:15.131529] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:51.289 [2024-07-15 22:29:15.131564] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:12:51.289 [2024-07-15 22:29:15.131572] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:51.289 [2024-07-15 22:29:15.131580] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:51.289 [2024-07-15 22:29:15.131586] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:51.289 [2024-07-15 22:29:15.131637] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:51.289 [2024-07-15 22:29:15.131653] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:51.289 [2024-07-15 22:29:15.131655] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:51.858 22:29:15 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:51.858 22:29:15 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@862 -- # return 0 00:12:51.858 22:29:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:51.858 22:29:15 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:51.858 22:29:15 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:12:51.858 22:29:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:51.858 22:29:15 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:12:52.117 [2024-07-15 22:29:15.988331] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:52.117 22:29:16 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:52.376 22:29:16 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:12:52.376 22:29:16 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:52.636 22:29:16 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:12:52.636 22:29:16 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:12:52.636 22:29:16 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:12:52.895 22:29:16 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=4448805f-d91e-4b59-939a-7ca4698fdfd8 00:12:52.895 22:29:16 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 4448805f-d91e-4b59-939a-7ca4698fdfd8 lvol 20 00:12:53.153 22:29:16 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=c15ea517-4658-40b6-89eb-8e7662ad4a22 00:12:53.153 22:29:16 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:12:53.412 22:29:17 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 c15ea517-4658-40b6-89eb-8e7662ad4a22 00:12:53.412 22:29:17 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 
00:12:53.671 [2024-07-15 22:29:17.497243] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:53.671 22:29:17 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:53.930 22:29:17 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=4140912 00:12:53.930 22:29:17 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:12:53.930 22:29:17 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:12:53.930 EAL: No free 2048 kB hugepages reported on node 1 00:12:54.866 22:29:18 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot c15ea517-4658-40b6-89eb-8e7662ad4a22 MY_SNAPSHOT 00:12:55.124 22:29:18 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=12688f08-3b4b-4332-8836-5ed301916803 00:12:55.124 22:29:18 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize c15ea517-4658-40b6-89eb-8e7662ad4a22 30 00:12:55.382 22:29:19 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 12688f08-3b4b-4332-8836-5ed301916803 MY_CLONE 00:12:55.641 22:29:19 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=166fee43-c4dc-4a04-b789-ccd81d82c3d8 00:12:55.641 22:29:19 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 166fee43-c4dc-4a04-b789-ccd81d82c3d8 00:12:56.209 22:29:19 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 4140912 00:13:04.374 Initializing NVMe Controllers 00:13:04.374 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:13:04.374 Controller IO queue size 128, less than required. 00:13:04.374 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:13:04.374 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:13:04.374 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:13:04.374 Initialization complete. Launching workers. 
00:13:04.374 ======================================================== 00:13:04.374 Latency(us) 00:13:04.374 Device Information : IOPS MiB/s Average min max 00:13:04.374 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 12447.74 48.62 10282.92 1424.57 54054.52 00:13:04.374 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 12278.94 47.96 10426.84 3663.24 52646.62 00:13:04.374 ======================================================== 00:13:04.374 Total : 24726.69 96.59 10354.39 1424.57 54054.52 00:13:04.374 00:13:04.374 22:29:28 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:13:04.374 22:29:28 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete c15ea517-4658-40b6-89eb-8e7662ad4a22 00:13:04.631 22:29:28 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 4448805f-d91e-4b59-939a-7ca4698fdfd8 00:13:04.889 22:29:28 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:13:04.889 22:29:28 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:13:04.889 22:29:28 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:13:04.889 22:29:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:04.889 22:29:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@117 -- # sync 00:13:04.889 22:29:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:04.889 22:29:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@120 -- # set +e 00:13:04.889 22:29:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:04.889 22:29:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:04.889 rmmod nvme_tcp 00:13:04.889 rmmod nvme_fabrics 00:13:04.889 rmmod nvme_keyring 00:13:04.889 22:29:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:04.889 22:29:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@124 -- # set -e 00:13:04.889 22:29:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@125 -- # return 0 00:13:04.889 22:29:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@489 -- # '[' -n 4140420 ']' 00:13:04.889 22:29:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@490 -- # killprocess 4140420 00:13:04.889 22:29:28 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@948 -- # '[' -z 4140420 ']' 00:13:04.889 22:29:28 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@952 -- # kill -0 4140420 00:13:04.889 22:29:28 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@953 -- # uname 00:13:04.889 22:29:28 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:04.889 22:29:28 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4140420 00:13:04.889 22:29:28 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:04.889 22:29:28 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:04.889 22:29:28 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4140420' 00:13:04.889 killing process with pid 4140420 00:13:04.889 22:29:28 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@967 -- # kill 4140420 00:13:04.889 22:29:28 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@972 -- # wait 4140420 00:13:05.148 22:29:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:05.148 
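[Annotation] The nvmf_lvol flow above reduces to the RPC sequence sketched below. This is a condensed sketch, not the test script itself: $SPDK is assumed to point at an SPDK checkout, the long Jenkins paths are shortened, and the lvstore/lvol UUIDs are captured into variables instead of the literal values from this run.

  rpc="$SPDK/scripts/rpc.py"
  ip netns exec cvl_0_0_ns_spdk "$SPDK/build/bin/nvmf_tgt" -m 0x7 &
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc bdev_malloc_create 64 512                  # -> Malloc0
  $rpc bdev_malloc_create 64 512                  # -> Malloc1
  $rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
  lvs=$($rpc bdev_lvol_create_lvstore raid0 lvs)
  lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 20)
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  # while spdk_nvme_perf runs randwrite against the namespace for 10s...
  "$SPDK/build/bin/spdk_nvme_perf" -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
      -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 &
  # ...exercise lvol operations against the live volume:
  snap=$($rpc bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)
  $rpc bdev_lvol_resize "$lvol" 30
  clone=$($rpc bdev_lvol_clone "$snap" MY_CLONE)
  $rpc bdev_lvol_inflate "$clone"
  wait                                            # teardown after perf completes
  $rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
  $rpc bdev_lvol_delete "$lvol"
  $rpc bdev_lvol_delete_lvstore -u "$lvs"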
22:29:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:05.148 22:29:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:05.148 22:29:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:05.148 22:29:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:05.148 22:29:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:05.148 22:29:29 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:05.148 22:29:29 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:07.681 22:29:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:07.681 00:13:07.681 real 0m21.341s 00:13:07.681 user 1m4.038s 00:13:07.681 sys 0m6.488s 00:13:07.681 22:29:31 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:07.681 22:29:31 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:13:07.681 ************************************ 00:13:07.681 END TEST nvmf_lvol 00:13:07.681 ************************************ 00:13:07.681 22:29:31 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:13:07.681 22:29:31 nvmf_tcp -- nvmf/nvmf.sh@49 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:13:07.681 22:29:31 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:07.681 22:29:31 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:07.681 22:29:31 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:07.681 ************************************ 00:13:07.681 START TEST nvmf_lvs_grow 00:13:07.681 ************************************ 00:13:07.681 22:29:31 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:13:07.681 * Looking for test storage... 
00:13:07.681 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:07.681 22:29:31 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:07.681 22:29:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:13:07.681 22:29:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:07.681 22:29:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:07.681 22:29:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:07.681 22:29:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:07.681 22:29:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:07.681 22:29:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:07.681 22:29:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:07.681 22:29:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:07.681 22:29:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:07.681 22:29:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:07.681 22:29:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:13:07.681 22:29:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:13:07.681 22:29:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:07.681 22:29:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:07.681 22:29:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:07.681 22:29:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:07.681 22:29:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:07.681 22:29:31 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:07.681 22:29:31 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:07.681 22:29:31 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:07.681 22:29:31 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:07.681 22:29:31 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:07.681 22:29:31 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:07.681 22:29:31 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:13:07.681 22:29:31 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:07.681 22:29:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@47 -- # : 0 00:13:07.681 22:29:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:07.681 22:29:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:07.681 22:29:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:07.681 22:29:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:07.681 22:29:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:07.681 22:29:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:07.681 22:29:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:07.681 22:29:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:07.681 22:29:31 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:07.681 22:29:31 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:13:07.681 22:29:31 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:13:07.681 22:29:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:07.681 22:29:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:07.681 22:29:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:07.681 22:29:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:07.681 22:29:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:07.681 22:29:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:13:07.681 22:29:31 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:07.681 22:29:31 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:07.681 22:29:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:07.681 22:29:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:07.681 22:29:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@285 -- # xtrace_disable 00:13:07.681 22:29:31 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:13:12.956 22:29:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:12.956 22:29:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@291 -- # pci_devs=() 00:13:12.956 22:29:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:12.956 22:29:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:12.956 22:29:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:12.956 22:29:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:12.956 22:29:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:12.956 22:29:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@295 -- # net_devs=() 00:13:12.956 22:29:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:12.956 22:29:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@296 -- # e810=() 00:13:12.956 22:29:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@296 -- # local -ga e810 00:13:12.956 22:29:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@297 -- # x722=() 00:13:12.956 22:29:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@297 -- # local -ga x722 00:13:12.957 22:29:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@298 -- # mlx=() 00:13:12.957 22:29:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@298 -- # local -ga mlx 00:13:12.957 22:29:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:12.957 22:29:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:12.957 22:29:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:12.957 22:29:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:12.957 22:29:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:12.957 22:29:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:12.957 22:29:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:12.957 22:29:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:12.957 22:29:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:12.957 22:29:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:12.957 22:29:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:12.957 22:29:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:12.957 22:29:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:12.957 22:29:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:12.957 22:29:35 nvmf_tcp.nvmf_lvs_grow -- 
nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:12.957 22:29:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:12.957 22:29:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:12.957 22:29:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:12.957 22:29:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:13:12.957 Found 0000:86:00.0 (0x8086 - 0x159b) 00:13:12.957 22:29:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:12.957 22:29:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:12.957 22:29:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:12.957 22:29:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:12.957 22:29:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:12.957 22:29:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:12.957 22:29:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:13:12.957 Found 0000:86:00.1 (0x8086 - 0x159b) 00:13:12.957 22:29:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:12.957 22:29:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:12.957 22:29:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:12.957 22:29:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:12.957 22:29:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:12.957 22:29:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:12.957 22:29:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:12.957 22:29:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:12.957 22:29:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:12.957 22:29:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:12.957 22:29:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:12.957 22:29:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:12.957 22:29:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:12.957 22:29:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:12.957 22:29:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:12.957 22:29:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:13:12.957 Found net devices under 0000:86:00.0: cvl_0_0 00:13:12.957 22:29:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:12.957 22:29:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:12.957 22:29:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:12.957 22:29:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:12.957 22:29:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:12.957 22:29:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:12.957 22:29:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 
0 )) 00:13:12.957 22:29:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:12.957 22:29:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:13:12.957 Found net devices under 0000:86:00.1: cvl_0_1 00:13:12.957 22:29:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:12.957 22:29:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:12.957 22:29:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # is_hw=yes 00:13:12.957 22:29:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:12.957 22:29:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:12.957 22:29:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:12.957 22:29:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:12.957 22:29:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:12.957 22:29:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:12.957 22:29:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:12.957 22:29:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:12.957 22:29:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:12.957 22:29:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:12.957 22:29:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:12.957 22:29:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:12.957 22:29:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:12.957 22:29:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:12.957 22:29:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:12.957 22:29:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:12.957 22:29:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:12.957 22:29:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:12.957 22:29:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:12.957 22:29:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:12.957 22:29:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:12.957 22:29:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:12.957 22:29:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:12.957 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:12.957 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.320 ms 00:13:12.957 00:13:12.957 --- 10.0.0.2 ping statistics --- 00:13:12.957 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:12.957 rtt min/avg/max/mdev = 0.320/0.320/0.320/0.000 ms 00:13:12.957 22:29:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:12.957 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:12.957 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.261 ms 00:13:12.957 00:13:12.957 --- 10.0.0.1 ping statistics --- 00:13:12.957 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:12.957 rtt min/avg/max/mdev = 0.261/0.261/0.261/0.000 ms 00:13:12.957 22:29:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:12.957 22:29:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@422 -- # return 0 00:13:12.957 22:29:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:12.957 22:29:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:12.957 22:29:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:12.957 22:29:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:12.957 22:29:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:12.957 22:29:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:12.957 22:29:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:12.957 22:29:36 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:13:12.957 22:29:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:12.957 22:29:36 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:12.957 22:29:36 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:13:12.957 22:29:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@481 -- # nvmfpid=4146050 00:13:12.957 22:29:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@482 -- # waitforlisten 4146050 00:13:12.957 22:29:36 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@829 -- # '[' -z 4146050 ']' 00:13:12.957 22:29:36 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:12.957 22:29:36 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:12.957 22:29:36 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:12.957 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:12.957 22:29:36 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:12.957 22:29:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:13:12.957 22:29:36 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:13:12.957 [2024-07-15 22:29:36.288744] Starting SPDK v24.09-pre git sha1 f8598a71f / DPDK 24.03.0 initialization... 00:13:12.957 [2024-07-15 22:29:36.288790] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:12.957 EAL: No free 2048 kB hugepages reported on node 1 00:13:12.957 [2024-07-15 22:29:36.345908] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:12.957 [2024-07-15 22:29:36.425268] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:12.957 [2024-07-15 22:29:36.425302] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:13:12.957 [2024-07-15 22:29:36.425309] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:12.957 [2024-07-15 22:29:36.425315] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:12.957 [2024-07-15 22:29:36.425321] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:12.957 [2024-07-15 22:29:36.425338] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:13.216 22:29:37 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:13.216 22:29:37 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@862 -- # return 0 00:13:13.216 22:29:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:13.216 22:29:37 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:13.216 22:29:37 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:13:13.216 22:29:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:13.216 22:29:37 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:13:13.474 [2024-07-15 22:29:37.281007] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:13.474 22:29:37 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:13:13.474 22:29:37 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:13:13.474 22:29:37 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:13.474 22:29:37 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:13:13.474 ************************************ 00:13:13.474 START TEST lvs_grow_clean 00:13:13.474 ************************************ 00:13:13.474 22:29:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1123 -- # lvs_grow 00:13:13.474 22:29:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:13:13.474 22:29:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:13:13.474 22:29:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:13:13.474 22:29:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:13:13.474 22:29:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:13:13.474 22:29:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:13:13.474 22:29:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:13:13.474 22:29:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:13:13.474 22:29:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:13:13.732 22:29:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # 
aio_bdev=aio_bdev 00:13:13.732 22:29:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:13:13.990 22:29:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=4919339e-3ad1-43ec-80a9-23bb438ceecc 00:13:13.990 22:29:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4919339e-3ad1-43ec-80a9-23bb438ceecc 00:13:13.990 22:29:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:13:13.990 22:29:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:13:13.990 22:29:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:13:13.991 22:29:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 4919339e-3ad1-43ec-80a9-23bb438ceecc lvol 150 00:13:14.249 22:29:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=c3c8f45e-e9ad-4474-b0ff-13dff8103050 00:13:14.249 22:29:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:13:14.249 22:29:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:13:14.507 [2024-07-15 22:29:38.220799] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:13:14.507 [2024-07-15 22:29:38.220845] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:13:14.507 true 00:13:14.507 22:29:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4919339e-3ad1-43ec-80a9-23bb438ceecc 00:13:14.507 22:29:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:13:14.507 22:29:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:13:14.507 22:29:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:13:14.764 22:29:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 c3c8f45e-e9ad-4474-b0ff-13dff8103050 00:13:15.022 22:29:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:13:15.022 [2024-07-15 22:29:38.902845] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:15.022 22:29:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:15.280 22:29:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=4146557 00:13:15.280 22:29:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:13:15.280 22:29:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 4146557 /var/tmp/bdevperf.sock 00:13:15.280 22:29:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@829 -- # '[' -z 4146557 ']' 00:13:15.280 22:29:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:15.280 22:29:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:15.280 22:29:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:13:15.280 22:29:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:15.280 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:15.280 22:29:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:15.280 22:29:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:13:15.280 [2024-07-15 22:29:39.131552] Starting SPDK v24.09-pre git sha1 f8598a71f / DPDK 24.03.0 initialization... 
00:13:15.280 [2024-07-15 22:29:39.131601] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4146557 ] 00:13:15.280 EAL: No free 2048 kB hugepages reported on node 1 00:13:15.280 [2024-07-15 22:29:39.185400] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:15.538 [2024-07-15 22:29:39.264650] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:16.105 22:29:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:16.105 22:29:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@862 -- # return 0 00:13:16.105 22:29:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:13:16.363 Nvme0n1 00:13:16.363 22:29:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:13:16.622 [ 00:13:16.622 { 00:13:16.622 "name": "Nvme0n1", 00:13:16.622 "aliases": [ 00:13:16.622 "c3c8f45e-e9ad-4474-b0ff-13dff8103050" 00:13:16.622 ], 00:13:16.622 "product_name": "NVMe disk", 00:13:16.622 "block_size": 4096, 00:13:16.622 "num_blocks": 38912, 00:13:16.622 "uuid": "c3c8f45e-e9ad-4474-b0ff-13dff8103050", 00:13:16.622 "assigned_rate_limits": { 00:13:16.622 "rw_ios_per_sec": 0, 00:13:16.622 "rw_mbytes_per_sec": 0, 00:13:16.622 "r_mbytes_per_sec": 0, 00:13:16.622 "w_mbytes_per_sec": 0 00:13:16.622 }, 00:13:16.622 "claimed": false, 00:13:16.622 "zoned": false, 00:13:16.622 "supported_io_types": { 00:13:16.622 "read": true, 00:13:16.622 "write": true, 00:13:16.622 "unmap": true, 00:13:16.622 "flush": true, 00:13:16.622 "reset": true, 00:13:16.622 "nvme_admin": true, 00:13:16.622 "nvme_io": true, 00:13:16.622 "nvme_io_md": false, 00:13:16.622 "write_zeroes": true, 00:13:16.622 "zcopy": false, 00:13:16.622 "get_zone_info": false, 00:13:16.622 "zone_management": false, 00:13:16.622 "zone_append": false, 00:13:16.622 "compare": true, 00:13:16.622 "compare_and_write": true, 00:13:16.622 "abort": true, 00:13:16.622 "seek_hole": false, 00:13:16.622 "seek_data": false, 00:13:16.622 "copy": true, 00:13:16.622 "nvme_iov_md": false 00:13:16.622 }, 00:13:16.622 "memory_domains": [ 00:13:16.622 { 00:13:16.622 "dma_device_id": "system", 00:13:16.622 "dma_device_type": 1 00:13:16.622 } 00:13:16.622 ], 00:13:16.622 "driver_specific": { 00:13:16.622 "nvme": [ 00:13:16.622 { 00:13:16.622 "trid": { 00:13:16.622 "trtype": "TCP", 00:13:16.622 "adrfam": "IPv4", 00:13:16.622 "traddr": "10.0.0.2", 00:13:16.622 "trsvcid": "4420", 00:13:16.622 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:13:16.622 }, 00:13:16.622 "ctrlr_data": { 00:13:16.622 "cntlid": 1, 00:13:16.622 "vendor_id": "0x8086", 00:13:16.622 "model_number": "SPDK bdev Controller", 00:13:16.622 "serial_number": "SPDK0", 00:13:16.622 "firmware_revision": "24.09", 00:13:16.622 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:13:16.622 "oacs": { 00:13:16.622 "security": 0, 00:13:16.622 "format": 0, 00:13:16.622 "firmware": 0, 00:13:16.622 "ns_manage": 0 00:13:16.622 }, 00:13:16.622 "multi_ctrlr": true, 00:13:16.622 "ana_reporting": false 00:13:16.622 }, 
00:13:16.622 "vs": { 00:13:16.622 "nvme_version": "1.3" 00:13:16.622 }, 00:13:16.622 "ns_data": { 00:13:16.622 "id": 1, 00:13:16.622 "can_share": true 00:13:16.622 } 00:13:16.622 } 00:13:16.622 ], 00:13:16.622 "mp_policy": "active_passive" 00:13:16.622 } 00:13:16.622 } 00:13:16.622 ] 00:13:16.622 22:29:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=4146786 00:13:16.622 22:29:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:13:16.622 22:29:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:13:16.622 Running I/O for 10 seconds... 00:13:17.558 Latency(us) 00:13:17.558 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:17.558 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:17.558 Nvme0n1 : 1.00 21990.00 85.90 0.00 0.00 0.00 0.00 0.00 00:13:17.558 =================================================================================================================== 00:13:17.558 Total : 21990.00 85.90 0.00 0.00 0.00 0.00 0.00 00:13:17.558 00:13:18.494 22:29:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 4919339e-3ad1-43ec-80a9-23bb438ceecc 00:13:18.494 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:18.494 Nvme0n1 : 2.00 22155.00 86.54 0.00 0.00 0.00 0.00 0.00 00:13:18.494 =================================================================================================================== 00:13:18.495 Total : 22155.00 86.54 0.00 0.00 0.00 0.00 0.00 00:13:18.495 00:13:18.754 true 00:13:18.754 22:29:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4919339e-3ad1-43ec-80a9-23bb438ceecc 00:13:18.754 22:29:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:13:18.754 22:29:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:13:18.754 22:29:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:13:18.754 22:29:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 4146786 00:13:19.690 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:19.690 Nvme0n1 : 3.00 22215.33 86.78 0.00 0.00 0.00 0.00 0.00 00:13:19.690 =================================================================================================================== 00:13:19.690 Total : 22215.33 86.78 0.00 0.00 0.00 0.00 0.00 00:13:19.690 00:13:20.627 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:20.627 Nvme0n1 : 4.00 22279.50 87.03 0.00 0.00 0.00 0.00 0.00 00:13:20.627 =================================================================================================================== 00:13:20.627 Total : 22279.50 87.03 0.00 0.00 0.00 0.00 0.00 00:13:20.627 00:13:21.564 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:21.564 Nvme0n1 : 5.00 22322.80 87.20 0.00 0.00 0.00 0.00 0.00 00:13:21.564 =================================================================================================================== 00:13:21.564 
Total : 22322.80 87.20 0.00 0.00 0.00 0.00 0.00 00:13:21.564 00:13:22.500 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:22.500 Nvme0n1 : 6.00 22361.00 87.35 0.00 0.00 0.00 0.00 0.00 00:13:22.500 =================================================================================================================== 00:13:22.500 Total : 22361.00 87.35 0.00 0.00 0.00 0.00 0.00 00:13:22.500 00:13:23.908 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:23.908 Nvme0n1 : 7.00 22389.43 87.46 0.00 0.00 0.00 0.00 0.00 00:13:23.908 =================================================================================================================== 00:13:23.908 Total : 22389.43 87.46 0.00 0.00 0.00 0.00 0.00 00:13:23.908 00:13:24.845 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:24.845 Nvme0n1 : 8.00 22409.75 87.54 0.00 0.00 0.00 0.00 0.00 00:13:24.845 =================================================================================================================== 00:13:24.845 Total : 22409.75 87.54 0.00 0.00 0.00 0.00 0.00 00:13:24.845 00:13:25.783 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:25.783 Nvme0n1 : 9.00 22403.33 87.51 0.00 0.00 0.00 0.00 0.00 00:13:25.783 =================================================================================================================== 00:13:25.783 Total : 22403.33 87.51 0.00 0.00 0.00 0.00 0.00 00:13:25.783 00:13:26.722 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:26.723 Nvme0n1 : 10.00 22408.60 87.53 0.00 0.00 0.00 0.00 0.00 00:13:26.723 =================================================================================================================== 00:13:26.723 Total : 22408.60 87.53 0.00 0.00 0.00 0.00 0.00 00:13:26.723 00:13:26.723 00:13:26.723 Latency(us) 00:13:26.723 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:26.723 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:26.723 Nvme0n1 : 10.01 22408.19 87.53 0.00 0.00 5708.19 4388.06 15842.62 00:13:26.723 =================================================================================================================== 00:13:26.723 Total : 22408.19 87.53 0.00 0.00 5708.19 4388.06 15842.62 00:13:26.723 0 00:13:26.723 22:29:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 4146557 00:13:26.723 22:29:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@948 -- # '[' -z 4146557 ']' 00:13:26.723 22:29:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # kill -0 4146557 00:13:26.723 22:29:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # uname 00:13:26.723 22:29:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:26.723 22:29:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4146557 00:13:26.723 22:29:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:13:26.723 22:29:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:13:26.723 22:29:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4146557' 00:13:26.723 killing process with pid 4146557 00:13:26.723 22:29:50 
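[Annotation] The lvs_grow_clean steps above reduce to the sketch below. The backing-file path is shortened to /tmp/aio_bdev for illustration; the 4 MiB cluster size, the 150 MiB lvol, and the 49 -> 99 data-cluster counts match this run.

  rpc="$SPDK/scripts/rpc.py"                      # as in the earlier sketch
  truncate -s 200M /tmp/aio_bdev                  # backing file for the AIO bdev
  $rpc bdev_aio_create /tmp/aio_bdev aio_bdev 4096
  lvs=$($rpc bdev_lvol_create_lvstore --cluster-sz 4194304 \
          --md-pages-per-cluster-ratio 300 aio_bdev lvs)    # 49 data clusters
  lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 150)
  truncate -s 400M /tmp/aio_bdev                  # grow the backing file
  $rpc bdev_aio_rescan aio_bdev                   # bdev picks up the new size
  $rpc bdev_lvol_grow_lvstore -u "$lvs"           # lvstore grows: 49 -> 99 clusters
  $rpc bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'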
nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@967 -- # kill 4146557 00:13:26.723 Received shutdown signal, test time was about 10.000000 seconds 00:13:26.723 00:13:26.723 Latency(us) 00:13:26.723 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:26.723 =================================================================================================================== 00:13:26.723 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:26.723 22:29:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # wait 4146557 00:13:26.982 22:29:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:26.982 22:29:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:13:27.240 22:29:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:13:27.240 22:29:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4919339e-3ad1-43ec-80a9-23bb438ceecc 00:13:27.499 22:29:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:13:27.499 22:29:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:13:27.499 22:29:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:13:27.499 [2024-07-15 22:29:51.404764] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:13:27.499 22:29:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4919339e-3ad1-43ec-80a9-23bb438ceecc 00:13:27.499 22:29:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@648 -- # local es=0 00:13:27.499 22:29:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4919339e-3ad1-43ec-80a9-23bb438ceecc 00:13:27.499 22:29:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:27.499 22:29:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:27.499 22:29:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:27.499 22:29:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:27.499 22:29:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:27.499 22:29:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:27.499 22:29:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # 
arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:27.499 22:29:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:13:27.499 22:29:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4919339e-3ad1-43ec-80a9-23bb438ceecc 00:13:27.759 request: 00:13:27.759 { 00:13:27.759 "uuid": "4919339e-3ad1-43ec-80a9-23bb438ceecc", 00:13:27.759 "method": "bdev_lvol_get_lvstores", 00:13:27.759 "req_id": 1 00:13:27.759 } 00:13:27.759 Got JSON-RPC error response 00:13:27.759 response: 00:13:27.759 { 00:13:27.759 "code": -19, 00:13:27.759 "message": "No such device" 00:13:27.759 } 00:13:27.759 22:29:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # es=1 00:13:27.759 22:29:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:13:27.759 22:29:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:13:27.759 22:29:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:13:27.759 22:29:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:13:28.018 aio_bdev 00:13:28.018 22:29:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev c3c8f45e-e9ad-4474-b0ff-13dff8103050 00:13:28.018 22:29:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@897 -- # local bdev_name=c3c8f45e-e9ad-4474-b0ff-13dff8103050 00:13:28.018 22:29:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:13:28.018 22:29:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local i 00:13:28.018 22:29:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:13:28.018 22:29:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:13:28.018 22:29:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:13:28.018 22:29:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b c3c8f45e-e9ad-4474-b0ff-13dff8103050 -t 2000 00:13:28.277 [ 00:13:28.277 { 00:13:28.277 "name": "c3c8f45e-e9ad-4474-b0ff-13dff8103050", 00:13:28.277 "aliases": [ 00:13:28.277 "lvs/lvol" 00:13:28.277 ], 00:13:28.277 "product_name": "Logical Volume", 00:13:28.277 "block_size": 4096, 00:13:28.277 "num_blocks": 38912, 00:13:28.277 "uuid": "c3c8f45e-e9ad-4474-b0ff-13dff8103050", 00:13:28.277 "assigned_rate_limits": { 00:13:28.277 "rw_ios_per_sec": 0, 00:13:28.277 "rw_mbytes_per_sec": 0, 00:13:28.277 "r_mbytes_per_sec": 0, 00:13:28.277 "w_mbytes_per_sec": 0 00:13:28.277 }, 00:13:28.277 "claimed": false, 00:13:28.277 "zoned": false, 00:13:28.277 "supported_io_types": { 00:13:28.277 "read": true, 00:13:28.277 "write": true, 00:13:28.277 "unmap": true, 00:13:28.277 "flush": false, 00:13:28.277 "reset": true, 00:13:28.277 "nvme_admin": false, 00:13:28.277 "nvme_io": false, 00:13:28.277 
"nvme_io_md": false, 00:13:28.277 "write_zeroes": true, 00:13:28.277 "zcopy": false, 00:13:28.277 "get_zone_info": false, 00:13:28.277 "zone_management": false, 00:13:28.277 "zone_append": false, 00:13:28.277 "compare": false, 00:13:28.277 "compare_and_write": false, 00:13:28.277 "abort": false, 00:13:28.277 "seek_hole": true, 00:13:28.277 "seek_data": true, 00:13:28.277 "copy": false, 00:13:28.277 "nvme_iov_md": false 00:13:28.277 }, 00:13:28.277 "driver_specific": { 00:13:28.277 "lvol": { 00:13:28.277 "lvol_store_uuid": "4919339e-3ad1-43ec-80a9-23bb438ceecc", 00:13:28.277 "base_bdev": "aio_bdev", 00:13:28.277 "thin_provision": false, 00:13:28.277 "num_allocated_clusters": 38, 00:13:28.277 "snapshot": false, 00:13:28.277 "clone": false, 00:13:28.277 "esnap_clone": false 00:13:28.277 } 00:13:28.278 } 00:13:28.278 } 00:13:28.278 ] 00:13:28.278 22:29:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # return 0 00:13:28.278 22:29:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4919339e-3ad1-43ec-80a9-23bb438ceecc 00:13:28.278 22:29:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:13:28.536 22:29:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:13:28.536 22:29:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4919339e-3ad1-43ec-80a9-23bb438ceecc 00:13:28.536 22:29:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:13:28.536 22:29:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:13:28.536 22:29:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete c3c8f45e-e9ad-4474-b0ff-13dff8103050 00:13:28.795 22:29:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 4919339e-3ad1-43ec-80a9-23bb438ceecc 00:13:29.053 22:29:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:13:29.053 22:29:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:13:29.053 00:13:29.053 real 0m15.645s 00:13:29.053 user 0m15.320s 00:13:29.053 sys 0m1.420s 00:13:29.053 22:29:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:29.053 22:29:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:13:29.053 ************************************ 00:13:29.053 END TEST lvs_grow_clean 00:13:29.053 ************************************ 00:13:29.053 22:29:53 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1142 -- # return 0 00:13:29.053 22:29:53 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:13:29.053 22:29:53 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:29.053 22:29:53 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # xtrace_disable 
00:13:29.054 22:29:53 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:13:29.312 ************************************ 00:13:29.312 START TEST lvs_grow_dirty 00:13:29.312 ************************************ 00:13:29.312 22:29:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1123 -- # lvs_grow dirty 00:13:29.312 22:29:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:13:29.312 22:29:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:13:29.312 22:29:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:13:29.312 22:29:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:13:29.312 22:29:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:13:29.312 22:29:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:13:29.312 22:29:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:13:29.312 22:29:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:13:29.312 22:29:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:13:29.312 22:29:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:13:29.312 22:29:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:13:29.570 22:29:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=3fdf31e9-ae8c-4505-ba58-9b837680ca65 00:13:29.571 22:29:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3fdf31e9-ae8c-4505-ba58-9b837680ca65 00:13:29.571 22:29:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:13:29.829 22:29:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:13:29.829 22:29:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:13:29.829 22:29:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 3fdf31e9-ae8c-4505-ba58-9b837680ca65 lvol 150 00:13:29.829 22:29:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=cfe8df05-95ac-4af6-a88e-7c0267a5490f 00:13:29.829 22:29:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:13:29.829 22:29:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:13:30.088 
[2024-07-15 22:29:53.915916] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:13:30.088 [2024-07-15 22:29:53.915960] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:13:30.088 true 00:13:30.088 22:29:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:13:30.088 22:29:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3fdf31e9-ae8c-4505-ba58-9b837680ca65 00:13:30.347 22:29:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:13:30.347 22:29:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:13:30.347 22:29:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 cfe8df05-95ac-4af6-a88e-7c0267a5490f 00:13:30.606 22:29:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:13:30.863 [2024-07-15 22:29:54.577967] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:30.863 22:29:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:30.863 22:29:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=4149185 00:13:30.863 22:29:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:13:30.863 22:29:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:13:30.863 22:29:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 4149185 /var/tmp/bdevperf.sock 00:13:30.863 22:29:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@829 -- # '[' -z 4149185 ']' 00:13:30.863 22:29:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:30.863 22:29:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:30.863 22:29:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:30.863 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:13:30.863 22:29:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:30.863 22:29:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:13:30.863 [2024-07-15 22:29:54.811713] Starting SPDK v24.09-pre git sha1 f8598a71f / DPDK 24.03.0 initialization... 00:13:30.863 [2024-07-15 22:29:54.811763] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4149185 ] 00:13:30.863 EAL: No free 2048 kB hugepages reported on node 1 00:13:31.122 [2024-07-15 22:29:54.866150] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:31.122 [2024-07-15 22:29:54.945971] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:31.689 22:29:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:31.689 22:29:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # return 0 00:13:31.689 22:29:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:13:32.256 Nvme0n1 00:13:32.256 22:29:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:13:32.256 [ 00:13:32.256 { 00:13:32.256 "name": "Nvme0n1", 00:13:32.256 "aliases": [ 00:13:32.256 "cfe8df05-95ac-4af6-a88e-7c0267a5490f" 00:13:32.256 ], 00:13:32.256 "product_name": "NVMe disk", 00:13:32.256 "block_size": 4096, 00:13:32.256 "num_blocks": 38912, 00:13:32.256 "uuid": "cfe8df05-95ac-4af6-a88e-7c0267a5490f", 00:13:32.256 "assigned_rate_limits": { 00:13:32.256 "rw_ios_per_sec": 0, 00:13:32.256 "rw_mbytes_per_sec": 0, 00:13:32.256 "r_mbytes_per_sec": 0, 00:13:32.256 "w_mbytes_per_sec": 0 00:13:32.256 }, 00:13:32.256 "claimed": false, 00:13:32.256 "zoned": false, 00:13:32.256 "supported_io_types": { 00:13:32.256 "read": true, 00:13:32.256 "write": true, 00:13:32.256 "unmap": true, 00:13:32.256 "flush": true, 00:13:32.256 "reset": true, 00:13:32.256 "nvme_admin": true, 00:13:32.256 "nvme_io": true, 00:13:32.256 "nvme_io_md": false, 00:13:32.256 "write_zeroes": true, 00:13:32.256 "zcopy": false, 00:13:32.256 "get_zone_info": false, 00:13:32.256 "zone_management": false, 00:13:32.256 "zone_append": false, 00:13:32.256 "compare": true, 00:13:32.256 "compare_and_write": true, 00:13:32.256 "abort": true, 00:13:32.256 "seek_hole": false, 00:13:32.256 "seek_data": false, 00:13:32.256 "copy": true, 00:13:32.256 "nvme_iov_md": false 00:13:32.256 }, 00:13:32.256 "memory_domains": [ 00:13:32.256 { 00:13:32.256 "dma_device_id": "system", 00:13:32.256 "dma_device_type": 1 00:13:32.256 } 00:13:32.256 ], 00:13:32.256 "driver_specific": { 00:13:32.256 "nvme": [ 00:13:32.256 { 00:13:32.256 "trid": { 00:13:32.256 "trtype": "TCP", 00:13:32.256 "adrfam": "IPv4", 00:13:32.256 "traddr": "10.0.0.2", 00:13:32.256 "trsvcid": "4420", 00:13:32.256 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:13:32.256 }, 00:13:32.256 "ctrlr_data": { 00:13:32.256 "cntlid": 1, 00:13:32.256 "vendor_id": "0x8086", 00:13:32.256 "model_number": "SPDK bdev Controller", 00:13:32.256 "serial_number": "SPDK0", 
00:13:32.256 "firmware_revision": "24.09", 00:13:32.256 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:13:32.256 "oacs": { 00:13:32.256 "security": 0, 00:13:32.256 "format": 0, 00:13:32.256 "firmware": 0, 00:13:32.256 "ns_manage": 0 00:13:32.256 }, 00:13:32.256 "multi_ctrlr": true, 00:13:32.256 "ana_reporting": false 00:13:32.256 }, 00:13:32.256 "vs": { 00:13:32.256 "nvme_version": "1.3" 00:13:32.256 }, 00:13:32.256 "ns_data": { 00:13:32.256 "id": 1, 00:13:32.256 "can_share": true 00:13:32.256 } 00:13:32.256 } 00:13:32.256 ], 00:13:32.256 "mp_policy": "active_passive" 00:13:32.256 } 00:13:32.256 } 00:13:32.256 ] 00:13:32.256 22:29:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=4149423 00:13:32.256 22:29:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:13:32.256 22:29:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:13:32.514 Running I/O for 10 seconds... 00:13:33.449 Latency(us) 00:13:33.449 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:33.449 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:33.449 Nvme0n1 : 1.00 22058.00 86.16 0.00 0.00 0.00 0.00 0.00 00:13:33.449 =================================================================================================================== 00:13:33.449 Total : 22058.00 86.16 0.00 0.00 0.00 0.00 0.00 00:13:33.449 00:13:34.385 22:29:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 3fdf31e9-ae8c-4505-ba58-9b837680ca65 00:13:34.385 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:34.385 Nvme0n1 : 2.00 22205.00 86.74 0.00 0.00 0.00 0.00 0.00 00:13:34.385 =================================================================================================================== 00:13:34.385 Total : 22205.00 86.74 0.00 0.00 0.00 0.00 0.00 00:13:34.385 00:13:34.385 true 00:13:34.644 22:29:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3fdf31e9-ae8c-4505-ba58-9b837680ca65 00:13:34.644 22:29:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:13:34.644 22:29:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:13:34.644 22:29:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:13:34.644 22:29:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 4149423 00:13:35.579 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:35.579 Nvme0n1 : 3.00 22254.00 86.93 0.00 0.00 0.00 0.00 0.00 00:13:35.579 =================================================================================================================== 00:13:35.579 Total : 22254.00 86.93 0.00 0.00 0.00 0.00 0.00 00:13:35.579 00:13:36.517 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:36.517 Nvme0n1 : 4.00 22286.50 87.06 0.00 0.00 0.00 0.00 0.00 00:13:36.517 =================================================================================================================== 00:13:36.517 Total : 22286.50 87.06 0.00 
0.00 0.00 0.00 0.00 00:13:36.517 00:13:37.454 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:37.454 Nvme0n1 : 5.00 22272.40 87.00 0.00 0.00 0.00 0.00 0.00 00:13:37.454 =================================================================================================================== 00:13:37.454 Total : 22272.40 87.00 0.00 0.00 0.00 0.00 0.00 00:13:37.454 00:13:38.389 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:38.389 Nvme0n1 : 6.00 22304.33 87.13 0.00 0.00 0.00 0.00 0.00 00:13:38.389 =================================================================================================================== 00:13:38.389 Total : 22304.33 87.13 0.00 0.00 0.00 0.00 0.00 00:13:38.389 00:13:39.326 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:39.326 Nvme0n1 : 7.00 22342.00 87.27 0.00 0.00 0.00 0.00 0.00 00:13:39.326 =================================================================================================================== 00:13:39.326 Total : 22342.00 87.27 0.00 0.00 0.00 0.00 0.00 00:13:39.326 00:13:40.703 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:40.703 Nvme0n1 : 8.00 22373.25 87.40 0.00 0.00 0.00 0.00 0.00 00:13:40.703 =================================================================================================================== 00:13:40.703 Total : 22373.25 87.40 0.00 0.00 0.00 0.00 0.00 00:13:40.703 00:13:41.641 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:41.641 Nvme0n1 : 9.00 22400.22 87.50 0.00 0.00 0.00 0.00 0.00 00:13:41.641 =================================================================================================================== 00:13:41.641 Total : 22400.22 87.50 0.00 0.00 0.00 0.00 0.00 00:13:41.641 00:13:42.649 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:42.649 Nvme0n1 : 10.00 22416.20 87.56 0.00 0.00 0.00 0.00 0.00 00:13:42.649 =================================================================================================================== 00:13:42.649 Total : 22416.20 87.56 0.00 0.00 0.00 0.00 0.00 00:13:42.649 00:13:42.649 00:13:42.649 Latency(us) 00:13:42.649 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:42.649 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:42.649 Nvme0n1 : 10.01 22416.48 87.56 0.00 0.00 5705.90 4359.57 13734.07 00:13:42.649 =================================================================================================================== 00:13:42.649 Total : 22416.48 87.56 0.00 0.00 5705.90 4359.57 13734.07 00:13:42.649 0 00:13:42.649 22:30:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 4149185 00:13:42.649 22:30:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@948 -- # '[' -z 4149185 ']' 00:13:42.649 22:30:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # kill -0 4149185 00:13:42.649 22:30:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # uname 00:13:42.649 22:30:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:42.649 22:30:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4149185 00:13:42.649 22:30:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:13:42.649 22:30:06 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:13:42.649 22:30:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4149185' 00:13:42.649 killing process with pid 4149185 00:13:42.649 22:30:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@967 -- # kill 4149185 00:13:42.649 Received shutdown signal, test time was about 10.000000 seconds 00:13:42.649 00:13:42.649 Latency(us) 00:13:42.649 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:42.649 =================================================================================================================== 00:13:42.649 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:42.649 22:30:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # wait 4149185 00:13:42.649 22:30:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:42.909 22:30:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:13:43.169 22:30:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3fdf31e9-ae8c-4505-ba58-9b837680ca65 00:13:43.169 22:30:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:13:43.169 22:30:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:13:43.169 22:30:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:13:43.169 22:30:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 4146050 00:13:43.169 22:30:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 4146050 00:13:43.169 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 4146050 Killed "${NVMF_APP[@]}" "$@" 00:13:43.169 22:30:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:13:43.169 22:30:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:13:43.169 22:30:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:43.169 22:30:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:43.169 22:30:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:13:43.169 22:30:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@481 -- # nvmfpid=4151357 00:13:43.169 22:30:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@482 -- # waitforlisten 4151357 00:13:43.169 22:30:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:13:43.169 22:30:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@829 -- # '[' -z 4151357 ']' 00:13:43.169 22:30:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:43.169 22:30:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@834 -- # local max_retries=100 00:13:43.169 22:30:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:43.169 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:43.169 22:30:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:43.169 22:30:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:13:43.428 [2024-07-15 22:30:07.163234] Starting SPDK v24.09-pre git sha1 f8598a71f / DPDK 24.03.0 initialization... 00:13:43.428 [2024-07-15 22:30:07.163282] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:43.428 EAL: No free 2048 kB hugepages reported on node 1 00:13:43.428 [2024-07-15 22:30:07.221216] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:43.428 [2024-07-15 22:30:07.299327] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:43.428 [2024-07-15 22:30:07.299362] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:43.428 [2024-07-15 22:30:07.299369] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:43.428 [2024-07-15 22:30:07.299376] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:43.428 [2024-07-15 22:30:07.299381] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:13:43.428 [2024-07-15 22:30:07.299398] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:43.997 22:30:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:43.997 22:30:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # return 0 00:13:43.997 22:30:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:43.997 22:30:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:43.997 22:30:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:13:44.256 22:30:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:44.256 22:30:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:13:44.256 [2024-07-15 22:30:08.164708] blobstore.c:4865:bs_recover: *NOTICE*: Performing recovery on blobstore 00:13:44.256 [2024-07-15 22:30:08.164788] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:13:44.256 [2024-07-15 22:30:08.164813] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:13:44.256 22:30:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:13:44.257 22:30:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev cfe8df05-95ac-4af6-a88e-7c0267a5490f 00:13:44.257 22:30:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local bdev_name=cfe8df05-95ac-4af6-a88e-7c0267a5490f 00:13:44.257 22:30:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:13:44.257 22:30:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local i 00:13:44.257 22:30:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:13:44.257 22:30:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:13:44.257 22:30:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:13:44.516 22:30:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b cfe8df05-95ac-4af6-a88e-7c0267a5490f -t 2000 00:13:44.775 [ 00:13:44.775 { 00:13:44.775 "name": "cfe8df05-95ac-4af6-a88e-7c0267a5490f", 00:13:44.775 "aliases": [ 00:13:44.775 "lvs/lvol" 00:13:44.775 ], 00:13:44.775 "product_name": "Logical Volume", 00:13:44.775 "block_size": 4096, 00:13:44.775 "num_blocks": 38912, 00:13:44.775 "uuid": "cfe8df05-95ac-4af6-a88e-7c0267a5490f", 00:13:44.775 "assigned_rate_limits": { 00:13:44.775 "rw_ios_per_sec": 0, 00:13:44.775 "rw_mbytes_per_sec": 0, 00:13:44.775 "r_mbytes_per_sec": 0, 00:13:44.775 "w_mbytes_per_sec": 0 00:13:44.775 }, 00:13:44.775 "claimed": false, 00:13:44.775 "zoned": false, 00:13:44.775 "supported_io_types": { 00:13:44.775 "read": true, 00:13:44.775 "write": true, 00:13:44.775 "unmap": true, 00:13:44.775 "flush": false, 00:13:44.775 "reset": true, 00:13:44.775 "nvme_admin": false, 00:13:44.775 "nvme_io": false, 00:13:44.775 "nvme_io_md": 
false, 00:13:44.775 "write_zeroes": true, 00:13:44.775 "zcopy": false, 00:13:44.775 "get_zone_info": false, 00:13:44.775 "zone_management": false, 00:13:44.775 "zone_append": false, 00:13:44.775 "compare": false, 00:13:44.775 "compare_and_write": false, 00:13:44.775 "abort": false, 00:13:44.775 "seek_hole": true, 00:13:44.775 "seek_data": true, 00:13:44.775 "copy": false, 00:13:44.775 "nvme_iov_md": false 00:13:44.775 }, 00:13:44.775 "driver_specific": { 00:13:44.775 "lvol": { 00:13:44.775 "lvol_store_uuid": "3fdf31e9-ae8c-4505-ba58-9b837680ca65", 00:13:44.775 "base_bdev": "aio_bdev", 00:13:44.775 "thin_provision": false, 00:13:44.775 "num_allocated_clusters": 38, 00:13:44.775 "snapshot": false, 00:13:44.775 "clone": false, 00:13:44.775 "esnap_clone": false 00:13:44.775 } 00:13:44.775 } 00:13:44.775 } 00:13:44.775 ] 00:13:44.775 22:30:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # return 0 00:13:44.775 22:30:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3fdf31e9-ae8c-4505-ba58-9b837680ca65 00:13:44.775 22:30:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:13:44.775 22:30:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:13:44.776 22:30:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3fdf31e9-ae8c-4505-ba58-9b837680ca65 00:13:44.776 22:30:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:13:45.034 22:30:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:13:45.034 22:30:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:13:45.292 [2024-07-15 22:30:09.033470] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:13:45.292 22:30:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3fdf31e9-ae8c-4505-ba58-9b837680ca65 00:13:45.292 22:30:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@648 -- # local es=0 00:13:45.292 22:30:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3fdf31e9-ae8c-4505-ba58-9b837680ca65 00:13:45.292 22:30:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:45.292 22:30:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:45.292 22:30:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:45.292 22:30:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:45.292 22:30:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
00:13:45.292 22:30:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:45.293 22:30:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:45.293 22:30:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:13:45.293 22:30:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3fdf31e9-ae8c-4505-ba58-9b837680ca65 00:13:45.293 request: 00:13:45.293 { 00:13:45.293 "uuid": "3fdf31e9-ae8c-4505-ba58-9b837680ca65", 00:13:45.293 "method": "bdev_lvol_get_lvstores", 00:13:45.293 "req_id": 1 00:13:45.293 } 00:13:45.293 Got JSON-RPC error response 00:13:45.293 response: 00:13:45.293 { 00:13:45.293 "code": -19, 00:13:45.293 "message": "No such device" 00:13:45.293 } 00:13:45.293 22:30:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # es=1 00:13:45.293 22:30:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:13:45.293 22:30:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:13:45.293 22:30:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:13:45.293 22:30:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:13:45.551 aio_bdev 00:13:45.551 22:30:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev cfe8df05-95ac-4af6-a88e-7c0267a5490f 00:13:45.551 22:30:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local bdev_name=cfe8df05-95ac-4af6-a88e-7c0267a5490f 00:13:45.551 22:30:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:13:45.551 22:30:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local i 00:13:45.551 22:30:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:13:45.551 22:30:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:13:45.551 22:30:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:13:45.811 22:30:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b cfe8df05-95ac-4af6-a88e-7c0267a5490f -t 2000 00:13:45.811 [ 00:13:45.811 { 00:13:45.811 "name": "cfe8df05-95ac-4af6-a88e-7c0267a5490f", 00:13:45.811 "aliases": [ 00:13:45.811 "lvs/lvol" 00:13:45.811 ], 00:13:45.811 "product_name": "Logical Volume", 00:13:45.811 "block_size": 4096, 00:13:45.811 "num_blocks": 38912, 00:13:45.811 "uuid": "cfe8df05-95ac-4af6-a88e-7c0267a5490f", 00:13:45.811 "assigned_rate_limits": { 00:13:45.811 "rw_ios_per_sec": 0, 00:13:45.811 "rw_mbytes_per_sec": 0, 00:13:45.811 "r_mbytes_per_sec": 0, 00:13:45.811 "w_mbytes_per_sec": 0 00:13:45.811 }, 00:13:45.811 "claimed": false, 00:13:45.811 "zoned": false, 00:13:45.811 "supported_io_types": { 
00:13:45.811 "read": true, 00:13:45.811 "write": true, 00:13:45.811 "unmap": true, 00:13:45.811 "flush": false, 00:13:45.811 "reset": true, 00:13:45.811 "nvme_admin": false, 00:13:45.811 "nvme_io": false, 00:13:45.811 "nvme_io_md": false, 00:13:45.811 "write_zeroes": true, 00:13:45.811 "zcopy": false, 00:13:45.811 "get_zone_info": false, 00:13:45.811 "zone_management": false, 00:13:45.811 "zone_append": false, 00:13:45.811 "compare": false, 00:13:45.811 "compare_and_write": false, 00:13:45.811 "abort": false, 00:13:45.811 "seek_hole": true, 00:13:45.811 "seek_data": true, 00:13:45.811 "copy": false, 00:13:45.811 "nvme_iov_md": false 00:13:45.811 }, 00:13:45.811 "driver_specific": { 00:13:45.811 "lvol": { 00:13:45.811 "lvol_store_uuid": "3fdf31e9-ae8c-4505-ba58-9b837680ca65", 00:13:45.811 "base_bdev": "aio_bdev", 00:13:45.811 "thin_provision": false, 00:13:45.811 "num_allocated_clusters": 38, 00:13:45.811 "snapshot": false, 00:13:45.811 "clone": false, 00:13:45.811 "esnap_clone": false 00:13:45.811 } 00:13:45.811 } 00:13:45.811 } 00:13:45.811 ] 00:13:45.811 22:30:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # return 0 00:13:45.811 22:30:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3fdf31e9-ae8c-4505-ba58-9b837680ca65 00:13:45.811 22:30:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:13:46.070 22:30:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:13:46.070 22:30:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3fdf31e9-ae8c-4505-ba58-9b837680ca65 00:13:46.070 22:30:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:13:46.329 22:30:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:13:46.329 22:30:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete cfe8df05-95ac-4af6-a88e-7c0267a5490f 00:13:46.329 22:30:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 3fdf31e9-ae8c-4505-ba58-9b837680ca65 00:13:46.589 22:30:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:13:46.848 22:30:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:13:46.848 00:13:46.848 real 0m17.620s 00:13:46.848 user 0m44.588s 00:13:46.848 sys 0m4.227s 00:13:46.848 22:30:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:46.848 22:30:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:13:46.848 ************************************ 00:13:46.848 END TEST lvs_grow_dirty 00:13:46.848 ************************************ 00:13:46.848 22:30:10 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1142 -- # return 0 00:13:46.848 22:30:10 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 
00:13:46.848 22:30:10 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@806 -- # type=--id 00:13:46.848 22:30:10 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@807 -- # id=0 00:13:46.848 22:30:10 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:13:46.848 22:30:10 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:13:46.848 22:30:10 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:13:46.848 22:30:10 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:13:46.848 22:30:10 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # for n in $shm_files 00:13:46.848 22:30:10 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:13:46.848 nvmf_trace.0 00:13:46.848 22:30:10 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@821 -- # return 0 00:13:46.848 22:30:10 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:13:46.848 22:30:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:46.848 22:30:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@117 -- # sync 00:13:46.848 22:30:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:46.848 22:30:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@120 -- # set +e 00:13:46.848 22:30:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:46.848 22:30:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:46.848 rmmod nvme_tcp 00:13:46.848 rmmod nvme_fabrics 00:13:46.848 rmmod nvme_keyring 00:13:46.848 22:30:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:46.848 22:30:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set -e 00:13:46.848 22:30:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@125 -- # return 0 00:13:46.848 22:30:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@489 -- # '[' -n 4151357 ']' 00:13:46.848 22:30:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@490 -- # killprocess 4151357 00:13:46.848 22:30:10 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@948 -- # '[' -z 4151357 ']' 00:13:46.848 22:30:10 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # kill -0 4151357 00:13:47.108 22:30:10 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@953 -- # uname 00:13:47.108 22:30:10 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:47.108 22:30:10 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4151357 00:13:47.108 22:30:10 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:47.108 22:30:10 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:47.108 22:30:10 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4151357' 00:13:47.108 killing process with pid 4151357 00:13:47.108 22:30:10 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@967 -- # kill 4151357 00:13:47.108 22:30:10 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # wait 4151357 00:13:47.108 22:30:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:47.108 22:30:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:47.108 22:30:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:47.108 
22:30:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:47.108 22:30:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:47.108 22:30:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:47.108 22:30:11 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:47.108 22:30:11 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:49.646 22:30:13 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:49.646 00:13:49.646 real 0m41.947s 00:13:49.646 user 1m5.556s 00:13:49.646 sys 0m9.851s 00:13:49.646 22:30:13 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:49.646 22:30:13 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:13:49.646 ************************************ 00:13:49.646 END TEST nvmf_lvs_grow 00:13:49.646 ************************************ 00:13:49.646 22:30:13 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:13:49.646 22:30:13 nvmf_tcp -- nvmf/nvmf.sh@50 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:13:49.646 22:30:13 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:49.646 22:30:13 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:49.646 22:30:13 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:49.646 ************************************ 00:13:49.646 START TEST nvmf_bdev_io_wait 00:13:49.646 ************************************ 00:13:49.646 22:30:13 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:13:49.646 * Looking for test storage... 
00:13:49.646 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:49.646 22:30:13 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:49.646 22:30:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:13:49.646 22:30:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:49.646 22:30:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:49.647 22:30:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:49.647 22:30:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:49.647 22:30:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:49.647 22:30:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:49.647 22:30:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:49.647 22:30:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:49.647 22:30:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:49.647 22:30:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:49.647 22:30:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:13:49.647 22:30:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:13:49.647 22:30:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:49.647 22:30:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:49.647 22:30:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:49.647 22:30:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:49.647 22:30:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:49.647 22:30:13 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:49.647 22:30:13 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:49.647 22:30:13 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:49.647 22:30:13 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:49.647 22:30:13 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:49.647 22:30:13 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:49.647 22:30:13 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:13:49.647 22:30:13 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:49.647 22:30:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@47 -- # : 0 00:13:49.647 22:30:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:49.647 22:30:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:49.647 22:30:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:49.647 22:30:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:49.647 22:30:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:49.647 22:30:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:49.647 22:30:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:49.647 22:30:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:49.647 22:30:13 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:49.647 22:30:13 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:49.647 22:30:13 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:13:49.647 22:30:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:49.647 22:30:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:49.647 22:30:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:49.647 22:30:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:49.647 22:30:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:49.647 22:30:13 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:49.647 22:30:13 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:49.647 22:30:13 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:49.647 22:30:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:49.647 22:30:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:49.647 22:30:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@285 -- # xtrace_disable 00:13:49.647 22:30:13 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:54.921 22:30:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:54.921 22:30:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # pci_devs=() 00:13:54.921 22:30:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:54.921 22:30:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:54.921 22:30:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:54.921 22:30:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:54.921 22:30:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:54.921 22:30:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # net_devs=() 00:13:54.921 22:30:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:54.921 22:30:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # e810=() 00:13:54.921 22:30:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # local -ga e810 00:13:54.921 22:30:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # x722=() 00:13:54.921 22:30:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # local -ga x722 00:13:54.921 22:30:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # mlx=() 00:13:54.921 22:30:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # local -ga mlx 00:13:54.921 22:30:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:54.921 22:30:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:54.921 22:30:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:54.921 22:30:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:54.921 22:30:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:54.921 22:30:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:54.921 22:30:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:54.921 22:30:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:54.921 22:30:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:54.921 22:30:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:54.921 22:30:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:54.921 22:30:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:54.921 22:30:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # 
[[ tcp == rdma ]] 00:13:54.921 22:30:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:54.921 22:30:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:54.921 22:30:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:54.921 22:30:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:54.921 22:30:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:54.921 22:30:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:13:54.921 Found 0000:86:00.0 (0x8086 - 0x159b) 00:13:54.921 22:30:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:54.921 22:30:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:54.921 22:30:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:54.921 22:30:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:54.921 22:30:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:54.921 22:30:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:54.921 22:30:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:13:54.921 Found 0000:86:00.1 (0x8086 - 0x159b) 00:13:54.921 22:30:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:54.921 22:30:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:54.921 22:30:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:54.921 22:30:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:54.921 22:30:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:54.921 22:30:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:54.921 22:30:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:54.921 22:30:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:54.921 22:30:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:54.921 22:30:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:54.921 22:30:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:54.921 22:30:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:54.921 22:30:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:54.921 22:30:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:54.921 22:30:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:54.921 22:30:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:13:54.921 Found net devices under 0000:86:00.0: cvl_0_0 00:13:54.921 22:30:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:54.921 22:30:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:54.921 22:30:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:54.921 22:30:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ 
tcp == tcp ]] 00:13:54.921 22:30:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:54.921 22:30:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:54.921 22:30:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:54.921 22:30:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:54.921 22:30:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:13:54.921 Found net devices under 0000:86:00.1: cvl_0_1 00:13:54.921 22:30:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:54.921 22:30:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:54.921 22:30:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # is_hw=yes 00:13:54.921 22:30:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:54.921 22:30:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:54.921 22:30:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:54.921 22:30:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:54.921 22:30:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:54.921 22:30:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:54.921 22:30:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:54.921 22:30:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:54.921 22:30:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:54.921 22:30:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:54.921 22:30:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:54.921 22:30:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:54.921 22:30:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:54.921 22:30:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:54.921 22:30:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:54.921 22:30:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:54.921 22:30:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:54.921 22:30:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:54.921 22:30:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:54.921 22:30:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:54.921 22:30:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:54.921 22:30:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:54.922 22:30:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:54.922 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:13:54.922 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.150 ms 00:13:54.922 00:13:54.922 --- 10.0.0.2 ping statistics --- 00:13:54.922 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:54.922 rtt min/avg/max/mdev = 0.150/0.150/0.150/0.000 ms 00:13:54.922 22:30:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:54.922 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:54.922 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.235 ms 00:13:54.922 00:13:54.922 --- 10.0.0.1 ping statistics --- 00:13:54.922 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:54.922 rtt min/avg/max/mdev = 0.235/0.235/0.235/0.000 ms 00:13:54.922 22:30:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:54.922 22:30:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # return 0 00:13:54.922 22:30:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:54.922 22:30:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:54.922 22:30:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:54.922 22:30:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:54.922 22:30:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:54.922 22:30:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:54.922 22:30:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:54.922 22:30:18 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:13:54.922 22:30:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:54.922 22:30:18 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:54.922 22:30:18 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:54.922 22:30:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # nvmfpid=4155836 00:13:54.922 22:30:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:13:54.922 22:30:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # waitforlisten 4155836 00:13:54.922 22:30:18 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@829 -- # '[' -z 4155836 ']' 00:13:54.922 22:30:18 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:54.922 22:30:18 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:54.922 22:30:18 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:54.922 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:54.922 22:30:18 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:54.922 22:30:18 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:54.922 [2024-07-15 22:30:18.669430] Starting SPDK v24.09-pre git sha1 f8598a71f / DPDK 24.03.0 initialization... 
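The namespace plumbing traced just above (nvmf_tcp_init) splits one dual-port E810 NIC between initiator and target on a single machine: cvl_0_1 stays in the host namespace as 10.0.0.1 and cvl_0_0 moves into cvl_0_0_ns_spdk as 10.0.0.2, with the two physical ports presumably cabled back-to-back. A minimal standalone sketch of that setup, assuming the same cvl_0_0/cvl_0_1 names the ice driver produced on this host:

    # Move the target-side port into its own network namespace so target
    # (10.0.0.2, inside the namespace) and initiator (10.0.0.1, host side)
    # exchange traffic over real hardware on one box.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # Let NVMe/TCP traffic (port 4420) in on the initiator-side interface.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    # Sanity-check both directions, exactly as the test does.
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1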
00:13:54.922 [2024-07-15 22:30:18.669473] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:54.922 EAL: No free 2048 kB hugepages reported on node 1 00:13:54.922 [2024-07-15 22:30:18.730105] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:54.922 [2024-07-15 22:30:18.809045] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:54.922 [2024-07-15 22:30:18.809086] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:54.922 [2024-07-15 22:30:18.809093] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:54.922 [2024-07-15 22:30:18.809098] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:54.922 [2024-07-15 22:30:18.809103] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:54.922 [2024-07-15 22:30:18.809148] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:54.922 [2024-07-15 22:30:18.809175] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:54.922 [2024-07-15 22:30:18.809264] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:13:54.922 [2024-07-15 22:30:18.809266] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:55.858 22:30:19 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:55.858 22:30:19 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@862 -- # return 0 00:13:55.858 22:30:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:55.858 22:30:19 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:55.858 22:30:19 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:55.858 22:30:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:55.858 22:30:19 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:13:55.858 22:30:19 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:55.858 22:30:19 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:55.858 22:30:19 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:55.858 22:30:19 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:13:55.858 22:30:19 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:55.859 22:30:19 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:55.859 22:30:19 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:55.859 22:30:19 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:55.859 22:30:19 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:55.859 22:30:19 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:55.859 [2024-07-15 22:30:19.587405] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:55.859 22:30:19 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:13:55.859 22:30:19 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:13:55.859 22:30:19 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:55.859 22:30:19 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:55.859 Malloc0 00:13:55.859 22:30:19 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:55.859 22:30:19 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:13:55.859 22:30:19 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:55.859 22:30:19 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:55.859 22:30:19 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:55.859 22:30:19 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:55.859 22:30:19 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:55.859 22:30:19 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:55.859 22:30:19 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:55.859 22:30:19 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:55.859 22:30:19 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:55.859 22:30:19 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:55.859 [2024-07-15 22:30:19.647164] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:55.859 22:30:19 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:55.859 22:30:19 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=4156038 00:13:55.859 22:30:19 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:13:55.859 22:30:19 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:13:55.859 22:30:19 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=4156040 00:13:55.859 22:30:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:13:55.859 22:30:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:13:55.859 22:30:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:13:55.859 22:30:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:13:55.859 { 00:13:55.859 "params": { 00:13:55.859 "name": "Nvme$subsystem", 00:13:55.859 "trtype": "$TEST_TRANSPORT", 00:13:55.859 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:55.859 "adrfam": "ipv4", 00:13:55.859 "trsvcid": "$NVMF_PORT", 00:13:55.859 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:55.859 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:55.859 "hdgst": ${hdgst:-false}, 00:13:55.859 "ddgst": ${ddgst:-false} 00:13:55.859 }, 00:13:55.859 "method": "bdev_nvme_attach_controller" 00:13:55.859 } 00:13:55.859 EOF 00:13:55.859 )") 00:13:55.859 22:30:19 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:13:55.859 22:30:19 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=4156042 00:13:55.859 22:30:19 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:13:55.859 22:30:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:13:55.859 22:30:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:13:55.859 22:30:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:13:55.859 22:30:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:13:55.859 { 00:13:55.859 "params": { 00:13:55.859 "name": "Nvme$subsystem", 00:13:55.859 "trtype": "$TEST_TRANSPORT", 00:13:55.859 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:55.859 "adrfam": "ipv4", 00:13:55.859 "trsvcid": "$NVMF_PORT", 00:13:55.859 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:55.859 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:55.859 "hdgst": ${hdgst:-false}, 00:13:55.859 "ddgst": ${ddgst:-false} 00:13:55.859 }, 00:13:55.859 "method": "bdev_nvme_attach_controller" 00:13:55.859 } 00:13:55.859 EOF 00:13:55.859 )") 00:13:55.859 22:30:19 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=4156045 00:13:55.859 22:30:19 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:13:55.859 22:30:19 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:13:55.859 22:30:19 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:13:55.859 22:30:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:13:55.859 22:30:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:13:55.859 22:30:19 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:13:55.859 22:30:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:13:55.859 22:30:19 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:13:55.859 22:30:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:13:55.859 22:30:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:13:55.859 22:30:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:13:55.859 22:30:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:13:55.859 { 00:13:55.859 "params": { 00:13:55.859 "name": "Nvme$subsystem", 00:13:55.859 "trtype": "$TEST_TRANSPORT", 00:13:55.859 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:55.859 "adrfam": "ipv4", 00:13:55.859 "trsvcid": "$NVMF_PORT", 00:13:55.859 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:55.859 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:55.859 "hdgst": ${hdgst:-false}, 00:13:55.859 "ddgst": ${ddgst:-false} 00:13:55.859 }, 00:13:55.859 "method": "bdev_nvme_attach_controller" 00:13:55.859 } 00:13:55.859 EOF 00:13:55.859 )") 00:13:55.859 22:30:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:13:55.859 22:30:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 
00:13:55.859 { 00:13:55.859 "params": { 00:13:55.859 "name": "Nvme$subsystem", 00:13:55.859 "trtype": "$TEST_TRANSPORT", 00:13:55.859 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:55.859 "adrfam": "ipv4", 00:13:55.859 "trsvcid": "$NVMF_PORT", 00:13:55.859 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:55.859 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:55.859 "hdgst": ${hdgst:-false}, 00:13:55.859 "ddgst": ${ddgst:-false} 00:13:55.859 }, 00:13:55.859 "method": "bdev_nvme_attach_controller" 00:13:55.859 } 00:13:55.859 EOF 00:13:55.859 )") 00:13:55.859 22:30:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:13:55.859 22:30:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:13:55.859 22:30:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:13:55.859 22:30:19 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 4156038 00:13:55.859 22:30:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:13:55.859 22:30:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:13:55.859 22:30:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:13:55.859 22:30:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:13:55.859 22:30:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:13:55.859 "params": { 00:13:55.859 "name": "Nvme1", 00:13:55.859 "trtype": "tcp", 00:13:55.859 "traddr": "10.0.0.2", 00:13:55.859 "adrfam": "ipv4", 00:13:55.859 "trsvcid": "4420", 00:13:55.859 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:55.859 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:55.859 "hdgst": false, 00:13:55.859 "ddgst": false 00:13:55.859 }, 00:13:55.859 "method": "bdev_nvme_attach_controller" 00:13:55.859 }' 00:13:55.859 22:30:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 
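Each of the four bdevperf instances gets its bdev layout over an anonymous file descriptor rather than a config file on disk: gen_nvmf_target_json expands the heredoc shown above into a bdev_nvme_attach_controller entry, jq normalizes it, and the result is handed to bdevperf via --json /dev/fd/63. A sketch of an equivalent standalone invocation for the write worker; note the outer "subsystems" wrapper is assumed from SPDK's usual JSON config shape and is not printed verbatim in this trace:

    # Hypothetical standalone equivalent of gen_nvmf_target_json | bdevperf.
    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    gen_target_json() {
        # Params below match the printf output in this log; the
        # "subsystems"/"bdev" wrapper is an assumption.
        cat <<-'JSON'
        {
          "subsystems": [
            {
              "subsystem": "bdev",
              "config": [
                {
                  "method": "bdev_nvme_attach_controller",
                  "params": {
                    "name": "Nvme1",
                    "trtype": "tcp",
                    "traddr": "10.0.0.2",
                    "adrfam": "ipv4",
                    "trsvcid": "4420",
                    "subnqn": "nqn.2016-06.io.spdk:cnode1",
                    "hostnqn": "nqn.2016-06.io.spdk:host1",
                    "hdgst": false,
                    "ddgst": false
                  }
                }
              ]
            }
          ]
        }
JSON
    }
    # Feed the config over an anonymous fd, as the test's /dev/fd/63 does.
    "$SPDK/build/examples/bdevperf" -m 0x10 -q 128 -o 4096 -w write -t 1 -s 256 \
        --json <(gen_target_json)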
00:13:55.859 22:30:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:13:55.859 22:30:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:13:55.859 "params": { 00:13:55.859 "name": "Nvme1", 00:13:55.859 "trtype": "tcp", 00:13:55.859 "traddr": "10.0.0.2", 00:13:55.859 "adrfam": "ipv4", 00:13:55.859 "trsvcid": "4420", 00:13:55.859 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:55.859 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:55.859 "hdgst": false, 00:13:55.859 "ddgst": false 00:13:55.859 }, 00:13:55.859 "method": "bdev_nvme_attach_controller" 00:13:55.859 }' 00:13:55.859 22:30:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:13:55.859 22:30:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:13:55.859 22:30:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:13:55.859 "params": { 00:13:55.859 "name": "Nvme1", 00:13:55.859 "trtype": "tcp", 00:13:55.859 "traddr": "10.0.0.2", 00:13:55.859 "adrfam": "ipv4", 00:13:55.859 "trsvcid": "4420", 00:13:55.859 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:55.859 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:55.859 "hdgst": false, 00:13:55.859 "ddgst": false 00:13:55.859 }, 00:13:55.859 "method": "bdev_nvme_attach_controller" 00:13:55.859 }' 00:13:55.859 22:30:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:13:55.859 "params": { 00:13:55.860 "name": "Nvme1", 00:13:55.860 "trtype": "tcp", 00:13:55.860 "traddr": "10.0.0.2", 00:13:55.860 "adrfam": "ipv4", 00:13:55.860 "trsvcid": "4420", 00:13:55.860 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:55.860 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:55.860 "hdgst": false, 00:13:55.860 "ddgst": false 00:13:55.860 }, 00:13:55.860 "method": "bdev_nvme_attach_controller" 00:13:55.860 }' 00:13:55.860 [2024-07-15 22:30:19.696614] Starting SPDK v24.09-pre git sha1 f8598a71f / DPDK 24.03.0 initialization... 00:13:55.860 [2024-07-15 22:30:19.696666] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:13:55.860 [2024-07-15 22:30:19.699844] Starting SPDK v24.09-pre git sha1 f8598a71f / DPDK 24.03.0 initialization... 00:13:55.860 [2024-07-15 22:30:19.699882] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:13:55.860 [2024-07-15 22:30:19.699992] Starting SPDK v24.09-pre git sha1 f8598a71f / DPDK 24.03.0 initialization... 00:13:55.860 [2024-07-15 22:30:19.700027] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:13:55.860 [2024-07-15 22:30:19.701196] Starting SPDK v24.09-pre git sha1 f8598a71f / DPDK 24.03.0 initialization... 
00:13:55.860 [2024-07-15 22:30:19.701253] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:13:55.860 EAL: No free 2048 kB hugepages reported on node 1 00:13:56.119 EAL: No free 2048 kB hugepages reported on node 1 00:13:56.119 [2024-07-15 22:30:19.883142] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:56.119 EAL: No free 2048 kB hugepages reported on node 1 00:13:56.119 [2024-07-15 22:30:19.959102] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:13:56.119 [2024-07-15 22:30:19.976195] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:56.119 EAL: No free 2048 kB hugepages reported on node 1 00:13:56.119 [2024-07-15 22:30:20.056563] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:13:56.119 [2024-07-15 22:30:20.069000] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:56.379 [2024-07-15 22:30:20.130436] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:56.379 [2024-07-15 22:30:20.147197] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:13:56.379 [2024-07-15 22:30:20.206853] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:13:56.379 Running I/O for 1 seconds... 00:13:56.379 Running I/O for 1 seconds... 00:13:56.640 Running I/O for 1 seconds... 00:13:56.640 Running I/O for 1 seconds... 00:13:57.579 00:13:57.579 Latency(us) 00:13:57.579 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:57.579 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:13:57.579 Nvme1n1 : 1.01 8299.96 32.42 0.00 0.00 15254.88 6154.69 26100.42 00:13:57.579 =================================================================================================================== 00:13:57.579 Total : 8299.96 32.42 0.00 0.00 15254.88 6154.69 26100.42 00:13:57.579 00:13:57.579 Latency(us) 00:13:57.579 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:57.579 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:13:57.579 Nvme1n1 : 1.01 10975.76 42.87 0.00 0.00 11613.84 7864.32 23365.01 00:13:57.579 =================================================================================================================== 00:13:57.579 Total : 10975.76 42.87 0.00 0.00 11613.84 7864.32 23365.01 00:13:57.579 00:13:57.579 Latency(us) 00:13:57.579 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:57.579 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:13:57.579 Nvme1n1 : 1.00 9086.54 35.49 0.00 0.00 14062.24 3519.00 38067.87 00:13:57.579 =================================================================================================================== 00:13:57.579 Total : 9086.54 35.49 0.00 0.00 14062.24 3519.00 38067.87 00:13:57.579 00:13:57.579 Latency(us) 00:13:57.579 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:57.579 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:13:57.579 Nvme1n1 : 1.00 243478.75 951.09 0.00 0.00 524.08 219.05 641.11 00:13:57.579 =================================================================================================================== 00:13:57.579 Total : 243478.75 951.09 0.00 0.00 524.08 219.05 641.11 00:13:57.579 22:30:21 nvmf_tcp.nvmf_bdev_io_wait -- 
target/bdev_io_wait.sh@38 -- # wait 4156040 00:13:57.838 22:30:21 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 4156042 00:13:57.838 22:30:21 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 4156045 00:13:57.838 22:30:21 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:57.838 22:30:21 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:57.838 22:30:21 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:57.838 22:30:21 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:57.838 22:30:21 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:13:57.838 22:30:21 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:13:57.838 22:30:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:57.838 22:30:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # sync 00:13:57.838 22:30:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:57.838 22:30:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@120 -- # set +e 00:13:57.838 22:30:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:57.838 22:30:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:57.838 rmmod nvme_tcp 00:13:57.838 rmmod nvme_fabrics 00:13:57.838 rmmod nvme_keyring 00:13:57.838 22:30:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:57.838 22:30:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set -e 00:13:57.838 22:30:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # return 0 00:13:57.838 22:30:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # '[' -n 4155836 ']' 00:13:57.838 22:30:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # killprocess 4155836 00:13:57.838 22:30:21 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@948 -- # '[' -z 4155836 ']' 00:13:57.838 22:30:21 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # kill -0 4155836 00:13:57.838 22:30:21 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # uname 00:13:57.838 22:30:21 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:57.838 22:30:21 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4155836 00:13:57.838 22:30:21 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:57.838 22:30:21 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:57.838 22:30:21 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4155836' 00:13:57.838 killing process with pid 4155836 00:13:57.838 22:30:21 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@967 -- # kill 4155836 00:13:57.838 22:30:21 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # wait 4155836 00:13:58.097 22:30:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:58.097 22:30:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:58.097 22:30:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:58.097 22:30:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:58.097 22:30:21 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@278 -- # remove_spdk_ns 00:13:58.097 22:30:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:58.097 22:30:21 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:58.097 22:30:21 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:00.632 22:30:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:00.632 00:14:00.632 real 0m10.844s 00:14:00.632 user 0m19.740s 00:14:00.632 sys 0m5.604s 00:14:00.632 22:30:24 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:00.632 22:30:24 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:00.632 ************************************ 00:14:00.632 END TEST nvmf_bdev_io_wait 00:14:00.632 ************************************ 00:14:00.632 22:30:24 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:14:00.632 22:30:24 nvmf_tcp -- nvmf/nvmf.sh@51 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:14:00.632 22:30:24 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:00.632 22:30:24 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:00.632 22:30:24 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:00.632 ************************************ 00:14:00.632 START TEST nvmf_queue_depth 00:14:00.632 ************************************ 00:14:00.632 22:30:24 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:14:00.632 * Looking for test storage... 
00:14:00.632 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:00.632 22:30:24 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:00.632 22:30:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:14:00.632 22:30:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:00.632 22:30:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:00.632 22:30:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:00.632 22:30:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:00.632 22:30:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:00.632 22:30:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:00.632 22:30:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:00.632 22:30:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:00.632 22:30:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:00.633 22:30:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:00.633 22:30:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:14:00.633 22:30:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:14:00.633 22:30:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:00.633 22:30:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:00.633 22:30:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:00.633 22:30:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:00.633 22:30:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:00.633 22:30:24 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:00.633 22:30:24 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:00.633 22:30:24 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:00.633 22:30:24 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:00.633 22:30:24 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:00.633 22:30:24 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:00.633 22:30:24 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:14:00.633 22:30:24 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:00.633 22:30:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@47 -- # : 0 00:14:00.633 22:30:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:00.633 22:30:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:00.633 22:30:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:00.633 22:30:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:00.633 22:30:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:00.633 22:30:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:00.633 22:30:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:00.633 22:30:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:00.633 22:30:24 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:14:00.633 22:30:24 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:14:00.633 22:30:24 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:00.633 22:30:24 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:14:00.633 22:30:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:00.633 22:30:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:00.633 22:30:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:00.633 22:30:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:00.633 22:30:24 nvmf_tcp.nvmf_queue_depth -- 
nvmf/common.sh@412 -- # remove_spdk_ns 00:14:00.633 22:30:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:00.633 22:30:24 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:00.633 22:30:24 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:00.633 22:30:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:00.633 22:30:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:00.633 22:30:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@285 -- # xtrace_disable 00:14:00.633 22:30:24 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:14:05.907 22:30:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:05.907 22:30:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@291 -- # pci_devs=() 00:14:05.907 22:30:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:05.907 22:30:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:05.907 22:30:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:05.907 22:30:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:05.907 22:30:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:05.907 22:30:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@295 -- # net_devs=() 00:14:05.907 22:30:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:05.907 22:30:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@296 -- # e810=() 00:14:05.907 22:30:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@296 -- # local -ga e810 00:14:05.907 22:30:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@297 -- # x722=() 00:14:05.907 22:30:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@297 -- # local -ga x722 00:14:05.907 22:30:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@298 -- # mlx=() 00:14:05.907 22:30:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@298 -- # local -ga mlx 00:14:05.907 22:30:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:05.908 22:30:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:05.908 22:30:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:05.908 22:30:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:05.908 22:30:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:05.908 22:30:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:05.908 22:30:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:05.908 22:30:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:05.908 22:30:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:05.908 22:30:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:05.908 22:30:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:05.908 22:30:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:05.908 
22:30:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:05.908 22:30:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:05.908 22:30:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:05.908 22:30:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:05.908 22:30:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:05.908 22:30:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:05.908 22:30:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:14:05.908 Found 0000:86:00.0 (0x8086 - 0x159b) 00:14:05.908 22:30:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:05.908 22:30:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:05.908 22:30:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:05.908 22:30:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:05.908 22:30:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:05.908 22:30:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:05.908 22:30:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:14:05.908 Found 0000:86:00.1 (0x8086 - 0x159b) 00:14:05.908 22:30:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:05.908 22:30:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:05.908 22:30:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:05.908 22:30:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:05.908 22:30:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:05.908 22:30:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:05.908 22:30:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:05.908 22:30:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:05.908 22:30:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:05.908 22:30:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:05.908 22:30:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:05.908 22:30:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:05.908 22:30:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:05.908 22:30:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:05.908 22:30:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:05.908 22:30:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:14:05.908 Found net devices under 0000:86:00.0: cvl_0_0 00:14:05.908 22:30:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:05.908 22:30:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:05.908 22:30:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:05.908 22:30:29 nvmf_tcp.nvmf_queue_depth 
-- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:05.908 22:30:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:05.908 22:30:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:05.908 22:30:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:05.908 22:30:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:05.908 22:30:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:14:05.908 Found net devices under 0000:86:00.1: cvl_0_1 00:14:05.908 22:30:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:05.908 22:30:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:05.908 22:30:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # is_hw=yes 00:14:05.908 22:30:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:05.908 22:30:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:05.908 22:30:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:05.908 22:30:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:05.908 22:30:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:05.908 22:30:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:05.908 22:30:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:05.908 22:30:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:05.908 22:30:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:05.908 22:30:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:05.908 22:30:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:05.908 22:30:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:05.908 22:30:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:05.908 22:30:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:05.908 22:30:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:05.908 22:30:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:05.908 22:30:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:05.908 22:30:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:05.908 22:30:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:05.908 22:30:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:05.908 22:30:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:05.908 22:30:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:05.908 22:30:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:05.908 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:14:05.908 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.200 ms 00:14:05.908 00:14:05.908 --- 10.0.0.2 ping statistics --- 00:14:05.908 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:05.908 rtt min/avg/max/mdev = 0.200/0.200/0.200/0.000 ms 00:14:05.908 22:30:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:05.908 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:05.908 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.193 ms 00:14:05.908 00:14:05.908 --- 10.0.0.1 ping statistics --- 00:14:05.908 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:05.908 rtt min/avg/max/mdev = 0.193/0.193/0.193/0.000 ms 00:14:05.908 22:30:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:05.908 22:30:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@422 -- # return 0 00:14:05.908 22:30:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:05.908 22:30:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:05.908 22:30:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:05.908 22:30:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:05.908 22:30:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:05.908 22:30:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:05.908 22:30:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:05.908 22:30:29 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:14:05.908 22:30:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:05.908 22:30:29 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:05.908 22:30:29 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:14:05.908 22:30:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@481 -- # nvmfpid=4159818 00:14:05.908 22:30:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@482 -- # waitforlisten 4159818 00:14:05.908 22:30:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:05.908 22:30:29 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@829 -- # '[' -z 4159818 ']' 00:14:05.908 22:30:29 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:05.908 22:30:29 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:05.908 22:30:29 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:05.908 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:05.908 22:30:29 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:05.908 22:30:29 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:14:05.908 [2024-07-15 22:30:29.572785] Starting SPDK v24.09-pre git sha1 f8598a71f / DPDK 24.03.0 initialization... 
00:14:05.908 [2024-07-15 22:30:29.572832] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:05.908 EAL: No free 2048 kB hugepages reported on node 1 00:14:05.908 [2024-07-15 22:30:29.631034] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:05.908 [2024-07-15 22:30:29.710025] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:05.908 [2024-07-15 22:30:29.710059] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:05.908 [2024-07-15 22:30:29.710068] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:05.908 [2024-07-15 22:30:29.710074] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:05.908 [2024-07-15 22:30:29.710079] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:05.908 [2024-07-15 22:30:29.710096] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:06.475 22:30:30 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:06.475 22:30:30 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@862 -- # return 0 00:14:06.475 22:30:30 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:06.475 22:30:30 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:06.475 22:30:30 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:14:06.475 22:30:30 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:06.475 22:30:30 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:06.475 22:30:30 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:06.475 22:30:30 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:14:06.475 [2024-07-15 22:30:30.413407] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:06.475 22:30:30 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:06.475 22:30:30 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:06.475 22:30:30 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:06.475 22:30:30 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:14:06.753 Malloc0 00:14:06.753 22:30:30 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:06.753 22:30:30 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:14:06.753 22:30:30 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:06.753 22:30:30 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:14:06.753 22:30:30 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:06.753 22:30:30 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:06.753 22:30:30 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:06.753 
22:30:30 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:14:06.753 22:30:30 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:06.753 22:30:30 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:06.753 22:30:30 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:06.753 22:30:30 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:14:06.753 [2024-07-15 22:30:30.474276] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:06.753 22:30:30 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:06.753 22:30:30 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=4160060 00:14:06.753 22:30:30 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:14:06.753 22:30:30 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:06.753 22:30:30 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 4160060 /var/tmp/bdevperf.sock 00:14:06.753 22:30:30 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@829 -- # '[' -z 4160060 ']' 00:14:06.753 22:30:30 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:06.753 22:30:30 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:06.753 22:30:30 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:06.753 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:06.753 22:30:30 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:06.754 22:30:30 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:14:06.754 [2024-07-15 22:30:30.519679] Starting SPDK v24.09-pre git sha1 f8598a71f / DPDK 24.03.0 initialization... 
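With the network in place, queue_depth.sh provisions the target entirely over JSON-RPC: a TCP transport, a 64 MiB malloc bdev with 512-byte blocks, a subsystem carrying that bdev as a namespace, and a listener on the namespaced address. Collapsed from the rpc_cmd calls traced above (rpc_cmd is the suite's wrapper around scripts/rpc.py):

    rpc.py nvmf_create_transport -t tcp -o -u 8192      # -u 8192: 8 KiB transport I/O unit
    rpc.py bdev_malloc_create 64 512 -b Malloc0         # 64 MiB bdev, 512-byte blocks
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The bdevperf process starting below runs with -z (start idle and wait for an RPC) on its own socket, /var/tmp/bdevperf.sock, queueing 1024 outstanding 4 KiB verify I/Os for 10 seconds (-q 1024 -o 4096 -w verify -t 10); the trace then attaches the NVMe-oF controller to it with bdev_nvme_attach_controller and kicks the run off via bdevperf.py perform_tests.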
00:14:06.754 [2024-07-15 22:30:30.519722] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4160060 ] 00:14:06.754 EAL: No free 2048 kB hugepages reported on node 1 00:14:06.754 [2024-07-15 22:30:30.574035] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:06.754 [2024-07-15 22:30:30.652771] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:07.363 22:30:31 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:07.363 22:30:31 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@862 -- # return 0 00:14:07.363 22:30:31 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:14:07.363 22:30:31 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:07.363 22:30:31 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:14:07.621 NVMe0n1 00:14:07.621 22:30:31 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:07.621 22:30:31 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:07.621 Running I/O for 10 seconds... 00:14:19.826 00:14:19.826 Latency(us) 00:14:19.826 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:19.826 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:14:19.826 Verification LBA range: start 0x0 length 0x4000 00:14:19.826 NVMe0n1 : 10.05 12326.78 48.15 0.00 0.00 82817.56 13392.14 55848.07 00:14:19.826 =================================================================================================================== 00:14:19.826 Total : 12326.78 48.15 0.00 0.00 82817.56 13392.14 55848.07 00:14:19.826 0 00:14:19.826 22:30:41 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 4160060 00:14:19.826 22:30:41 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@948 -- # '[' -z 4160060 ']' 00:14:19.826 22:30:41 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # kill -0 4160060 00:14:19.826 22:30:41 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # uname 00:14:19.826 22:30:41 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:19.826 22:30:41 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4160060 00:14:19.826 22:30:41 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:19.826 22:30:41 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:19.826 22:30:41 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4160060' 00:14:19.826 killing process with pid 4160060 00:14:19.826 22:30:41 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@967 -- # kill 4160060 00:14:19.826 Received shutdown signal, test time was about 10.000000 seconds 00:14:19.826 00:14:19.826 Latency(us) 00:14:19.826 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:19.826 
=================================================================================================================== 00:14:19.826 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:19.826 22:30:41 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@972 -- # wait 4160060 00:14:19.826 22:30:41 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:14:19.826 22:30:41 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:14:19.826 22:30:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:19.826 22:30:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@117 -- # sync 00:14:19.826 22:30:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:19.826 22:30:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@120 -- # set +e 00:14:19.826 22:30:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:19.826 22:30:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:19.826 rmmod nvme_tcp 00:14:19.826 rmmod nvme_fabrics 00:14:19.826 rmmod nvme_keyring 00:14:19.826 22:30:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:19.826 22:30:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@124 -- # set -e 00:14:19.826 22:30:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@125 -- # return 0 00:14:19.826 22:30:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@489 -- # '[' -n 4159818 ']' 00:14:19.826 22:30:41 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@490 -- # killprocess 4159818 00:14:19.826 22:30:41 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@948 -- # '[' -z 4159818 ']' 00:14:19.826 22:30:41 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # kill -0 4159818 00:14:19.826 22:30:41 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # uname 00:14:19.826 22:30:41 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:19.826 22:30:41 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4159818 00:14:19.826 22:30:41 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:14:19.826 22:30:41 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:14:19.826 22:30:41 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4159818' 00:14:19.826 killing process with pid 4159818 00:14:19.826 22:30:41 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@967 -- # kill 4159818 00:14:19.826 22:30:41 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@972 -- # wait 4159818 00:14:19.826 22:30:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:19.826 22:30:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:19.826 22:30:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:19.826 22:30:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:19.826 22:30:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:19.826 22:30:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:19.826 22:30:42 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:19.826 22:30:42 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:20.394 22:30:44 nvmf_tcp.nvmf_queue_depth -- 
nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:20.394 00:14:20.394 real 0m20.130s 00:14:20.394 user 0m24.804s 00:14:20.394 sys 0m5.548s 00:14:20.394 22:30:44 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:20.394 22:30:44 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:14:20.394 ************************************ 00:14:20.394 END TEST nvmf_queue_depth 00:14:20.394 ************************************ 00:14:20.394 22:30:44 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:14:20.394 22:30:44 nvmf_tcp -- nvmf/nvmf.sh@52 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:14:20.394 22:30:44 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:20.394 22:30:44 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:20.394 22:30:44 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:20.394 ************************************ 00:14:20.394 START TEST nvmf_target_multipath 00:14:20.394 ************************************ 00:14:20.394 22:30:44 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:14:20.654 * Looking for test storage... 00:14:20.654 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:20.654 22:30:44 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:20.654 22:30:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:14:20.654 22:30:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:20.654 22:30:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:20.654 22:30:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:20.654 22:30:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:20.654 22:30:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:20.654 22:30:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:20.654 22:30:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:20.654 22:30:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:20.654 22:30:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:20.655 22:30:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:20.655 22:30:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:14:20.655 22:30:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:14:20.655 22:30:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:20.655 22:30:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:20.655 22:30:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:20.655 22:30:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:20.655 22:30:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@45 -- 
# source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:20.655 22:30:44 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:20.655 22:30:44 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:20.655 22:30:44 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:20.655 22:30:44 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:20.655 22:30:44 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:20.655 22:30:44 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:20.655 22:30:44 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:14:20.655 22:30:44 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:20.655 22:30:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@47 -- # : 0 00:14:20.655 22:30:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:20.655 22:30:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:20.655 22:30:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:20.655 22:30:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:20.655 22:30:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:14:20.655 22:30:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:20.655 22:30:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:20.655 22:30:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:20.655 22:30:44 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:20.655 22:30:44 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:20.655 22:30:44 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:14:20.655 22:30:44 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:20.655 22:30:44 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:14:20.655 22:30:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:20.655 22:30:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:20.655 22:30:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:20.655 22:30:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:20.655 22:30:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:20.655 22:30:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:20.655 22:30:44 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:20.655 22:30:44 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:20.655 22:30:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:20.655 22:30:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:20.655 22:30:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@285 -- # xtrace_disable 00:14:20.655 22:30:44 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:14:25.933 22:30:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:25.933 22:30:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@291 -- # pci_devs=() 00:14:25.933 22:30:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:25.933 22:30:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:25.933 22:30:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:25.933 22:30:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:25.933 22:30:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:25.933 22:30:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@295 -- # net_devs=() 00:14:25.933 22:30:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:25.933 22:30:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@296 -- # e810=() 00:14:25.933 22:30:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@296 -- # local -ga e810 00:14:25.933 22:30:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@297 -- # x722=() 00:14:25.933 22:30:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@297 -- # local -ga x722 00:14:25.933 22:30:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@298 -- # mlx=() 00:14:25.933 22:30:49 nvmf_tcp.nvmf_target_multipath 
-- nvmf/common.sh@298 -- # local -ga mlx 00:14:25.933 22:30:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:25.933 22:30:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:25.933 22:30:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:25.933 22:30:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:25.933 22:30:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:25.933 22:30:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:25.933 22:30:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:25.933 22:30:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:25.933 22:30:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:25.933 22:30:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:25.933 22:30:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:25.933 22:30:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:25.933 22:30:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:25.933 22:30:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:25.933 22:30:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:25.933 22:30:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:25.933 22:30:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:25.933 22:30:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:25.933 22:30:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:14:25.933 Found 0000:86:00.0 (0x8086 - 0x159b) 00:14:25.933 22:30:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:25.933 22:30:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:25.933 22:30:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:25.933 22:30:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:25.933 22:30:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:25.933 22:30:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:25.933 22:30:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:14:25.933 Found 0000:86:00.1 (0x8086 - 0x159b) 00:14:25.933 22:30:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:25.933 22:30:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:25.934 22:30:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:25.934 22:30:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:25.934 22:30:49 nvmf_tcp.nvmf_target_multipath -- 
nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:25.934 22:30:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:25.934 22:30:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:25.934 22:30:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:25.934 22:30:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:25.934 22:30:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:25.934 22:30:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:25.934 22:30:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:25.934 22:30:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:25.934 22:30:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:25.934 22:30:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:25.934 22:30:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:14:25.934 Found net devices under 0000:86:00.0: cvl_0_0 00:14:25.934 22:30:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:25.934 22:30:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:25.934 22:30:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:25.934 22:30:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:25.934 22:30:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:25.934 22:30:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:25.934 22:30:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:25.934 22:30:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:25.934 22:30:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:14:25.934 Found net devices under 0000:86:00.1: cvl_0_1 00:14:25.934 22:30:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:25.934 22:30:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:25.934 22:30:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # is_hw=yes 00:14:25.934 22:30:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:25.934 22:30:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:25.934 22:30:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:25.934 22:30:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:25.934 22:30:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:25.934 22:30:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:25.934 22:30:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:25.934 22:30:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:25.934 22:30:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@237 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:25.934 22:30:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:25.934 22:30:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:25.934 22:30:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:25.934 22:30:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:25.934 22:30:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:25.934 22:30:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:25.934 22:30:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:25.934 22:30:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:25.934 22:30:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:25.934 22:30:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:25.934 22:30:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:25.934 22:30:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:25.934 22:30:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:25.934 22:30:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:25.934 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:25.934 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.177 ms 00:14:25.934 00:14:25.934 --- 10.0.0.2 ping statistics --- 00:14:25.934 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:25.934 rtt min/avg/max/mdev = 0.177/0.177/0.177/0.000 ms 00:14:25.934 22:30:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:25.934 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:25.934 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.182 ms 00:14:25.934 00:14:25.934 --- 10.0.0.1 ping statistics --- 00:14:25.934 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:25.934 rtt min/avg/max/mdev = 0.182/0.182/0.182/0.000 ms 00:14:25.934 22:30:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:25.934 22:30:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@422 -- # return 0 00:14:25.934 22:30:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:25.934 22:30:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:25.934 22:30:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:25.934 22:30:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:25.934 22:30:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:25.934 22:30:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:25.934 22:30:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:25.934 22:30:49 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:14:25.934 22:30:49 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:14:25.934 only one NIC for nvmf test 00:14:25.934 22:30:49 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:14:25.934 22:30:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:25.934 22:30:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:14:25.934 22:30:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:25.934 22:30:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:14:25.934 22:30:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:25.934 22:30:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:25.934 rmmod nvme_tcp 00:14:25.934 rmmod nvme_fabrics 00:14:25.934 rmmod nvme_keyring 00:14:25.934 22:30:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:25.934 22:30:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:14:25.934 22:30:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:14:25.934 22:30:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:14:25.934 22:30:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:25.934 22:30:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:25.934 22:30:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:25.934 22:30:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:25.934 22:30:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:25.934 22:30:49 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:25.934 22:30:49 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:25.934 22:30:49 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:27.841 22:30:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush 
cvl_0_1 00:14:27.841 22:30:51 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:14:27.841 22:30:51 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:14:27.841 22:30:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:27.841 22:30:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:14:27.841 22:30:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:27.841 22:30:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:14:27.841 22:30:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:27.841 22:30:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:27.841 22:30:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:27.841 22:30:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:14:27.841 22:30:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:14:27.841 22:30:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:14:27.841 22:30:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:27.841 22:30:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:27.841 22:30:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:27.841 22:30:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:27.841 22:30:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:27.841 22:30:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:27.841 22:30:51 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:27.841 22:30:51 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:27.841 22:30:51 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:27.841 00:14:27.841 real 0m7.450s 00:14:27.841 user 0m1.425s 00:14:27.841 sys 0m3.989s 00:14:27.841 22:30:51 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:27.841 22:30:51 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:14:27.841 ************************************ 00:14:27.841 END TEST nvmf_target_multipath 00:14:27.841 ************************************ 00:14:27.841 22:30:51 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:14:27.841 22:30:51 nvmf_tcp -- nvmf/nvmf.sh@53 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:14:27.841 22:30:51 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:27.841 22:30:51 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:27.841 22:30:51 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:28.100 ************************************ 00:14:28.101 START TEST nvmf_zcopy 00:14:28.101 ************************************ 00:14:28.101 22:30:51 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:14:28.101 * Looking for test storage... 
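The multipath test that just finished is effectively a no-op on this rig: bring-up found the usual single cvl_0_0/cvl_0_1 port pair, NVMF_SECOND_TARGET_IP stayed empty (common.sh@240 above), and multipath.sh@45 took its early-exit branch, tearing the network down and returning success after about 7.5 seconds. The guard amounts to the following (paraphrased; the trace only shows the expanded '[' -z ']' test, so the variable name is inferred):

    if [ -z "$NVMF_SECOND_TARGET_IP" ]; then    # no second path available
        echo 'only one NIC for nvmf test'
        nvmftestfini
        exit 0
    fi

The zcopy test beginning here repeats the same storage-discovery and network bring-up preamble before creating its own transport.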
00:14:28.101 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:28.101 22:30:51 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:28.101 22:30:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:14:28.101 22:30:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:28.101 22:30:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:28.101 22:30:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:28.101 22:30:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:28.101 22:30:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:28.101 22:30:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:28.101 22:30:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:28.101 22:30:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:28.101 22:30:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:28.101 22:30:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:28.101 22:30:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:14:28.101 22:30:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:14:28.101 22:30:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:28.101 22:30:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:28.101 22:30:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:28.101 22:30:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:28.101 22:30:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:28.101 22:30:51 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:28.101 22:30:51 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:28.101 22:30:51 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:28.101 22:30:51 nvmf_tcp.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:28.101 22:30:51 nvmf_tcp.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:14:28.101 22:30:51 nvmf_tcp.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:28.101 22:30:51 nvmf_tcp.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:14:28.101 22:30:51 nvmf_tcp.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:28.101 22:30:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@47 -- # : 0 00:14:28.101 22:30:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:28.101 22:30:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:28.101 22:30:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:28.101 22:30:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:28.101 22:30:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:28.101 22:30:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:28.101 22:30:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:28.101 22:30:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:28.101 22:30:51 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:14:28.101 22:30:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:28.101 22:30:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:28.101 22:30:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:28.101 22:30:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:28.101 22:30:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:28.101 22:30:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:28.101 22:30:51 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:28.101 22:30:51 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:28.101 22:30:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:28.101 22:30:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:28.101 22:30:51 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@285 -- # xtrace_disable 00:14:28.101 22:30:51 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:33.374 22:30:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:33.374 22:30:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@291 -- # pci_devs=() 00:14:33.374 22:30:56 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@291 -- # local -a pci_devs 00:14:33.374 22:30:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:33.374 22:30:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:33.374 22:30:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:33.374 22:30:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:33.374 22:30:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@295 -- # net_devs=() 00:14:33.374 22:30:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:33.374 22:30:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@296 -- # e810=() 00:14:33.374 22:30:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@296 -- # local -ga e810 00:14:33.374 22:30:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@297 -- # x722=() 00:14:33.374 22:30:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@297 -- # local -ga x722 00:14:33.374 22:30:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@298 -- # mlx=() 00:14:33.374 22:30:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@298 -- # local -ga mlx 00:14:33.374 22:30:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:33.374 22:30:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:33.374 22:30:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:33.374 22:30:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:33.374 22:30:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:33.374 22:30:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:33.374 22:30:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:33.374 22:30:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:33.374 22:30:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:33.374 22:30:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:33.374 22:30:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:33.374 22:30:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:33.374 22:30:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:33.374 22:30:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:33.374 22:30:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:33.374 22:30:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:33.374 22:30:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:33.374 22:30:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:33.374 22:30:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:14:33.374 Found 0000:86:00.0 (0x8086 - 0x159b) 00:14:33.374 22:30:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:33.374 22:30:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:33.374 22:30:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:33.374 22:30:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:33.374 22:30:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:33.374 
22:30:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:33.374 22:30:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:14:33.374 Found 0000:86:00.1 (0x8086 - 0x159b) 00:14:33.374 22:30:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:33.374 22:30:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:33.374 22:30:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:33.374 22:30:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:33.374 22:30:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:33.374 22:30:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:33.374 22:30:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:33.374 22:30:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:33.374 22:30:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:33.374 22:30:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:33.374 22:30:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:33.374 22:30:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:33.374 22:30:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:33.374 22:30:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:33.374 22:30:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:33.374 22:30:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:14:33.374 Found net devices under 0000:86:00.0: cvl_0_0 00:14:33.374 22:30:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:33.374 22:30:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:33.374 22:30:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:33.374 22:30:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:33.374 22:30:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:33.374 22:30:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:33.374 22:30:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:33.374 22:30:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:33.374 22:30:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:14:33.374 Found net devices under 0000:86:00.1: cvl_0_1 00:14:33.374 22:30:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:33.374 22:30:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:33.374 22:30:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # is_hw=yes 00:14:33.374 22:30:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:33.374 22:30:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:33.374 22:30:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:33.374 22:30:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:33.374 22:30:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:33.374 22:30:56 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:33.374 22:30:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:33.374 22:30:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:33.374 22:30:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:33.374 22:30:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:33.374 22:30:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:33.374 22:30:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:33.374 22:30:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:33.374 22:30:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:33.374 22:30:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:33.374 22:30:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:33.374 22:30:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:33.374 22:30:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:33.374 22:30:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:33.374 22:30:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:33.374 22:30:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:33.374 22:30:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:33.374 22:30:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:33.374 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:33.374 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.269 ms 00:14:33.374 00:14:33.374 --- 10.0.0.2 ping statistics --- 00:14:33.374 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:33.374 rtt min/avg/max/mdev = 0.269/0.269/0.269/0.000 ms 00:14:33.374 22:30:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:33.374 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:33.374 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.225 ms 00:14:33.374 00:14:33.374 --- 10.0.0.1 ping statistics --- 00:14:33.374 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:33.374 rtt min/avg/max/mdev = 0.225/0.225/0.225/0.000 ms 00:14:33.374 22:30:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:33.374 22:30:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@422 -- # return 0 00:14:33.374 22:30:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:33.374 22:30:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:33.374 22:30:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:33.374 22:30:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:33.374 22:30:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:33.374 22:30:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:33.374 22:30:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:33.374 22:30:57 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:14:33.374 22:30:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:33.374 22:30:57 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:33.374 22:30:57 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:33.374 22:30:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@481 -- # nvmfpid=4168721 00:14:33.374 22:30:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@482 -- # waitforlisten 4168721 00:14:33.374 22:30:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:33.374 22:30:57 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@829 -- # '[' -z 4168721 ']' 00:14:33.374 22:30:57 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:33.374 22:30:57 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:33.374 22:30:57 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:33.374 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:33.374 22:30:57 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:33.374 22:30:57 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:33.374 [2024-07-15 22:30:57.214378] Starting SPDK v24.09-pre git sha1 f8598a71f / DPDK 24.03.0 initialization... 00:14:33.375 [2024-07-15 22:30:57.214420] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:33.375 EAL: No free 2048 kB hugepages reported on node 1 00:14:33.375 [2024-07-15 22:30:57.269648] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:33.634 [2024-07-15 22:30:57.348294] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:33.634 [2024-07-15 22:30:57.348328] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:14:33.634 [2024-07-15 22:30:57.348335] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:33.634 [2024-07-15 22:30:57.348341] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:33.634 [2024-07-15 22:30:57.348347] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:33.634 [2024-07-15 22:30:57.348363] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:34.204 22:30:58 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:34.204 22:30:58 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@862 -- # return 0 00:14:34.204 22:30:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:34.204 22:30:58 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:34.204 22:30:58 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:34.204 22:30:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:34.204 22:30:58 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:14:34.204 22:30:58 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:14:34.204 22:30:58 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:34.204 22:30:58 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:34.204 [2024-07-15 22:30:58.051397] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:34.204 22:30:58 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:34.204 22:30:58 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:34.204 22:30:58 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:34.204 22:30:58 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:34.204 22:30:58 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:34.204 22:30:58 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:34.204 22:30:58 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:34.204 22:30:58 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:34.204 [2024-07-15 22:30:58.071528] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:34.204 22:30:58 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:34.204 22:30:58 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:34.204 22:30:58 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:34.204 22:30:58 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:34.204 22:30:58 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:34.204 22:30:58 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:14:34.204 22:30:58 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:34.204 22:30:58 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:34.204 malloc0 00:14:34.204 22:30:58 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:34.204 
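With the target up, the test configures it entirely over JSON-RPC; rpc_cmd in the trace is effectively the harness wrapper around SPDK's scripts/rpc.py talking to /var/tmp/spdk.sock. Replayed by hand, the sequence above, plus the namespace attach that immediately follows, would look roughly like this (the rpc.py path is illustrative; the flags are copied from the trace):

  RPC=/path/to/spdk/scripts/rpc.py             # hypothetical path, adjust to your checkout
  $RPC nvmf_create_transport -t tcp -o -c 0 --zcopy    # TCP transport, zero-copy on, -c 0: no in-capsule data
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  $RPC bdev_malloc_create 32 4096 -b malloc0           # 32 MiB RAM-backed bdev, 4 KiB blocks
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1   # expose it as NSID 1

On nvmf_create_subsystem, -a allows any host NQN to connect, -s sets the serial number, and -m 10 caps the subsystem at 10 namespaces.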
22:30:58 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:14:34.204 22:30:58 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:34.204 22:30:58 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:34.204 22:30:58 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:34.204 22:30:58 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:14:34.204 22:30:58 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:14:34.204 22:30:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:14:34.204 22:30:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:14:34.204 22:30:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:14:34.204 22:30:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:14:34.204 { 00:14:34.204 "params": { 00:14:34.204 "name": "Nvme$subsystem", 00:14:34.204 "trtype": "$TEST_TRANSPORT", 00:14:34.204 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:34.204 "adrfam": "ipv4", 00:14:34.204 "trsvcid": "$NVMF_PORT", 00:14:34.204 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:34.204 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:34.204 "hdgst": ${hdgst:-false}, 00:14:34.204 "ddgst": ${ddgst:-false} 00:14:34.204 }, 00:14:34.204 "method": "bdev_nvme_attach_controller" 00:14:34.204 } 00:14:34.204 EOF 00:14:34.204 )") 00:14:34.204 22:30:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:14:34.204 22:30:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 00:14:34.204 22:30:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:14:34.204 22:30:58 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:14:34.204 "params": { 00:14:34.204 "name": "Nvme1", 00:14:34.204 "trtype": "tcp", 00:14:34.204 "traddr": "10.0.0.2", 00:14:34.204 "adrfam": "ipv4", 00:14:34.204 "trsvcid": "4420", 00:14:34.204 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:34.204 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:34.204 "hdgst": false, 00:14:34.204 "ddgst": false 00:14:34.204 }, 00:14:34.204 "method": "bdev_nvme_attach_controller" 00:14:34.204 }' 00:14:34.204 [2024-07-15 22:30:58.151772] Starting SPDK v24.09-pre git sha1 f8598a71f / DPDK 24.03.0 initialization... 00:14:34.204 [2024-07-15 22:30:58.151813] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4168964 ] 00:14:34.463 EAL: No free 2048 kB hugepages reported on node 1 00:14:34.463 [2024-07-15 22:30:58.206115] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:34.463 [2024-07-15 22:30:58.279657] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:34.721 Running I/O for 10 seconds... 
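The gen_nvmf_target_json trace above shows how the harness hands bdevperf its configuration: one bdev_nvme_attach_controller entry is rendered per subsystem, the result is validated with jq, and the document is passed through a process substitution, which is why bdevperf sees --json /dev/fd/62. A simplified sketch follows; the hard-coded tcp/10.0.0.2/4420 values stand in for the $TEST_TRANSPORT, $NVMF_FIRST_TARGET_IP, and $NVMF_PORT variables the real helper reads, and the outer "subsystems"/"bdev" wrapper is inferred from SPDK's JSON-config format, since the trace prints only the inner entry:

  # Simplified sketch of gen_nvmf_target_json as traced above.
  gen_nvmf_target_json() {
      local subsystem config=""
      local tmpl='{
        "params": {
          "name": "Nvme%s",
          "trtype": "tcp",
          "traddr": "10.0.0.2",
          "adrfam": "ipv4",
          "trsvcid": "4420",
          "subnqn": "nqn.2016-06.io.spdk:cnode%s",
          "hostnqn": "nqn.2016-06.io.spdk:host%s",
          "hdgst": false,
          "ddgst": false
        },
        "method": "bdev_nvme_attach_controller"
      }'
      # One attach entry per requested subsystem index (defaults to 1).
      for subsystem in "${@:-1}"; do
          config+="${config:+,}$(printf "$tmpl" "$subsystem" "$subsystem" "$subsystem")"
      done
      # Validate and pretty-print, matching the "jq ." step in the trace.
      jq . <<<"{\"subsystems\":[{\"subsystem\":\"bdev\",\"config\":[$config]}]}"
  }

  # Usage matching the 10-second verify run above; <(...) is why the config
  # shows up as a /dev/fd path in the log. Path to bdevperf is illustrative.
  ./build/examples/bdevperf --json <(gen_nvmf_target_json) -t 10 -q 128 -w verify -o 8192

Feeding the config through /dev/fd keeps the per-run credentials and addresses out of any on-disk file, which suits a shared CI machine.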
00:14:44.782
00:14:44.782                                                                                                 Latency(us)
00:14:44.782 Device Information                                                       : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:14:44.782 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192)
00:14:44.782          Verification LBA range: start 0x0 length 0x1000
00:14:44.782          Nvme1n1                                                         :      10.01    8681.07      67.82       0.00     0.00   14702.41    2635.69   26898.25
00:14:44.782 ===================================================================================================================
00:14:44.782 Total                                                                    :               8681.07      67.82       0.00     0.00   14702.41    2635.69   26898.25
00:14:45.042 22:31:08 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=4170576 00:14:45.042 22:31:08 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:14:45.042 22:31:08 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:45.042 22:31:08 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:14:45.042 22:31:08 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:14:45.042 22:31:08 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:14:45.042 22:31:08 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:14:45.042 22:31:08 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:14:45.042 22:31:08 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:14:45.042 { 00:14:45.042 "params": { 00:14:45.043 "name": "Nvme$subsystem", 00:14:45.043 "trtype": "$TEST_TRANSPORT", 00:14:45.043 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:45.043 "adrfam": "ipv4", 00:14:45.043 "trsvcid": "$NVMF_PORT", 00:14:45.043 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:45.043 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:45.043 "hdgst": ${hdgst:-false}, 00:14:45.043 "ddgst": ${ddgst:-false} 00:14:45.043 }, 00:14:45.043 "method": "bdev_nvme_attach_controller" 00:14:45.043 } 00:14:45.043 EOF 00:14:45.043 )") 00:14:45.043 22:31:08 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:14:45.043 [2024-07-15 22:31:08.826289] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:45.043 [2024-07-15 22:31:08.826319] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:45.043 22:31:08 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq .
00:14:45.043 22:31:08 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:14:45.043 22:31:08 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:14:45.043 "params": { 00:14:45.043 "name": "Nvme1", 00:14:45.043 "trtype": "tcp", 00:14:45.043 "traddr": "10.0.0.2", 00:14:45.043 "adrfam": "ipv4", 00:14:45.043 "trsvcid": "4420", 00:14:45.043 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:45.043 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:45.043 "hdgst": false, 00:14:45.043 "ddgst": false 00:14:45.043 }, 00:14:45.043 "method": "bdev_nvme_attach_controller" 00:14:45.043 }' 00:14:45.043 [2024-07-15 22:31:08.838286] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:45.043 [2024-07-15 22:31:08.838298] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:45.043 [2024-07-15 22:31:08.846305] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:45.043 [2024-07-15 22:31:08.846319] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:45.043 [2024-07-15 22:31:08.858340] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:45.043 [2024-07-15 22:31:08.858349] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:45.043 [2024-07-15 22:31:08.861444] Starting SPDK v24.09-pre git sha1 f8598a71f / DPDK 24.03.0 initialization... 00:14:45.043 [2024-07-15 22:31:08.861483] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4170576 ] 00:14:45.043 [2024-07-15 22:31:08.870371] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:45.043 [2024-07-15 22:31:08.870381] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:45.043 [2024-07-15 22:31:08.882404] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:45.043 [2024-07-15 22:31:08.882413] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:45.043 EAL: No free 2048 kB hugepages reported on node 1 00:14:45.043 [2024-07-15 22:31:08.894434] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:45.043 [2024-07-15 22:31:08.894443] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:45.043 [2024-07-15 22:31:08.906471] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:45.043 [2024-07-15 22:31:08.906480] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:45.043 [2024-07-15 22:31:08.915019] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:45.043 [2024-07-15 22:31:08.918505] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:45.043 [2024-07-15 22:31:08.918514] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:45.043 [2024-07-15 22:31:08.930539] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:45.043 [2024-07-15 22:31:08.930552] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:45.043 [2024-07-15 22:31:08.942570] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:45.043 [2024-07-15 22:31:08.942582] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:45.043 [2024-07-15 22:31:08.954603] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:45.043 [2024-07-15 22:31:08.954622] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:45.043 [2024-07-15 22:31:08.966635] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:45.043 [2024-07-15 22:31:08.966651] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:45.043 [2024-07-15 22:31:08.978666] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:45.043 [2024-07-15 22:31:08.978676] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:45.043 [2024-07-15 22:31:08.990701] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:45.043 [2024-07-15 22:31:08.990713] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:45.043 [2024-07-15 22:31:08.991518] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:45.043 [2024-07-15 22:31:09.002735] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:45.043 [2024-07-15 22:31:09.002751] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:45.303 [2024-07-15 22:31:09.014771] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:45.303 [2024-07-15 22:31:09.014789] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:45.303 [2024-07-15 22:31:09.026804] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:45.303 [2024-07-15 22:31:09.026816] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:45.303 [2024-07-15 22:31:09.038830] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:45.303 [2024-07-15 22:31:09.038847] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:45.303 [2024-07-15 22:31:09.050862] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:45.303 [2024-07-15 22:31:09.050874] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:45.303 [2024-07-15 22:31:09.062893] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:45.303 [2024-07-15 22:31:09.062904] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:45.303 [2024-07-15 22:31:09.074936] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:45.303 [2024-07-15 22:31:09.074955] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:45.303 [2024-07-15 22:31:09.086964] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:45.303 [2024-07-15 22:31:09.086978] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:45.303 [2024-07-15 22:31:09.098998] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:45.303 [2024-07-15 22:31:09.099013] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:45.303 [2024-07-15 22:31:09.111027] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:45.303 [2024-07-15 22:31:09.111038] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: 
Unable to add namespace 00:14:45.303 [2024-07-15 22:31:09.119045] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:45.303 [2024-07-15 22:31:09.119054] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:45.303 [2024-07-15 22:31:09.131076] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:45.303 [2024-07-15 22:31:09.131086] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:45.303 [2024-07-15 22:31:09.139099] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:45.303 [2024-07-15 22:31:09.139110] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:45.303 [2024-07-15 22:31:09.147123] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:45.303 [2024-07-15 22:31:09.147137] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:45.303 [2024-07-15 22:31:09.155147] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:45.303 [2024-07-15 22:31:09.155161] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:45.303 [2024-07-15 22:31:09.163164] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:45.303 [2024-07-15 22:31:09.163175] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:45.303 [2024-07-15 22:31:09.175202] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:45.303 [2024-07-15 22:31:09.175219] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:45.303 Running I/O for 5 seconds... 00:14:45.303 [2024-07-15 22:31:09.187234] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:45.303 [2024-07-15 22:31:09.187246] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:45.303 [2024-07-15 22:31:09.199725] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:45.303 [2024-07-15 22:31:09.199744] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:45.303 [2024-07-15 22:31:09.214070] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:45.303 [2024-07-15 22:31:09.214089] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:45.303 [2024-07-15 22:31:09.224997] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:45.303 [2024-07-15 22:31:09.225016] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:45.303 [2024-07-15 22:31:09.239645] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:45.303 [2024-07-15 22:31:09.239664] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:45.303 [2024-07-15 22:31:09.253502] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:45.303 [2024-07-15 22:31:09.253521] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:45.303 [2024-07-15 22:31:09.267558] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:45.303 [2024-07-15 22:31:09.267577] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:45.563 [2024-07-15 22:31:09.281896] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:45.563 [2024-07-15 22:31:09.281915] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:45.563 [2024-07-15 22:31:09.289550] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:45.563 [2024-07-15 22:31:09.289569] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:45.563 [2024-07-15 22:31:09.303492] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:45.563 [2024-07-15 22:31:09.303510] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:45.563 [2024-07-15 22:31:09.312440] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:45.563 [2024-07-15 22:31:09.312458] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:45.563 [2024-07-15 22:31:09.321438] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:45.563 [2024-07-15 22:31:09.321456] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:45.563 [2024-07-15 22:31:09.336149] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:45.563 [2024-07-15 22:31:09.336165] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:45.563 [2024-07-15 22:31:09.351499] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:45.563 [2024-07-15 22:31:09.351517] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:45.563 [2024-07-15 22:31:09.365887] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:45.563 [2024-07-15 22:31:09.365904] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:45.563 [2024-07-15 22:31:09.377691] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:45.563 [2024-07-15 22:31:09.377709] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:45.563 [2024-07-15 22:31:09.391900] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:45.563 [2024-07-15 22:31:09.391918] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:45.563 [2024-07-15 22:31:09.405889] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:45.563 [2024-07-15 22:31:09.405907] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:45.563 [2024-07-15 22:31:09.419811] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:45.563 [2024-07-15 22:31:09.419829] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:45.563 [2024-07-15 22:31:09.431769] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:45.563 [2024-07-15 22:31:09.431787] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:45.563 [2024-07-15 22:31:09.440890] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:45.563 [2024-07-15 22:31:09.440908] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:45.563 [2024-07-15 22:31:09.449427] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:45.563 [2024-07-15 22:31:09.449445] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:45.563 [2024-07-15 22:31:09.458771] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:45.563 [2024-07-15 22:31:09.458789] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:45.563 [2024-07-15 22:31:09.468105] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:45.563 [2024-07-15 22:31:09.468121] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:45.563 [2024-07-15 22:31:09.482482] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:45.563 [2024-07-15 22:31:09.482499] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:45.563 [2024-07-15 22:31:09.491349] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:45.563 [2024-07-15 22:31:09.491367] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:45.563 [2024-07-15 22:31:09.506082] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:45.563 [2024-07-15 22:31:09.506099] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:45.563 [2024-07-15 22:31:09.517057] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:45.563 [2024-07-15 22:31:09.517075] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:45.563 [2024-07-15 22:31:09.526247] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:45.563 [2024-07-15 22:31:09.526265] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:45.822 [2024-07-15 22:31:09.540419] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:45.822 [2024-07-15 22:31:09.540438] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:45.822 [2024-07-15 22:31:09.554350] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:45.822 [2024-07-15 22:31:09.554368] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:45.822 [2024-07-15 22:31:09.562983] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:45.822 [2024-07-15 22:31:09.563000] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:45.822 [2024-07-15 22:31:09.577893] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:45.822 [2024-07-15 22:31:09.577911] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:45.822 [2024-07-15 22:31:09.588653] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:45.822 [2024-07-15 22:31:09.588670] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:45.822 [2024-07-15 22:31:09.603213] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:45.822 [2024-07-15 22:31:09.603236] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:45.822 [2024-07-15 22:31:09.613587] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:45.822 [2024-07-15 22:31:09.613604] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:45.822 [2024-07-15 22:31:09.627801] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:45.822 [2024-07-15 22:31:09.627819] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:45.822 [2024-07-15 22:31:09.636842] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:45.822 [2024-07-15 22:31:09.636859] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:45.822 [2024-07-15 22:31:09.645706] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:45.822 [2024-07-15 22:31:09.645723] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:45.822 [2024-07-15 22:31:09.660560] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:45.822 [2024-07-15 22:31:09.660577] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:45.822 [2024-07-15 22:31:09.675748] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:45.822 [2024-07-15 22:31:09.675765] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:45.822 [2024-07-15 22:31:09.689656] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:45.822 [2024-07-15 22:31:09.689673] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:45.822 [2024-07-15 22:31:09.703562] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:45.822 [2024-07-15 22:31:09.703579] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:45.822 [2024-07-15 22:31:09.717489] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:45.822 [2024-07-15 22:31:09.717506] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:45.822 [2024-07-15 22:31:09.729076] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:45.822 [2024-07-15 22:31:09.729094] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:45.822 [2024-07-15 22:31:09.737974] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:45.822 [2024-07-15 22:31:09.737992] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:45.822 [2024-07-15 22:31:09.752326] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:45.822 [2024-07-15 22:31:09.752347] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:45.822 [2024-07-15 22:31:09.765977] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:45.822 [2024-07-15 22:31:09.765995] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:45.822 [2024-07-15 22:31:09.774746] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:45.822 [2024-07-15 22:31:09.774763] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:45.822 [2024-07-15 22:31:09.789433] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:45.822 [2024-07-15 22:31:09.789451] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:46.082 [2024-07-15 22:31:09.803771] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:46.082 [2024-07-15 22:31:09.803789] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:46.082 [2024-07-15 22:31:09.815212] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:46.082 [2024-07-15 22:31:09.815235] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:46.082 [2024-07-15 22:31:09.829171] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:46.082 [2024-07-15 22:31:09.829189] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:46.082 [2024-07-15 22:31:09.838127] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:46.082 [2024-07-15 22:31:09.838145] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:46.082 [2024-07-15 22:31:09.852831] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:46.082 [2024-07-15 22:31:09.852849] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:46.082 [2024-07-15 22:31:09.863578] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:46.082 [2024-07-15 22:31:09.863595] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:46.082 [2024-07-15 22:31:09.878182] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:46.082 [2024-07-15 22:31:09.878199] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:46.082 [2024-07-15 22:31:09.891812] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:46.082 [2024-07-15 22:31:09.891830] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:46.082 [2024-07-15 22:31:09.900884] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:46.082 [2024-07-15 22:31:09.900901] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:46.082 [2024-07-15 22:31:09.915243] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:46.082 [2024-07-15 22:31:09.915277] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:46.082 [2024-07-15 22:31:09.924146] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:46.082 [2024-07-15 22:31:09.924164] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:46.082 [2024-07-15 22:31:09.938462] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:46.082 [2024-07-15 22:31:09.938484] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:46.082 [2024-07-15 22:31:09.947375] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:46.082 [2024-07-15 22:31:09.947393] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:46.082 [2024-07-15 22:31:09.955993] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:46.082 [2024-07-15 22:31:09.956011] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:46.082 [2024-07-15 22:31:09.970588] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:46.082 [2024-07-15 22:31:09.970606] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:46.082 [2024-07-15 22:31:09.984243] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:46.082 [2024-07-15 22:31:09.984261] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:46.082 [2024-07-15 22:31:09.993010] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:46.082 [2024-07-15 22:31:09.993027] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:46.082 [2024-07-15 22:31:10.007588] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:46.082 [2024-07-15 22:31:10.007606] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:46.082 [2024-07-15 22:31:10.021141] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:46.082 [2024-07-15 22:31:10.021158] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:46.082 [2024-07-15 22:31:10.036429] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:46.082 [2024-07-15 22:31:10.036448] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:46.082 [2024-07-15 22:31:10.051505] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:46.082 [2024-07-15 22:31:10.051522] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:46.351 [2024-07-15 22:31:10.065435] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:46.351 [2024-07-15 22:31:10.065455] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:46.351 [2024-07-15 22:31:10.079355] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:46.351 [2024-07-15 22:31:10.079374] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:46.351 [2024-07-15 22:31:10.088445] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:46.351 [2024-07-15 22:31:10.088463] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:46.351 [2024-07-15 22:31:10.102662] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:46.351 [2024-07-15 22:31:10.102680] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:46.351 [2024-07-15 22:31:10.116204] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:46.351 [2024-07-15 22:31:10.116222] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:46.351 [2024-07-15 22:31:10.125051] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:46.351 [2024-07-15 22:31:10.125068] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:46.351 [2024-07-15 22:31:10.134254] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:46.351 [2024-07-15 22:31:10.134271] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:46.351 [2024-07-15 22:31:10.143471] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:46.351 [2024-07-15 22:31:10.143489] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:46.351 [2024-07-15 22:31:10.157999] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:46.351 [2024-07-15 22:31:10.158017] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:46.351 [2024-07-15 22:31:10.166762] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:46.351 [2024-07-15 22:31:10.166784] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:46.351 [2024-07-15 22:31:10.175602] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:46.351 [2024-07-15 22:31:10.175619] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:46.351 [2024-07-15 22:31:10.184267] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:46.351 [2024-07-15 22:31:10.184284] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:46.351 [2024-07-15 22:31:10.198642] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:46.351 [2024-07-15 22:31:10.198662] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:46.351 [2024-07-15 22:31:10.212634] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:46.351 [2024-07-15 22:31:10.212653] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:46.351 [2024-07-15 22:31:10.221735] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:46.351 [2024-07-15 22:31:10.221752] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:46.351 [2024-07-15 22:31:10.230567] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:46.351 [2024-07-15 22:31:10.230584] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:46.351 [2024-07-15 22:31:10.239168] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:46.351 [2024-07-15 22:31:10.239185] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:46.351 [2024-07-15 22:31:10.247731] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:46.351 [2024-07-15 22:31:10.247749] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:46.351 [2024-07-15 22:31:10.262256] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:46.351 [2024-07-15 22:31:10.262274] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:46.351 [2024-07-15 22:31:10.271267] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:46.351 [2024-07-15 22:31:10.271284] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:46.351 [2024-07-15 22:31:10.285475] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:46.351 [2024-07-15 22:31:10.285493] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:46.351 [2024-07-15 22:31:10.299101] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:46.351 [2024-07-15 22:31:10.299119] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:46.351 [2024-07-15 22:31:10.308272] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:46.351 [2024-07-15 22:31:10.308289] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:46.610 [2024-07-15 22:31:10.322233] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:46.610 [2024-07-15 22:31:10.322252] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:46.610 [2024-07-15 22:31:10.331377] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:46.610 [2024-07-15 22:31:10.331396] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:46.610 [2024-07-15 22:31:10.340360] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:46.610 [2024-07-15 22:31:10.340378] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:46.610 [2024-07-15 22:31:10.348838] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:46.610 [2024-07-15 22:31:10.348857] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:46.610 [2024-07-15 22:31:10.363302] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:46.610 [2024-07-15 22:31:10.363321] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:46.610 [2024-07-15 22:31:10.377030] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:46.610 [2024-07-15 22:31:10.377052] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:46.610 [2024-07-15 22:31:10.391162] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:46.610 [2024-07-15 22:31:10.391181] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:46.610 [2024-07-15 22:31:10.402134] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:46.610 [2024-07-15 22:31:10.402152] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:46.610 [2024-07-15 22:31:10.416632] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:46.610 [2024-07-15 22:31:10.416651] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:46.610 [2024-07-15 22:31:10.427243] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:46.610 [2024-07-15 22:31:10.427261] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:46.610 [2024-07-15 22:31:10.441441] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:46.610 [2024-07-15 22:31:10.441460] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:46.610 [2024-07-15 22:31:10.455426] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:46.610 [2024-07-15 22:31:10.455444] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:46.610 [2024-07-15 22:31:10.466215] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:46.610 [2024-07-15 22:31:10.466238] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:46.610 [2024-07-15 22:31:10.475105] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:46.610 [2024-07-15 22:31:10.475122] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:46.610 [2024-07-15 22:31:10.483737] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:46.610 [2024-07-15 22:31:10.483754] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:46.610 [2024-07-15 22:31:10.498062] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:46.610 [2024-07-15 22:31:10.498080] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:46.610 [2024-07-15 22:31:10.511973] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:46.610 [2024-07-15 22:31:10.511991] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:46.610 [2024-07-15 22:31:10.525953] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:46.610 [2024-07-15 22:31:10.525971] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:46.611 [2024-07-15 22:31:10.540007] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:46.611 [2024-07-15 22:31:10.540025] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:46.611 [2024-07-15 22:31:10.554107] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:46.611 [2024-07-15 22:31:10.554125] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:46.611 [2024-07-15 22:31:10.565326] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:46.611 [2024-07-15 22:31:10.565344] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:46.611 [2024-07-15 22:31:10.579743] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:46.611 [2024-07-15 22:31:10.579762] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:46.869 [2024-07-15 22:31:10.593435] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:46.869 [2024-07-15 22:31:10.593454] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:46.869 [2024-07-15 22:31:10.607058] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:46.869 [2024-07-15 22:31:10.607076] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:46.869 [2024-07-15 22:31:10.615962] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:46.869 [2024-07-15 22:31:10.615984] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:46.869 [2024-07-15 22:31:10.630567] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:46.869 [2024-07-15 22:31:10.630586] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:46.869 [2024-07-15 22:31:10.641531] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:46.869 [2024-07-15 22:31:10.641548] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:46.869 [2024-07-15 22:31:10.655667] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:46.869 [2024-07-15 22:31:10.655685] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:46.869 [2024-07-15 22:31:10.669478] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:46.869 [2024-07-15 22:31:10.669496] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:46.869 [2024-07-15 22:31:10.683926] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:46.869 [2024-07-15 22:31:10.683943] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:46.869 [2024-07-15 22:31:10.694983] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:46.869 [2024-07-15 22:31:10.695001] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:46.869 [2024-07-15 22:31:10.709085] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:46.869 [2024-07-15 22:31:10.709103] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:46.869 [2024-07-15 22:31:10.722759] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:46.869 [2024-07-15 22:31:10.722778] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:46.869 [2024-07-15 22:31:10.736368] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:46.869 [2024-07-15 22:31:10.736387] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:46.869 [2024-07-15 22:31:10.745488] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:46.869 [2024-07-15 22:31:10.745506] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:46.869 [2024-07-15 22:31:10.759824] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:46.869 [2024-07-15 22:31:10.759841] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:46.869 [2024-07-15 22:31:10.773630] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:46.869 [2024-07-15 22:31:10.773647] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:46.869 [2024-07-15 22:31:10.784560] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:46.869 [2024-07-15 22:31:10.784577] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:46.869 [2024-07-15 22:31:10.793409] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:46.869 [2024-07-15 22:31:10.793426] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:46.869 [2024-07-15 22:31:10.807969] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:46.869 [2024-07-15 22:31:10.807986] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:46.869 [2024-07-15 22:31:10.822024] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:46.869 [2024-07-15 22:31:10.822041] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:46.869 [2024-07-15 22:31:10.830961] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:46.869 [2024-07-15 22:31:10.830978] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:46.869 [2024-07-15 22:31:10.840066] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:46.869 [2024-07-15 22:31:10.840083] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.128 [2024-07-15 22:31:10.848706] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.128 [2024-07-15 22:31:10.848724] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.128 [2024-07-15 22:31:10.863158] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.128 [2024-07-15 22:31:10.863175] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.128 [2024-07-15 22:31:10.876218] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.128 [2024-07-15 22:31:10.876240] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.128 [2024-07-15 22:31:10.885335] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.128 [2024-07-15 22:31:10.885353] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.128 [2024-07-15 22:31:10.899885] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.128 [2024-07-15 22:31:10.899903] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.128 [2024-07-15 22:31:10.913497] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.128 [2024-07-15 22:31:10.913515] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.128 [2024-07-15 22:31:10.927308] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.128 [2024-07-15 22:31:10.927326] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.128 [2024-07-15 22:31:10.941148] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.128 [2024-07-15 22:31:10.941165] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.128 [2024-07-15 22:31:10.955648] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.128 [2024-07-15 22:31:10.955666] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.128 [2024-07-15 22:31:10.971376] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.128 [2024-07-15 22:31:10.971394] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.128 [2024-07-15 22:31:10.980394] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.128 [2024-07-15 22:31:10.980411] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.128 [2024-07-15 22:31:10.994862] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.128 [2024-07-15 22:31:10.994879] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.128 [2024-07-15 22:31:11.008534] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.128 [2024-07-15 22:31:11.008552] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.129 [2024-07-15 22:31:11.022296] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.129 [2024-07-15 22:31:11.022314] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.129 [2024-07-15 22:31:11.035898] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:47.129 [2024-07-15 22:31:11.035915] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:47.129 [2024-07-15 22:31:11.050269] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:50.233 [2024-07-15 22:31:14.064518] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:50.233 [2024-07-15 22:31:14.064536] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:50.233 [2024-07-15 22:31:14.073395] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:50.233 [2024-07-15 22:31:14.073412] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:50.233 [2024-07-15 22:31:14.081953] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:50.233 [2024-07-15 22:31:14.081974] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:50.233 [2024-07-15 22:31:14.091380] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:50.233 [2024-07-15 22:31:14.091398] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:50.233 [2024-07-15 22:31:14.100671] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:50.233 [2024-07-15 22:31:14.100688] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:50.233 [2024-07-15 22:31:14.114917] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:50.233 [2024-07-15 22:31:14.114934] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:50.233 [2024-07-15 22:31:14.128347] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:50.233 [2024-07-15 22:31:14.128364] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:50.233 [2024-07-15 22:31:14.142392] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:50.233 [2024-07-15 22:31:14.142410] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:50.233 [2024-07-15 22:31:14.156085] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:50.233 [2024-07-15 22:31:14.156102] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:50.233 [2024-07-15 22:31:14.165192] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:50.233 [2024-07-15 22:31:14.165209] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:50.233 [2024-07-15 22:31:14.179396] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:50.233 [2024-07-15 22:31:14.179414] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:50.233 [2024-07-15 22:31:14.193646] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:50.233 [2024-07-15 22:31:14.193664] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:50.492 [2024-07-15 22:31:14.204851] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:50.492 [2024-07-15 22:31:14.204870] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:50.492 00:14:50.492 Latency(us) 00:14:50.492 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:50.492 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:14:50.492 Nvme1n1 : 5.01 16704.56 130.50 0.00 0.00 7654.53 3063.10 19831.76 00:14:50.492 
=================================================================================================================== 00:14:50.492 Total : 16704.56 130.50 0.00 0.00 7654.53 3063.10 19831.76 00:14:50.492 [2024-07-15 22:31:14.215790] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:50.492 [2024-07-15 22:31:14.215804] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:50.492 [2024-07-15 22:31:14.227823] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:50.492 [2024-07-15 22:31:14.227834] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:50.492 [2024-07-15 22:31:14.239857] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:50.492 [2024-07-15 22:31:14.239876] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:50.492 [2024-07-15 22:31:14.251884] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:50.492 [2024-07-15 22:31:14.251898] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:50.492 [2024-07-15 22:31:14.263916] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:50.492 [2024-07-15 22:31:14.263931] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:50.492 [2024-07-15 22:31:14.275946] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:50.492 [2024-07-15 22:31:14.275968] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:50.492 [2024-07-15 22:31:14.287988] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:50.492 [2024-07-15 22:31:14.288006] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:50.492 [2024-07-15 22:31:14.308040] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:50.492 [2024-07-15 22:31:14.308059] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:50.492 [2024-07-15 22:31:14.320065] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:50.492 [2024-07-15 22:31:14.320073] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:50.492 [2024-07-15 22:31:14.332101] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:50.492 [2024-07-15 22:31:14.332112] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:50.492 [2024-07-15 22:31:14.344135] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:50.492 [2024-07-15 22:31:14.344148] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:50.493 [2024-07-15 22:31:14.356159] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:50.493 [2024-07-15 22:31:14.356168] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:50.493 [2024-07-15 22:31:14.368195] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:50.493 [2024-07-15 22:31:14.368206] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:50.493 [2024-07-15 22:31:14.380230] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:50.493 [2024-07-15 22:31:14.380241] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:50.493 [2024-07-15 22:31:14.392265] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:50.493 [2024-07-15 22:31:14.392274] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:50.493 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (4170576) - No such process 00:14:50.493 22:31:14 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 4170576 00:14:50.493 22:31:14 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:50.493 22:31:14 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:50.493 22:31:14 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:50.493 22:31:14 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:50.493 22:31:14 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:14:50.493 22:31:14 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:50.493 22:31:14 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:50.493 delay0 00:14:50.493 22:31:14 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:50.493 22:31:14 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:14:50.493 22:31:14 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:50.493 22:31:14 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:50.493 22:31:14 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:50.493 22:31:14 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:14:50.493 EAL: No free 2048 kB hugepages reported on node 1 00:14:50.752 [2024-07-15 22:31:14.522632] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:14:57.322 Initializing NVMe Controllers 00:14:57.322 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:57.322 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:14:57.322 Initialization complete. Launching workers. 
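For reference, the zcopy abort pass running here condenses to a handful of commands as they appear above; a minimal sketch, assuming a target already serving nqn.2016-06.io.spdk:cnode1 on 10.0.0.2:4420 and the same workspace paths as this job:

  rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  # Swap the plain namespace for a delay bdev (the four values are avg/p99 read
  # and avg/p99 write latency in microseconds, i.e. 1s each) so the abort tool
  # can catch I/O still in flight.
  $rpc_py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
  $rpc_py bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
  # Queue 64 random R/W I/Os on one core and submit aborts against them for 5s.
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 \
    -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'

The abort summary that follows ("success/unsuccess/failed") counts aborts that landed before their target I/O completed versus those that did not; the 1s delay bdev is what keeps enough I/O queued for aborts to race against.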
00:14:57.322 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 250 00:14:57.322 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 537, failed to submit 33 00:14:57.322 success 335, unsuccess 202, failed 0 00:14:57.322 22:31:20 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:14:57.322 22:31:20 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:14:57.322 22:31:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:57.322 22:31:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@117 -- # sync 00:14:57.322 22:31:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:57.322 22:31:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@120 -- # set +e 00:14:57.322 22:31:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:57.322 22:31:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:57.322 rmmod nvme_tcp 00:14:57.322 rmmod nvme_fabrics 00:14:57.322 rmmod nvme_keyring 00:14:57.322 22:31:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:57.322 22:31:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@124 -- # set -e 00:14:57.322 22:31:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@125 -- # return 0 00:14:57.322 22:31:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@489 -- # '[' -n 4168721 ']' 00:14:57.322 22:31:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@490 -- # killprocess 4168721 00:14:57.322 22:31:20 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@948 -- # '[' -z 4168721 ']' 00:14:57.322 22:31:20 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@952 -- # kill -0 4168721 00:14:57.322 22:31:20 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@953 -- # uname 00:14:57.322 22:31:20 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:57.322 22:31:20 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4168721 00:14:57.322 22:31:20 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:14:57.322 22:31:20 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:14:57.322 22:31:20 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4168721' 00:14:57.322 killing process with pid 4168721 00:14:57.322 22:31:20 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@967 -- # kill 4168721 00:14:57.322 22:31:20 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@972 -- # wait 4168721 00:14:57.322 22:31:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:57.322 22:31:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:57.322 22:31:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:57.322 22:31:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:57.322 22:31:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:57.322 22:31:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:57.322 22:31:20 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:57.322 22:31:20 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:59.229 22:31:23 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:59.229 00:14:59.229 real 0m31.239s 00:14:59.229 user 0m42.782s 00:14:59.229 sys 0m10.356s 00:14:59.229 22:31:23 nvmf_tcp.nvmf_zcopy -- 
common/autotest_common.sh@1124 -- # xtrace_disable 00:14:59.229 22:31:23 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:59.229 ************************************ 00:14:59.229 END TEST nvmf_zcopy 00:14:59.229 ************************************ 00:14:59.229 22:31:23 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:14:59.229 22:31:23 nvmf_tcp -- nvmf/nvmf.sh@54 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:14:59.229 22:31:23 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:59.229 22:31:23 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:59.229 22:31:23 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:59.229 ************************************ 00:14:59.229 START TEST nvmf_nmic 00:14:59.229 ************************************ 00:14:59.229 22:31:23 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:14:59.487 * Looking for test storage... 00:14:59.487 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:59.487 22:31:23 nvmf_tcp.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:59.487 22:31:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:14:59.487 22:31:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:59.487 22:31:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:59.487 22:31:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:59.488 22:31:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:59.488 22:31:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:59.488 22:31:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:59.488 22:31:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:59.488 22:31:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:59.488 22:31:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:59.488 22:31:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:59.488 22:31:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:14:59.488 22:31:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:14:59.488 22:31:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:59.488 22:31:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:59.488 22:31:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:59.488 22:31:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:59.488 22:31:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:59.488 22:31:23 nvmf_tcp.nvmf_nmic -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:59.488 22:31:23 nvmf_tcp.nvmf_nmic -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:59.488 22:31:23 nvmf_tcp.nvmf_nmic -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:59.488 22:31:23 nvmf_tcp.nvmf_nmic -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:59.488 22:31:23 nvmf_tcp.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:59.488 22:31:23 nvmf_tcp.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:59.488 22:31:23 nvmf_tcp.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:14:59.488 22:31:23 nvmf_tcp.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:59.488 22:31:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@47 -- # : 0 00:14:59.488 22:31:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:59.488 22:31:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:59.488 22:31:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:59.488 22:31:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:59.488 22:31:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:59.488 22:31:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:59.488 22:31:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:59.488 22:31:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:59.488 22:31:23 nvmf_tcp.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:59.488 22:31:23 nvmf_tcp.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:59.488 22:31:23 nvmf_tcp.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:14:59.488 22:31:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:59.488 22:31:23 nvmf_tcp.nvmf_nmic -- 
nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:59.488 22:31:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:59.488 22:31:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:59.488 22:31:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:59.488 22:31:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:59.488 22:31:23 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:59.488 22:31:23 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:59.488 22:31:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:59.488 22:31:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:59.488 22:31:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@285 -- # xtrace_disable 00:14:59.488 22:31:23 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:04.837 22:31:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:04.837 22:31:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@291 -- # pci_devs=() 00:15:04.837 22:31:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:04.837 22:31:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:04.837 22:31:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:04.837 22:31:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:04.837 22:31:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:04.837 22:31:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@295 -- # net_devs=() 00:15:04.837 22:31:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:04.837 22:31:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@296 -- # e810=() 00:15:04.837 22:31:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@296 -- # local -ga e810 00:15:04.837 22:31:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@297 -- # x722=() 00:15:04.837 22:31:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@297 -- # local -ga x722 00:15:04.837 22:31:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@298 -- # mlx=() 00:15:04.837 22:31:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@298 -- # local -ga mlx 00:15:04.837 22:31:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:04.837 22:31:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:04.837 22:31:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:04.837 22:31:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:04.837 22:31:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:04.837 22:31:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:04.837 22:31:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:04.837 22:31:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:04.837 22:31:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:04.837 22:31:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:04.837 22:31:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:04.837 22:31:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@320 -- # 
pci_devs+=("${e810[@]}") 00:15:04.837 22:31:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:04.837 22:31:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:04.837 22:31:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:04.837 22:31:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:04.837 22:31:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:04.837 22:31:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:04.837 22:31:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:15:04.837 Found 0000:86:00.0 (0x8086 - 0x159b) 00:15:04.837 22:31:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:04.837 22:31:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:04.837 22:31:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:04.837 22:31:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:04.837 22:31:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:04.838 22:31:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:04.838 22:31:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:15:04.838 Found 0000:86:00.1 (0x8086 - 0x159b) 00:15:04.838 22:31:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:04.838 22:31:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:04.838 22:31:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:04.838 22:31:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:04.838 22:31:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:04.838 22:31:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:04.838 22:31:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:04.838 22:31:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:04.838 22:31:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:04.838 22:31:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:04.838 22:31:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:04.838 22:31:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:04.838 22:31:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:04.838 22:31:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:04.838 22:31:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:04.838 22:31:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:15:04.838 Found net devices under 0000:86:00.0: cvl_0_0 00:15:04.838 22:31:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:04.838 22:31:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:04.838 22:31:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:04.838 22:31:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:04.838 22:31:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:04.838 22:31:28 nvmf_tcp.nvmf_nmic -- 
nvmf/common.sh@390 -- # [[ up == up ]] 00:15:04.838 22:31:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:04.838 22:31:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:04.838 22:31:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:15:04.838 Found net devices under 0000:86:00.1: cvl_0_1 00:15:04.838 22:31:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:04.838 22:31:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:04.838 22:31:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # is_hw=yes 00:15:04.838 22:31:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:04.838 22:31:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:04.838 22:31:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:04.838 22:31:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:04.838 22:31:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:04.838 22:31:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:04.838 22:31:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:04.838 22:31:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:04.838 22:31:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:04.838 22:31:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:04.838 22:31:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:04.838 22:31:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:04.838 22:31:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:04.838 22:31:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:04.838 22:31:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:04.838 22:31:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:04.838 22:31:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:04.838 22:31:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:04.838 22:31:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:04.838 22:31:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:05.097 22:31:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:05.097 22:31:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:05.097 22:31:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:05.097 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:05.097 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.155 ms 00:15:05.097 00:15:05.097 --- 10.0.0.2 ping statistics --- 00:15:05.097 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:05.097 rtt min/avg/max/mdev = 0.155/0.155/0.155/0.000 ms 00:15:05.097 22:31:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:05.097 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:05.097 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.210 ms 00:15:05.097 00:15:05.097 --- 10.0.0.1 ping statistics --- 00:15:05.097 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:05.097 rtt min/avg/max/mdev = 0.210/0.210/0.210/0.000 ms 00:15:05.097 22:31:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:05.097 22:31:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@422 -- # return 0 00:15:05.097 22:31:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:05.098 22:31:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:05.098 22:31:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:05.098 22:31:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:05.098 22:31:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:05.098 22:31:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:05.098 22:31:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:05.098 22:31:28 nvmf_tcp.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:15:05.098 22:31:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:05.098 22:31:28 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:05.098 22:31:28 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:05.098 22:31:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@481 -- # nvmfpid=4176150 00:15:05.098 22:31:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:05.098 22:31:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@482 -- # waitforlisten 4176150 00:15:05.098 22:31:28 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@829 -- # '[' -z 4176150 ']' 00:15:05.098 22:31:28 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:05.098 22:31:28 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:05.098 22:31:28 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:05.098 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:05.098 22:31:28 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:05.098 22:31:28 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:05.098 [2024-07-15 22:31:28.929025] Starting SPDK v24.09-pre git sha1 f8598a71f / DPDK 24.03.0 initialization... 00:15:05.098 [2024-07-15 22:31:28.929065] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:05.098 EAL: No free 2048 kB hugepages reported on node 1 00:15:05.098 [2024-07-15 22:31:28.989721] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:05.357 [2024-07-15 22:31:29.070588] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:05.357 [2024-07-15 22:31:29.070629] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
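The namespace plumbing checked by the two pings above is what lets a single phy host act as both initiator and target; a condensed sketch, assuming the two E810 ports have already been renamed to cvl_0_0 and cvl_0_1 as in this run:

  # Move one port into a private namespace; the target will live there.
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  # Initiator side (default namespace) vs target side (namespace).
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  # Sanity-check both directions, then start the target inside the namespace,
  # exactly as nvmfappstart does below.
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
  ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF

With one port captive in the namespace, 10.0.0.1 <-> 10.0.0.2 traffic crosses the real link between the two ports rather than loopback, which is the point of NET_TYPE=phy.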
00:15:05.357 [2024-07-15 22:31:29.070636] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:05.357 [2024-07-15 22:31:29.070642] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:05.357 [2024-07-15 22:31:29.070647] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:05.357 [2024-07-15 22:31:29.070905] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:05.357 [2024-07-15 22:31:29.070922] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:05.357 [2024-07-15 22:31:29.071013] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:15:05.357 [2024-07-15 22:31:29.071014] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:05.925 22:31:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:05.925 22:31:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@862 -- # return 0 00:15:05.925 22:31:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:05.925 22:31:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:05.925 22:31:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:05.925 22:31:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:05.925 22:31:29 nvmf_tcp.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:05.925 22:31:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:05.925 22:31:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:05.925 [2024-07-15 22:31:29.777085] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:05.925 22:31:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:05.925 22:31:29 nvmf_tcp.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:05.925 22:31:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:05.925 22:31:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:05.925 Malloc0 00:15:05.925 22:31:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:05.925 22:31:29 nvmf_tcp.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:15:05.925 22:31:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:05.925 22:31:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:05.925 22:31:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:05.925 22:31:29 nvmf_tcp.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:05.925 22:31:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:05.925 22:31:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:05.925 22:31:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:05.925 22:31:29 nvmf_tcp.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:05.925 22:31:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:05.925 22:31:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:05.925 [2024-07-15 22:31:29.828968] tcp.c: 981:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:05.925 22:31:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:05.925 22:31:29 nvmf_tcp.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:15:05.925 test case1: single bdev can't be used in multiple subsystems 00:15:05.925 22:31:29 nvmf_tcp.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:15:05.925 22:31:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:05.925 22:31:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:05.925 22:31:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:05.925 22:31:29 nvmf_tcp.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:15:05.925 22:31:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:05.925 22:31:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:05.925 22:31:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:05.925 22:31:29 nvmf_tcp.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:15:05.925 22:31:29 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:15:05.925 22:31:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:05.925 22:31:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:05.925 [2024-07-15 22:31:29.852890] bdev.c:8078:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:15:05.925 [2024-07-15 22:31:29.852909] subsystem.c:2087:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:15:05.925 [2024-07-15 22:31:29.852917] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:05.925 request: 00:15:05.925 { 00:15:05.925 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:15:05.925 "namespace": { 00:15:05.925 "bdev_name": "Malloc0", 00:15:05.925 "no_auto_visible": false 00:15:05.925 }, 00:15:05.925 "method": "nvmf_subsystem_add_ns", 00:15:05.925 "req_id": 1 00:15:05.925 } 00:15:05.925 Got JSON-RPC error response 00:15:05.925 response: 00:15:05.925 { 00:15:05.925 "code": -32602, 00:15:05.925 "message": "Invalid parameters" 00:15:05.925 } 00:15:05.925 22:31:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:15:05.925 22:31:29 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:15:05.925 22:31:29 nvmf_tcp.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:15:05.925 22:31:29 nvmf_tcp.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:15:05.925 Adding namespace failed - expected result. 
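The failing request in test case1 above is an ordinary claim conflict: Malloc0 is already claimed exclusive_write by the NVMe-oF target module on behalf of cnode1, so the second nvmf_subsystem_add_ns is rejected with JSON-RPC error -32602. A minimal sketch of the same exchange driven by hand through scripts/rpc.py, using the commands as logged in this test:

  rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $rpc_py bdev_malloc_create 64 512 -b Malloc0
  $rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0    # first claim succeeds
  $rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
  $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 \
    || echo 'expected failure: bdev Malloc0 already claimed (error -32602)'

Test case2 that follows is the complementary check: one bdev may not back two subsystems, but one subsystem may listen on multiple ports (4420 and 4421) and be reached over multiple paths.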
00:15:05.925 22:31:29 nvmf_tcp.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:15:05.925 test case2: host connect to nvmf target in multiple paths 00:15:05.925 22:31:29 nvmf_tcp.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:15:05.925 22:31:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:05.925 22:31:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:05.925 [2024-07-15 22:31:29.865024] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:15:05.925 22:31:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:05.925 22:31:29 nvmf_tcp.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:07.303 22:31:30 nvmf_tcp.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:15:08.238 22:31:32 nvmf_tcp.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:15:08.238 22:31:32 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1198 -- # local i=0 00:15:08.238 22:31:32 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:15:08.238 22:31:32 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:15:08.238 22:31:32 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1205 -- # sleep 2 00:15:10.134 22:31:34 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:15:10.134 22:31:34 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:15:10.134 22:31:34 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:15:10.134 22:31:34 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:15:10.134 22:31:34 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:15:10.134 22:31:34 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1208 -- # return 0 00:15:10.134 22:31:34 nvmf_tcp.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:15:10.134 [global] 00:15:10.134 thread=1 00:15:10.134 invalidate=1 00:15:10.134 rw=write 00:15:10.134 time_based=1 00:15:10.134 runtime=1 00:15:10.134 ioengine=libaio 00:15:10.134 direct=1 00:15:10.134 bs=4096 00:15:10.134 iodepth=1 00:15:10.134 norandommap=0 00:15:10.134 numjobs=1 00:15:10.134 00:15:10.390 verify_dump=1 00:15:10.390 verify_backlog=512 00:15:10.390 verify_state_save=0 00:15:10.390 do_verify=1 00:15:10.390 verify=crc32c-intel 00:15:10.390 [job0] 00:15:10.390 filename=/dev/nvme0n1 00:15:10.390 Could not set queue depth (nvme0n1) 00:15:10.648 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:10.648 fio-3.35 00:15:10.648 Starting 1 thread 00:15:11.584 00:15:11.584 job0: (groupid=0, jobs=1): err= 0: pid=4177125: Mon Jul 15 22:31:35 2024 00:15:11.584 read: IOPS=1658, BW=6633KiB/s (6793kB/s)(6640KiB/1001msec) 00:15:11.584 slat (nsec): min=7022, max=36702, avg=7860.74, stdev=1566.14 
00:15:11.584 clat (usec): min=287, max=444, avg=335.76, stdev=22.09 00:15:11.584 lat (usec): min=294, max=451, avg=343.62, stdev=22.08 00:15:11.584 clat percentiles (usec): 00:15:11.584 | 1.00th=[ 297], 5.00th=[ 310], 10.00th=[ 314], 20.00th=[ 318], 00:15:11.584 | 30.00th=[ 318], 40.00th=[ 322], 50.00th=[ 326], 60.00th=[ 347], 00:15:11.584 | 70.00th=[ 355], 80.00th=[ 359], 90.00th=[ 363], 95.00th=[ 367], 00:15:11.584 | 99.00th=[ 400], 99.50th=[ 408], 99.90th=[ 433], 99.95th=[ 445], 00:15:11.584 | 99.99th=[ 445] 00:15:11.584 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:15:11.584 slat (nsec): min=10111, max=40261, avg=11363.46, stdev=1808.33 00:15:11.584 clat (usec): min=166, max=385, avg=193.06, stdev= 8.38 00:15:11.584 lat (usec): min=181, max=424, avg=204.42, stdev= 8.68 00:15:11.584 clat percentiles (usec): 00:15:11.584 | 1.00th=[ 180], 5.00th=[ 184], 10.00th=[ 186], 20.00th=[ 188], 00:15:11.584 | 30.00th=[ 190], 40.00th=[ 192], 50.00th=[ 192], 60.00th=[ 194], 00:15:11.584 | 70.00th=[ 196], 80.00th=[ 198], 90.00th=[ 200], 95.00th=[ 204], 00:15:11.584 | 99.00th=[ 227], 99.50th=[ 233], 99.90th=[ 241], 99.95th=[ 253], 00:15:11.584 | 99.99th=[ 388] 00:15:11.584 bw ( KiB/s): min= 8192, max= 8192, per=100.00%, avg=8192.00, stdev= 0.00, samples=1 00:15:11.584 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:15:11.584 lat (usec) : 250=55.18%, 500=44.82% 00:15:11.584 cpu : usr=4.00%, sys=4.90%, ctx=3708, majf=0, minf=2 00:15:11.584 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:11.584 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:11.584 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:11.584 issued rwts: total=1660,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:11.584 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:11.584 00:15:11.584 Run status group 0 (all jobs): 00:15:11.584 READ: bw=6633KiB/s (6793kB/s), 6633KiB/s-6633KiB/s (6793kB/s-6793kB/s), io=6640KiB (6799kB), run=1001-1001msec 00:15:11.584 WRITE: bw=8184KiB/s (8380kB/s), 8184KiB/s-8184KiB/s (8380kB/s-8380kB/s), io=8192KiB (8389kB), run=1001-1001msec 00:15:11.584 00:15:11.584 Disk stats (read/write): 00:15:11.584 nvme0n1: ios=1586/1754, merge=0/0, ticks=529/323, in_queue=852, util=91.58% 00:15:11.584 22:31:35 nvmf_tcp.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:11.843 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:15:11.843 22:31:35 nvmf_tcp.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:11.843 22:31:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1219 -- # local i=0 00:15:11.843 22:31:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:15:11.843 22:31:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:11.843 22:31:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:15:11.843 22:31:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:11.843 22:31:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1231 -- # return 0 00:15:11.843 22:31:35 nvmf_tcp.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:15:11.843 22:31:35 nvmf_tcp.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:15:11.843 22:31:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:11.843 22:31:35 nvmf_tcp.nvmf_nmic -- 
nvmf/common.sh@117 -- # sync 00:15:11.843 22:31:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:11.843 22:31:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@120 -- # set +e 00:15:11.843 22:31:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:11.843 22:31:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:11.843 rmmod nvme_tcp 00:15:11.843 rmmod nvme_fabrics 00:15:11.843 rmmod nvme_keyring 00:15:11.843 22:31:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:11.843 22:31:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@124 -- # set -e 00:15:11.843 22:31:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@125 -- # return 0 00:15:11.843 22:31:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@489 -- # '[' -n 4176150 ']' 00:15:11.843 22:31:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@490 -- # killprocess 4176150 00:15:11.843 22:31:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@948 -- # '[' -z 4176150 ']' 00:15:11.843 22:31:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@952 -- # kill -0 4176150 00:15:11.843 22:31:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@953 -- # uname 00:15:11.843 22:31:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:11.843 22:31:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4176150 00:15:11.843 22:31:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:11.843 22:31:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:11.843 22:31:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4176150' 00:15:11.843 killing process with pid 4176150 00:15:11.843 22:31:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@967 -- # kill 4176150 00:15:11.843 22:31:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@972 -- # wait 4176150 00:15:12.102 22:31:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:12.102 22:31:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:12.102 22:31:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:12.102 22:31:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:12.102 22:31:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:12.102 22:31:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:12.102 22:31:36 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:12.102 22:31:36 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:14.638 22:31:38 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:14.638 00:15:14.638 real 0m14.941s 00:15:14.638 user 0m34.874s 00:15:14.638 sys 0m4.989s 00:15:14.638 22:31:38 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:14.638 22:31:38 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:14.638 ************************************ 00:15:14.638 END TEST nvmf_nmic 00:15:14.638 ************************************ 00:15:14.638 22:31:38 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:15:14.638 22:31:38 nvmf_tcp -- nvmf/nvmf.sh@55 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:15:14.638 22:31:38 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:14.638 
22:31:38 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:14.638 22:31:38 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:14.638 ************************************ 00:15:14.638 START TEST nvmf_fio_target 00:15:14.638 ************************************ 00:15:14.638 22:31:38 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:15:14.638 * Looking for test storage... 00:15:14.638 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:14.638 22:31:38 nvmf_tcp.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:14.638 22:31:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:15:14.638 22:31:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:14.638 22:31:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:14.638 22:31:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:14.638 22:31:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:14.638 22:31:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:14.638 22:31:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:14.638 22:31:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:14.638 22:31:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:14.638 22:31:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:14.638 22:31:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:14.638 22:31:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:14.638 22:31:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:15:14.638 22:31:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:14.638 22:31:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:14.638 22:31:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:14.638 22:31:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:14.638 22:31:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:14.638 22:31:38 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:14.638 22:31:38 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:14.638 22:31:38 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:14.638 22:31:38 nvmf_tcp.nvmf_fio_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:14.638 22:31:38 nvmf_tcp.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:14.638 22:31:38 nvmf_tcp.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:14.638 22:31:38 nvmf_tcp.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:15:14.638 22:31:38 nvmf_tcp.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:14.638 22:31:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@47 -- # : 0 00:15:14.638 22:31:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:14.638 22:31:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:14.638 22:31:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:14.638 22:31:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:14.638 22:31:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:14.638 22:31:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:14.638 22:31:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:14.638 22:31:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:14.638 22:31:38 nvmf_tcp.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:14.638 22:31:38 nvmf_tcp.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:14.638 22:31:38 nvmf_tcp.nvmf_fio_target -- target/fio.sh@14 -- # 
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:14.638 22:31:38 nvmf_tcp.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:15:14.638 22:31:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:14.638 22:31:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:14.638 22:31:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:14.638 22:31:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:14.638 22:31:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:14.638 22:31:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:14.638 22:31:38 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:14.639 22:31:38 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:14.639 22:31:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:14.639 22:31:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:14.639 22:31:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@285 -- # xtrace_disable 00:15:14.639 22:31:38 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:15:19.931 22:31:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:19.931 22:31:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@291 -- # pci_devs=() 00:15:19.931 22:31:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:19.931 22:31:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:19.931 22:31:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:19.931 22:31:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:19.931 22:31:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:19.931 22:31:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@295 -- # net_devs=() 00:15:19.931 22:31:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:19.931 22:31:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@296 -- # e810=() 00:15:19.931 22:31:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@296 -- # local -ga e810 00:15:19.931 22:31:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@297 -- # x722=() 00:15:19.931 22:31:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@297 -- # local -ga x722 00:15:19.931 22:31:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@298 -- # mlx=() 00:15:19.931 22:31:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@298 -- # local -ga mlx 00:15:19.931 22:31:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:19.932 22:31:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:19.932 22:31:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:19.932 22:31:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:19.932 22:31:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:19.932 22:31:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:19.932 22:31:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:19.932 22:31:43 
nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:19.932 22:31:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:19.932 22:31:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:19.932 22:31:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:19.932 22:31:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:19.932 22:31:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:19.932 22:31:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:19.932 22:31:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:19.932 22:31:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:19.932 22:31:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:19.932 22:31:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:19.932 22:31:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:15:19.932 Found 0000:86:00.0 (0x8086 - 0x159b) 00:15:19.932 22:31:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:19.932 22:31:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:19.932 22:31:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:19.932 22:31:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:19.932 22:31:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:19.932 22:31:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:19.932 22:31:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:15:19.932 Found 0000:86:00.1 (0x8086 - 0x159b) 00:15:19.932 22:31:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:19.932 22:31:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:19.932 22:31:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:19.932 22:31:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:19.932 22:31:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:19.932 22:31:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:19.932 22:31:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:19.932 22:31:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:19.932 22:31:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:19.932 22:31:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:19.932 22:31:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:19.932 22:31:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:19.932 22:31:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:19.932 22:31:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:19.932 22:31:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:19.932 22:31:43 
nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:15:19.932 Found net devices under 0000:86:00.0: cvl_0_0 00:15:19.932 22:31:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:19.932 22:31:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:19.932 22:31:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:19.932 22:31:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:19.932 22:31:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:19.932 22:31:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:19.932 22:31:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:19.932 22:31:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:19.932 22:31:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:15:19.932 Found net devices under 0000:86:00.1: cvl_0_1 00:15:19.932 22:31:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:19.932 22:31:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:19.932 22:31:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # is_hw=yes 00:15:19.932 22:31:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:19.932 22:31:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:19.932 22:31:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:19.932 22:31:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:19.932 22:31:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:19.932 22:31:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:19.932 22:31:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:19.932 22:31:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:19.932 22:31:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:19.932 22:31:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:19.932 22:31:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:19.932 22:31:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:19.932 22:31:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:19.932 22:31:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:19.932 22:31:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:19.932 22:31:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:19.932 22:31:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:19.932 22:31:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:19.932 22:31:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:19.932 22:31:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link 
set cvl_0_0 up 00:15:19.932 22:31:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:19.932 22:31:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:19.932 22:31:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:19.932 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:19.932 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.193 ms 00:15:19.932 00:15:19.932 --- 10.0.0.2 ping statistics --- 00:15:19.932 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:19.932 rtt min/avg/max/mdev = 0.193/0.193/0.193/0.000 ms 00:15:19.932 22:31:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:19.932 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:19.932 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.124 ms 00:15:19.932 00:15:19.932 --- 10.0.0.1 ping statistics --- 00:15:19.932 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:19.932 rtt min/avg/max/mdev = 0.124/0.124/0.124/0.000 ms 00:15:19.932 22:31:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:19.932 22:31:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@422 -- # return 0 00:15:19.932 22:31:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:19.932 22:31:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:19.932 22:31:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:19.932 22:31:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:19.932 22:31:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:19.932 22:31:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:19.932 22:31:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:19.932 22:31:43 nvmf_tcp.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:15:19.932 22:31:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:19.932 22:31:43 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:19.932 22:31:43 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:15:19.932 22:31:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@481 -- # nvmfpid=4180753 00:15:19.932 22:31:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@482 -- # waitforlisten 4180753 00:15:19.932 22:31:43 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@829 -- # '[' -z 4180753 ']' 00:15:19.932 22:31:43 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:19.932 22:31:43 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:19.932 22:31:43 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:19.932 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
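The xtrace above shows nvmftestinit building the point-to-point NVMe/TCP test network: one e810 port is moved into a private network namespace to host the target while the second port stays in the root namespace as the initiator, and a single ping in each direction proves connectivity before the target application starts. The following is a minimal standalone sketch of that same sequence; the cvl_0_0/cvl_0_1 interface names, the cvl_0_0_ns_spdk namespace, and the 10.0.0.0/24 addressing are taken from this particular run and will differ on other hosts.

    # Sketch of the network setup performed by nvmftestinit above.
    # Names and addresses are assumptions carried over from this log.
    TARGET_IF=cvl_0_0        # port handed to the SPDK target
    INITIATOR_IF=cvl_0_1     # port left in the root namespace
    NS=cvl_0_0_ns_spdk

    ip -4 addr flush "$TARGET_IF"
    ip -4 addr flush "$INITIATOR_IF"

    # Isolate the target-side port in its own namespace so that
    # initiator -> target traffic crosses the physical link.
    ip netns add "$NS"
    ip link set "$TARGET_IF" netns "$NS"

    ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"                    # initiator IP
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"   # target IP

    ip link set "$INITIATOR_IF" up
    ip netns exec "$NS" ip link set "$TARGET_IF" up
    ip netns exec "$NS" ip link set lo up

    # Admit NVMe/TCP traffic (port 4420) arriving on the root-namespace
    # port, then verify reachability in both directions.
    iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec "$NS" ping -c 1 10.0.0.1

Because the target port lives in cvl_0_0_ns_spdk, nvmf_tgt is launched under ip netns exec (visible in the next log line), so the 10.0.0.2:4420 listener created later in this run is reached from the root namespace over the physical link rather than over loopback.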
00:15:19.932 22:31:43 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:19.932 22:31:43 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:19.932 22:31:43 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:15:19.932 [2024-07-15 22:31:43.557352] Starting SPDK v24.09-pre git sha1 f8598a71f / DPDK 24.03.0 initialization... 00:15:19.932 [2024-07-15 22:31:43.557396] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:19.932 EAL: No free 2048 kB hugepages reported on node 1 00:15:19.932 [2024-07-15 22:31:43.615565] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:19.932 [2024-07-15 22:31:43.696789] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:19.932 [2024-07-15 22:31:43.696823] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:19.932 [2024-07-15 22:31:43.696830] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:19.932 [2024-07-15 22:31:43.696836] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:19.932 [2024-07-15 22:31:43.696842] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:19.932 [2024-07-15 22:31:43.696903] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:19.932 [2024-07-15 22:31:43.697001] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:19.932 [2024-07-15 22:31:43.697019] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:15:19.932 [2024-07-15 22:31:43.697020] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:20.500 22:31:44 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:20.500 22:31:44 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@862 -- # return 0 00:15:20.500 22:31:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:20.500 22:31:44 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:20.500 22:31:44 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:15:20.500 22:31:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:20.500 22:31:44 nvmf_tcp.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:15:20.758 [2024-07-15 22:31:44.568673] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:20.758 22:31:44 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:21.017 22:31:44 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:15:21.017 22:31:44 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:21.017 22:31:44 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:15:21.017 22:31:44 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:21.276 22:31:45 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:15:21.276 22:31:45 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:21.535 22:31:45 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:15:21.535 22:31:45 nvmf_tcp.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:15:21.794 22:31:45 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:22.053 22:31:45 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:15:22.053 22:31:45 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:22.053 22:31:45 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:15:22.053 22:31:45 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:22.312 22:31:46 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:15:22.312 22:31:46 nvmf_tcp.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:15:22.571 22:31:46 nvmf_tcp.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:15:22.571 22:31:46 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:15:22.571 22:31:46 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:22.830 22:31:46 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:15:22.830 22:31:46 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:23.151 22:31:46 nvmf_tcp.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:23.151 [2024-07-15 22:31:47.058855] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:23.151 22:31:47 nvmf_tcp.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:15:23.409 22:31:47 nvmf_tcp.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:15:23.666 22:31:47 nvmf_tcp.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:24.602 22:31:48 nvmf_tcp.nvmf_fio_target -- target/fio.sh@48 -- # 
waitforserial SPDKISFASTANDAWESOME 4 00:15:24.602 22:31:48 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1198 -- # local i=0 00:15:24.602 22:31:48 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:15:24.602 22:31:48 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1200 -- # [[ -n 4 ]] 00:15:24.602 22:31:48 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1201 -- # nvme_device_counter=4 00:15:24.602 22:31:48 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1205 -- # sleep 2 00:15:27.132 22:31:50 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:15:27.132 22:31:50 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:15:27.132 22:31:50 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:15:27.132 22:31:50 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # nvme_devices=4 00:15:27.132 22:31:50 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:15:27.132 22:31:50 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1208 -- # return 0 00:15:27.132 22:31:50 nvmf_tcp.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:15:27.132 [global] 00:15:27.132 thread=1 00:15:27.132 invalidate=1 00:15:27.132 rw=write 00:15:27.132 time_based=1 00:15:27.132 runtime=1 00:15:27.132 ioengine=libaio 00:15:27.132 direct=1 00:15:27.132 bs=4096 00:15:27.132 iodepth=1 00:15:27.132 norandommap=0 00:15:27.132 numjobs=1 00:15:27.132 00:15:27.132 verify_dump=1 00:15:27.132 verify_backlog=512 00:15:27.132 verify_state_save=0 00:15:27.132 do_verify=1 00:15:27.132 verify=crc32c-intel 00:15:27.132 [job0] 00:15:27.132 filename=/dev/nvme0n1 00:15:27.132 [job1] 00:15:27.132 filename=/dev/nvme0n2 00:15:27.132 [job2] 00:15:27.132 filename=/dev/nvme0n3 00:15:27.132 [job3] 00:15:27.132 filename=/dev/nvme0n4 00:15:27.132 Could not set queue depth (nvme0n1) 00:15:27.132 Could not set queue depth (nvme0n2) 00:15:27.132 Could not set queue depth (nvme0n3) 00:15:27.132 Could not set queue depth (nvme0n4) 00:15:27.132 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:27.132 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:27.132 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:27.132 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:27.132 fio-3.35 00:15:27.132 Starting 4 threads 00:15:28.533 00:15:28.533 job0: (groupid=0, jobs=1): err= 0: pid=4182107: Mon Jul 15 22:31:52 2024 00:15:28.533 read: IOPS=513, BW=2055KiB/s (2104kB/s)(2104KiB/1024msec) 00:15:28.533 slat (nsec): min=7105, max=24310, avg=8609.95, stdev=2586.95 00:15:28.533 clat (usec): min=345, max=41457, avg=1475.86, stdev=6542.14 00:15:28.533 lat (usec): min=352, max=41468, avg=1484.47, stdev=6544.28 00:15:28.533 clat percentiles (usec): 00:15:28.533 | 1.00th=[ 351], 5.00th=[ 355], 10.00th=[ 359], 20.00th=[ 367], 00:15:28.533 | 30.00th=[ 371], 40.00th=[ 379], 50.00th=[ 383], 60.00th=[ 392], 00:15:28.533 | 70.00th=[ 400], 80.00th=[ 420], 90.00th=[ 465], 95.00th=[ 515], 00:15:28.533 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41681], 99.95th=[41681], 00:15:28.533 | 99.99th=[41681] 
00:15:28.533 write: IOPS=1000, BW=4000KiB/s (4096kB/s)(4096KiB/1024msec); 0 zone resets 00:15:28.533 slat (nsec): min=10319, max=46270, avg=13447.10, stdev=2529.39 00:15:28.533 clat (usec): min=153, max=500, avg=219.29, stdev=32.71 00:15:28.533 lat (usec): min=166, max=512, avg=232.74, stdev=32.95 00:15:28.533 clat percentiles (usec): 00:15:28.533 | 1.00th=[ 163], 5.00th=[ 176], 10.00th=[ 184], 20.00th=[ 196], 00:15:28.533 | 30.00th=[ 204], 40.00th=[ 212], 50.00th=[ 219], 60.00th=[ 225], 00:15:28.533 | 70.00th=[ 231], 80.00th=[ 239], 90.00th=[ 251], 95.00th=[ 265], 00:15:28.533 | 99.00th=[ 310], 99.50th=[ 416], 99.90th=[ 498], 99.95th=[ 502], 00:15:28.533 | 99.99th=[ 502] 00:15:28.533 bw ( KiB/s): min= 8192, max= 8192, per=68.93%, avg=8192.00, stdev= 0.00, samples=1 00:15:28.533 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:15:28.533 lat (usec) : 250=59.23%, 500=38.45%, 750=1.42% 00:15:28.533 lat (msec) : 50=0.90% 00:15:28.533 cpu : usr=1.08%, sys=3.03%, ctx=1550, majf=0, minf=2 00:15:28.533 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:28.533 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:28.533 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:28.533 issued rwts: total=526,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:28.533 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:28.533 job1: (groupid=0, jobs=1): err= 0: pid=4182113: Mon Jul 15 22:31:52 2024 00:15:28.533 read: IOPS=21, BW=85.5KiB/s (87.6kB/s)(88.0KiB/1029msec) 00:15:28.533 slat (nsec): min=9834, max=25323, avg=22461.09, stdev=2992.46 00:15:28.533 clat (usec): min=40623, max=41978, avg=41040.97, stdev=312.42 00:15:28.533 lat (usec): min=40632, max=42001, avg=41063.43, stdev=313.24 00:15:28.533 clat percentiles (usec): 00:15:28.533 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:15:28.533 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:15:28.533 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[42206], 00:15:28.533 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:15:28.533 | 99.99th=[42206] 00:15:28.533 write: IOPS=497, BW=1990KiB/s (2038kB/s)(2048KiB/1029msec); 0 zone resets 00:15:28.533 slat (nsec): min=10019, max=42256, avg=11863.50, stdev=2238.91 00:15:28.533 clat (usec): min=195, max=354, avg=230.33, stdev=17.49 00:15:28.533 lat (usec): min=206, max=389, avg=242.20, stdev=18.42 00:15:28.533 clat percentiles (usec): 00:15:28.533 | 1.00th=[ 202], 5.00th=[ 208], 10.00th=[ 212], 20.00th=[ 219], 00:15:28.533 | 30.00th=[ 221], 40.00th=[ 225], 50.00th=[ 227], 60.00th=[ 231], 00:15:28.534 | 70.00th=[ 235], 80.00th=[ 241], 90.00th=[ 251], 95.00th=[ 260], 00:15:28.534 | 99.00th=[ 285], 99.50th=[ 326], 99.90th=[ 355], 99.95th=[ 355], 00:15:28.534 | 99.99th=[ 355] 00:15:28.534 bw ( KiB/s): min= 4096, max= 4096, per=34.47%, avg=4096.00, stdev= 0.00, samples=1 00:15:28.534 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:15:28.534 lat (usec) : 250=85.96%, 500=9.93% 00:15:28.534 lat (msec) : 50=4.12% 00:15:28.534 cpu : usr=0.39%, sys=0.97%, ctx=534, majf=0, minf=1 00:15:28.534 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:28.534 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:28.534 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:28.534 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:28.534 latency 
: target=0, window=0, percentile=100.00%, depth=1 00:15:28.534 job2: (groupid=0, jobs=1): err= 0: pid=4182115: Mon Jul 15 22:31:52 2024 00:15:28.534 read: IOPS=409, BW=1636KiB/s (1676kB/s)(1692KiB/1034msec) 00:15:28.534 slat (nsec): min=6092, max=26206, avg=9466.32, stdev=2905.14 00:15:28.534 clat (usec): min=305, max=41505, avg=2129.69, stdev=8215.96 00:15:28.534 lat (usec): min=312, max=41516, avg=2139.15, stdev=8218.11 00:15:28.534 clat percentiles (usec): 00:15:28.534 | 1.00th=[ 310], 5.00th=[ 330], 10.00th=[ 359], 20.00th=[ 367], 00:15:28.534 | 30.00th=[ 371], 40.00th=[ 375], 50.00th=[ 379], 60.00th=[ 383], 00:15:28.534 | 70.00th=[ 388], 80.00th=[ 396], 90.00th=[ 465], 95.00th=[ 523], 00:15:28.534 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41681], 99.95th=[41681], 00:15:28.534 | 99.99th=[41681] 00:15:28.534 write: IOPS=495, BW=1981KiB/s (2028kB/s)(2048KiB/1034msec); 0 zone resets 00:15:28.534 slat (nsec): min=7104, max=99166, avg=12391.48, stdev=4768.13 00:15:28.534 clat (usec): min=164, max=391, avg=233.80, stdev=31.98 00:15:28.534 lat (usec): min=174, max=415, avg=246.19, stdev=33.04 00:15:28.534 clat percentiles (usec): 00:15:28.534 | 1.00th=[ 180], 5.00th=[ 192], 10.00th=[ 200], 20.00th=[ 210], 00:15:28.534 | 30.00th=[ 217], 40.00th=[ 223], 50.00th=[ 229], 60.00th=[ 235], 00:15:28.534 | 70.00th=[ 245], 80.00th=[ 253], 90.00th=[ 281], 95.00th=[ 293], 00:15:28.534 | 99.00th=[ 318], 99.50th=[ 375], 99.90th=[ 392], 99.95th=[ 392], 00:15:28.534 | 99.99th=[ 392] 00:15:28.534 bw ( KiB/s): min= 4096, max= 4096, per=34.47%, avg=4096.00, stdev= 0.00, samples=1 00:15:28.534 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:15:28.534 lat (usec) : 250=42.57%, 500=55.08%, 750=0.32% 00:15:28.534 lat (msec) : 10=0.11%, 50=1.93% 00:15:28.534 cpu : usr=0.87%, sys=0.77%, ctx=935, majf=0, minf=1 00:15:28.534 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:28.534 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:28.534 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:28.534 issued rwts: total=423,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:28.534 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:28.534 job3: (groupid=0, jobs=1): err= 0: pid=4182116: Mon Jul 15 22:31:52 2024 00:15:28.534 read: IOPS=510, BW=2041KiB/s (2090kB/s)(2108KiB/1033msec) 00:15:28.534 slat (nsec): min=6412, max=25119, avg=7710.78, stdev=2338.73 00:15:28.534 clat (usec): min=234, max=41008, avg=1445.70, stdev=6532.79 00:15:28.534 lat (usec): min=242, max=41029, avg=1453.42, stdev=6534.81 00:15:28.534 clat percentiles (usec): 00:15:28.534 | 1.00th=[ 245], 5.00th=[ 273], 10.00th=[ 297], 20.00th=[ 318], 00:15:28.534 | 30.00th=[ 338], 40.00th=[ 363], 50.00th=[ 371], 60.00th=[ 379], 00:15:28.534 | 70.00th=[ 388], 80.00th=[ 400], 90.00th=[ 478], 95.00th=[ 553], 00:15:28.534 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:15:28.534 | 99.99th=[41157] 00:15:28.534 write: IOPS=991, BW=3965KiB/s (4060kB/s)(4096KiB/1033msec); 0 zone resets 00:15:28.534 slat (nsec): min=7118, max=46800, avg=12430.08, stdev=3148.19 00:15:28.534 clat (usec): min=144, max=466, avg=244.42, stdev=49.76 00:15:28.534 lat (usec): min=152, max=479, avg=256.85, stdev=50.59 00:15:28.534 clat percentiles (usec): 00:15:28.534 | 1.00th=[ 151], 5.00th=[ 163], 10.00th=[ 190], 20.00th=[ 208], 00:15:28.534 | 30.00th=[ 219], 40.00th=[ 227], 50.00th=[ 237], 60.00th=[ 245], 00:15:28.534 | 70.00th=[ 265], 80.00th=[ 285], 90.00th=[ 
318], 95.00th=[ 338], 00:15:28.534 | 99.00th=[ 367], 99.50th=[ 392], 99.90th=[ 445], 99.95th=[ 465], 00:15:28.534 | 99.99th=[ 465] 00:15:28.534 bw ( KiB/s): min= 8192, max= 8192, per=68.93%, avg=8192.00, stdev= 0.00, samples=1 00:15:28.534 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:15:28.534 lat (usec) : 250=43.46%, 500=54.09%, 750=1.55% 00:15:28.534 lat (msec) : 50=0.90% 00:15:28.534 cpu : usr=0.97%, sys=1.45%, ctx=1552, majf=0, minf=1 00:15:28.534 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:28.534 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:28.534 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:28.534 issued rwts: total=527,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:28.534 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:28.534 00:15:28.534 Run status group 0 (all jobs): 00:15:28.534 READ: bw=5795KiB/s (5934kB/s), 85.5KiB/s-2055KiB/s (87.6kB/s-2104kB/s), io=5992KiB (6136kB), run=1024-1034msec 00:15:28.534 WRITE: bw=11.6MiB/s (12.2MB/s), 1981KiB/s-4000KiB/s (2028kB/s-4096kB/s), io=12.0MiB (12.6MB), run=1024-1034msec 00:15:28.534 00:15:28.534 Disk stats (read/write): 00:15:28.534 nvme0n1: ios=571/1024, merge=0/0, ticks=675/213, in_queue=888, util=90.98% 00:15:28.534 nvme0n2: ios=42/512, merge=0/0, ticks=721/110, in_queue=831, util=87.40% 00:15:28.534 nvme0n3: ios=443/512, merge=0/0, ticks=881/124, in_queue=1005, util=91.04% 00:15:28.534 nvme0n4: ios=522/1024, merge=0/0, ticks=553/239, in_queue=792, util=89.71% 00:15:28.534 22:31:52 nvmf_tcp.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:15:28.534 [global] 00:15:28.534 thread=1 00:15:28.534 invalidate=1 00:15:28.534 rw=randwrite 00:15:28.534 time_based=1 00:15:28.534 runtime=1 00:15:28.534 ioengine=libaio 00:15:28.534 direct=1 00:15:28.534 bs=4096 00:15:28.534 iodepth=1 00:15:28.534 norandommap=0 00:15:28.534 numjobs=1 00:15:28.534 00:15:28.534 verify_dump=1 00:15:28.534 verify_backlog=512 00:15:28.534 verify_state_save=0 00:15:28.534 do_verify=1 00:15:28.534 verify=crc32c-intel 00:15:28.534 [job0] 00:15:28.534 filename=/dev/nvme0n1 00:15:28.534 [job1] 00:15:28.534 filename=/dev/nvme0n2 00:15:28.534 [job2] 00:15:28.534 filename=/dev/nvme0n3 00:15:28.534 [job3] 00:15:28.534 filename=/dev/nvme0n4 00:15:28.534 Could not set queue depth (nvme0n1) 00:15:28.534 Could not set queue depth (nvme0n2) 00:15:28.534 Could not set queue depth (nvme0n3) 00:15:28.534 Could not set queue depth (nvme0n4) 00:15:28.535 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:28.535 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:28.535 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:28.535 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:28.535 fio-3.35 00:15:28.535 Starting 4 threads 00:15:29.914 00:15:29.914 job0: (groupid=0, jobs=1): err= 0: pid=4182487: Mon Jul 15 22:31:53 2024 00:15:29.914 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:15:29.914 slat (nsec): min=7281, max=38700, avg=8326.35, stdev=1685.79 00:15:29.914 clat (usec): min=263, max=41086, avg=358.63, stdev=1041.66 00:15:29.914 lat (usec): min=271, max=41094, avg=366.95, 
stdev=1041.67 00:15:29.914 clat percentiles (usec): 00:15:29.914 | 1.00th=[ 277], 5.00th=[ 285], 10.00th=[ 289], 20.00th=[ 297], 00:15:29.914 | 30.00th=[ 297], 40.00th=[ 302], 50.00th=[ 306], 60.00th=[ 310], 00:15:29.915 | 70.00th=[ 318], 80.00th=[ 371], 90.00th=[ 437], 95.00th=[ 461], 00:15:29.915 | 99.00th=[ 519], 99.50th=[ 562], 99.90th=[ 668], 99.95th=[41157], 00:15:29.915 | 99.99th=[41157] 00:15:29.915 write: IOPS=1932, BW=7728KiB/s (7914kB/s)(7736KiB/1001msec); 0 zone resets 00:15:29.915 slat (nsec): min=7283, max=50409, avg=11974.90, stdev=2217.11 00:15:29.915 clat (usec): min=165, max=452, avg=206.49, stdev=38.79 00:15:29.915 lat (usec): min=182, max=464, avg=218.47, stdev=39.62 00:15:29.915 clat percentiles (usec): 00:15:29.915 | 1.00th=[ 176], 5.00th=[ 178], 10.00th=[ 180], 20.00th=[ 184], 00:15:29.915 | 30.00th=[ 186], 40.00th=[ 188], 50.00th=[ 190], 60.00th=[ 194], 00:15:29.915 | 70.00th=[ 202], 80.00th=[ 227], 90.00th=[ 269], 95.00th=[ 289], 00:15:29.915 | 99.00th=[ 343], 99.50th=[ 359], 99.90th=[ 441], 99.95th=[ 453], 00:15:29.915 | 99.99th=[ 453] 00:15:29.915 bw ( KiB/s): min= 8192, max= 8192, per=60.61%, avg=8192.00, stdev= 0.00, samples=1 00:15:29.915 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:15:29.915 lat (usec) : 250=47.18%, 500=51.96%, 750=0.84% 00:15:29.915 lat (msec) : 50=0.03% 00:15:29.915 cpu : usr=3.50%, sys=5.00%, ctx=3472, majf=0, minf=2 00:15:29.915 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:29.915 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:29.915 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:29.915 issued rwts: total=1536,1934,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:29.915 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:29.915 job1: (groupid=0, jobs=1): err= 0: pid=4182488: Mon Jul 15 22:31:53 2024 00:15:29.915 read: IOPS=21, BW=86.1KiB/s (88.2kB/s)(88.0KiB/1022msec) 00:15:29.915 slat (nsec): min=9553, max=23261, avg=21710.82, stdev=2924.88 00:15:29.915 clat (usec): min=40872, max=41994, avg=41135.58, stdev=361.52 00:15:29.915 lat (usec): min=40890, max=42015, avg=41157.29, stdev=360.75 00:15:29.915 clat percentiles (usec): 00:15:29.915 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:15:29.915 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:15:29.915 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41681], 95.00th=[42206], 00:15:29.915 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:15:29.915 | 99.99th=[42206] 00:15:29.915 write: IOPS=500, BW=2004KiB/s (2052kB/s)(2048KiB/1022msec); 0 zone resets 00:15:29.915 slat (nsec): min=9233, max=37696, avg=10221.80, stdev=1639.35 00:15:29.915 clat (usec): min=183, max=329, avg=208.56, stdev=19.50 00:15:29.915 lat (usec): min=193, max=339, avg=218.78, stdev=19.72 00:15:29.915 clat percentiles (usec): 00:15:29.915 | 1.00th=[ 186], 5.00th=[ 190], 10.00th=[ 192], 20.00th=[ 196], 00:15:29.915 | 30.00th=[ 198], 40.00th=[ 202], 50.00th=[ 204], 60.00th=[ 208], 00:15:29.915 | 70.00th=[ 212], 80.00th=[ 219], 90.00th=[ 229], 95.00th=[ 247], 00:15:29.915 | 99.00th=[ 285], 99.50th=[ 289], 99.90th=[ 330], 99.95th=[ 330], 00:15:29.915 | 99.99th=[ 330] 00:15:29.915 bw ( KiB/s): min= 4096, max= 4096, per=30.31%, avg=4096.00, stdev= 0.00, samples=1 00:15:29.915 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:15:29.915 lat (usec) : 250=91.57%, 500=4.31% 00:15:29.915 lat (msec) : 50=4.12% 00:15:29.915 cpu : 
usr=0.39%, sys=0.39%, ctx=536, majf=0, minf=1 00:15:29.915 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:29.915 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:29.915 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:29.915 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:29.915 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:29.915 job2: (groupid=0, jobs=1): err= 0: pid=4182491: Mon Jul 15 22:31:53 2024 00:15:29.915 read: IOPS=405, BW=1620KiB/s (1659kB/s)(1664KiB/1027msec) 00:15:29.915 slat (nsec): min=7136, max=24843, avg=8625.14, stdev=2907.42 00:15:29.915 clat (usec): min=191, max=41551, avg=2122.43, stdev=8279.08 00:15:29.915 lat (usec): min=199, max=41563, avg=2131.05, stdev=8281.44 00:15:29.915 clat percentiles (usec): 00:15:29.915 | 1.00th=[ 233], 5.00th=[ 241], 10.00th=[ 249], 20.00th=[ 269], 00:15:29.915 | 30.00th=[ 310], 40.00th=[ 347], 50.00th=[ 367], 60.00th=[ 392], 00:15:29.915 | 70.00th=[ 433], 80.00th=[ 457], 90.00th=[ 498], 95.00th=[ 586], 00:15:29.915 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41681], 99.95th=[41681], 00:15:29.915 | 99.99th=[41681] 00:15:29.915 write: IOPS=498, BW=1994KiB/s (2042kB/s)(2048KiB/1027msec); 0 zone resets 00:15:29.915 slat (nsec): min=7779, max=47735, avg=12125.60, stdev=2584.90 00:15:29.915 clat (usec): min=189, max=444, avg=255.49, stdev=36.99 00:15:29.915 lat (usec): min=200, max=460, avg=267.61, stdev=36.86 00:15:29.915 clat percentiles (usec): 00:15:29.915 | 1.00th=[ 204], 5.00th=[ 210], 10.00th=[ 217], 20.00th=[ 225], 00:15:29.915 | 30.00th=[ 233], 40.00th=[ 241], 50.00th=[ 249], 60.00th=[ 260], 00:15:29.915 | 70.00th=[ 269], 80.00th=[ 281], 90.00th=[ 302], 95.00th=[ 330], 00:15:29.915 | 99.00th=[ 375], 99.50th=[ 392], 99.90th=[ 445], 99.95th=[ 445], 00:15:29.915 | 99.99th=[ 445] 00:15:29.915 bw ( KiB/s): min= 4096, max= 4096, per=30.31%, avg=4096.00, stdev= 0.00, samples=1 00:15:29.915 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:15:29.915 lat (usec) : 250=33.62%, 500=61.96%, 750=2.48% 00:15:29.915 lat (msec) : 50=1.94% 00:15:29.915 cpu : usr=0.88%, sys=1.36%, ctx=928, majf=0, minf=1 00:15:29.915 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:29.915 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:29.915 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:29.915 issued rwts: total=416,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:29.915 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:29.915 job3: (groupid=0, jobs=1): err= 0: pid=4182493: Mon Jul 15 22:31:53 2024 00:15:29.915 read: IOPS=20, BW=83.8KiB/s (85.8kB/s)(84.0KiB/1002msec) 00:15:29.915 slat (nsec): min=9045, max=22862, avg=21283.81, stdev=2978.40 00:15:29.915 clat (usec): min=40889, max=42048, avg=41109.83, stdev=362.07 00:15:29.915 lat (usec): min=40912, max=42066, avg=41131.12, stdev=360.22 00:15:29.915 clat percentiles (usec): 00:15:29.915 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:15:29.915 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:15:29.915 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41681], 95.00th=[42206], 00:15:29.915 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:15:29.915 | 99.99th=[42206] 00:15:29.915 write: IOPS=510, BW=2044KiB/s (2093kB/s)(2048KiB/1002msec); 0 zone resets 00:15:29.915 slat (nsec): min=8644, max=36523, avg=9818.25, 
stdev=1516.58 00:15:29.915 clat (usec): min=162, max=1190, avg=257.69, stdev=59.01 00:15:29.915 lat (usec): min=172, max=1200, avg=267.51, stdev=59.19 00:15:29.915 clat percentiles (usec): 00:15:29.915 | 1.00th=[ 176], 5.00th=[ 190], 10.00th=[ 204], 20.00th=[ 219], 00:15:29.915 | 30.00th=[ 235], 40.00th=[ 247], 50.00th=[ 258], 60.00th=[ 265], 00:15:29.915 | 70.00th=[ 277], 80.00th=[ 285], 90.00th=[ 302], 95.00th=[ 330], 00:15:29.915 | 99.00th=[ 388], 99.50th=[ 404], 99.90th=[ 1188], 99.95th=[ 1188], 00:15:29.915 | 99.99th=[ 1188] 00:15:29.915 bw ( KiB/s): min= 4096, max= 4096, per=30.31%, avg=4096.00, stdev= 0.00, samples=1 00:15:29.915 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:15:29.915 lat (usec) : 250=41.84%, 500=54.03% 00:15:29.915 lat (msec) : 2=0.19%, 50=3.94% 00:15:29.915 cpu : usr=0.20%, sys=0.60%, ctx=533, majf=0, minf=1 00:15:29.915 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:29.915 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:29.915 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:29.915 issued rwts: total=21,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:29.915 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:29.915 00:15:29.915 Run status group 0 (all jobs): 00:15:29.915 READ: bw=7770KiB/s (7957kB/s), 83.8KiB/s-6138KiB/s (85.8kB/s-6285kB/s), io=7980KiB (8172kB), run=1001-1027msec 00:15:29.915 WRITE: bw=13.2MiB/s (13.8MB/s), 1994KiB/s-7728KiB/s (2042kB/s-7914kB/s), io=13.6MiB (14.2MB), run=1001-1027msec 00:15:29.915 00:15:29.915 Disk stats (read/write): 00:15:29.915 nvme0n1: ios=1388/1536, merge=0/0, ticks=762/306, in_queue=1068, util=94.69% 00:15:29.915 nvme0n2: ios=56/512, merge=0/0, ticks=1708/102, in_queue=1810, util=96.15% 00:15:29.915 nvme0n3: ios=458/512, merge=0/0, ticks=726/124, in_queue=850, util=91.16% 00:15:29.915 nvme0n4: ios=73/512, merge=0/0, ticks=731/132, in_queue=863, util=91.31% 00:15:29.915 22:31:53 nvmf_tcp.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:15:29.915 [global] 00:15:29.915 thread=1 00:15:29.915 invalidate=1 00:15:29.915 rw=write 00:15:29.915 time_based=1 00:15:29.915 runtime=1 00:15:29.915 ioengine=libaio 00:15:29.915 direct=1 00:15:29.915 bs=4096 00:15:29.915 iodepth=128 00:15:29.915 norandommap=0 00:15:29.915 numjobs=1 00:15:29.915 00:15:29.915 verify_dump=1 00:15:29.915 verify_backlog=512 00:15:29.915 verify_state_save=0 00:15:29.915 do_verify=1 00:15:29.915 verify=crc32c-intel 00:15:29.915 [job0] 00:15:29.915 filename=/dev/nvme0n1 00:15:29.915 [job1] 00:15:29.915 filename=/dev/nvme0n2 00:15:29.915 [job2] 00:15:29.915 filename=/dev/nvme0n3 00:15:29.915 [job3] 00:15:29.915 filename=/dev/nvme0n4 00:15:29.915 Could not set queue depth (nvme0n1) 00:15:29.915 Could not set queue depth (nvme0n2) 00:15:29.915 Could not set queue depth (nvme0n3) 00:15:29.915 Could not set queue depth (nvme0n4) 00:15:30.174 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:30.174 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:30.174 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:30.174 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:30.174 fio-3.35 00:15:30.174 Starting 4 
threads 00:15:31.553 00:15:31.553 job0: (groupid=0, jobs=1): err= 0: pid=4182870: Mon Jul 15 22:31:55 2024 00:15:31.553 read: IOPS=5712, BW=22.3MiB/s (23.4MB/s)(22.5MiB/1008msec) 00:15:31.553 slat (nsec): min=1249, max=16355k, avg=93424.00, stdev=659083.62 00:15:31.553 clat (usec): min=1159, max=37164, avg=11420.74, stdev=4054.37 00:15:31.553 lat (usec): min=3686, max=37170, avg=11514.16, stdev=4087.91 00:15:31.553 clat percentiles (usec): 00:15:31.553 | 1.00th=[ 4113], 5.00th=[ 8029], 10.00th=[ 9241], 20.00th=[ 9634], 00:15:31.553 | 30.00th=[ 9896], 40.00th=[10028], 50.00th=[10159], 60.00th=[10421], 00:15:31.553 | 70.00th=[10814], 80.00th=[12911], 90.00th=[15795], 95.00th=[17433], 00:15:31.553 | 99.00th=[31589], 99.50th=[36963], 99.90th=[36963], 99.95th=[36963], 00:15:31.553 | 99.99th=[36963] 00:15:31.553 write: IOPS=6095, BW=23.8MiB/s (25.0MB/s)(24.0MiB/1008msec); 0 zone resets 00:15:31.553 slat (usec): min=2, max=23200, avg=71.19, stdev=429.37 00:15:31.553 clat (usec): min=1168, max=21918, avg=9564.13, stdev=2351.92 00:15:31.553 lat (usec): min=1178, max=37144, avg=9635.33, stdev=2400.33 00:15:31.553 clat percentiles (usec): 00:15:31.553 | 1.00th=[ 2769], 5.00th=[ 4424], 10.00th=[ 6194], 20.00th=[ 8094], 00:15:31.553 | 30.00th=[ 9765], 40.00th=[10028], 50.00th=[10290], 60.00th=[10421], 00:15:31.553 | 70.00th=[10421], 80.00th=[10552], 90.00th=[10814], 95.00th=[12780], 00:15:31.553 | 99.00th=[16057], 99.50th=[16188], 99.90th=[18744], 99.95th=[19006], 00:15:31.553 | 99.99th=[21890] 00:15:31.553 bw ( KiB/s): min=23664, max=25472, per=34.81%, avg=24568.00, stdev=1278.45, samples=2 00:15:31.553 iops : min= 5916, max= 6368, avg=6142.00, stdev=319.61, samples=2 00:15:31.553 lat (msec) : 2=0.05%, 4=2.39%, 10=37.13%, 20=59.28%, 50=1.15% 00:15:31.553 cpu : usr=5.06%, sys=4.57%, ctx=793, majf=0, minf=1 00:15:31.553 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:15:31.553 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:31.553 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:31.553 issued rwts: total=5758,6144,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:31.553 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:31.553 job1: (groupid=0, jobs=1): err= 0: pid=4182877: Mon Jul 15 22:31:55 2024 00:15:31.553 read: IOPS=2225, BW=8903KiB/s (9117kB/s)(8948KiB/1005msec) 00:15:31.553 slat (nsec): min=1496, max=25193k, avg=202852.69, stdev=1604226.28 00:15:31.553 clat (usec): min=3248, max=76095, avg=26407.86, stdev=15542.50 00:15:31.553 lat (usec): min=5260, max=76131, avg=26610.71, stdev=15668.00 00:15:31.553 clat percentiles (usec): 00:15:31.553 | 1.00th=[ 8455], 5.00th=[ 8717], 10.00th=[11076], 20.00th=[13829], 00:15:31.553 | 30.00th=[14222], 40.00th=[15664], 50.00th=[21890], 60.00th=[25035], 00:15:31.553 | 70.00th=[36963], 80.00th=[41681], 90.00th=[51119], 95.00th=[59507], 00:15:31.553 | 99.00th=[65799], 99.50th=[66847], 99.90th=[67634], 99.95th=[70779], 00:15:31.553 | 99.99th=[76022] 00:15:31.553 write: IOPS=2547, BW=9.95MiB/s (10.4MB/s)(10.0MiB/1005msec); 0 zone resets 00:15:31.553 slat (usec): min=2, max=20317, avg=201.00, stdev=1204.94 00:15:31.553 clat (usec): min=1033, max=88035, avg=26511.41, stdev=15963.63 00:15:31.553 lat (usec): min=1042, max=88047, avg=26712.41, stdev=16076.73 00:15:31.553 clat percentiles (usec): 00:15:31.553 | 1.00th=[ 2442], 5.00th=[ 6194], 10.00th=[12256], 20.00th=[13042], 00:15:31.553 | 30.00th=[15926], 40.00th=[21365], 50.00th=[21890], 60.00th=[24773], 00:15:31.553 | 
70.00th=[30802], 80.00th=[37487], 90.00th=[49546], 95.00th=[60556], 00:15:31.553 | 99.00th=[79168], 99.50th=[85459], 99.90th=[87557], 99.95th=[87557], 00:15:31.553 | 99.99th=[87557] 00:15:31.553 bw ( KiB/s): min= 8712, max=11768, per=14.51%, avg=10240.00, stdev=2160.92, samples=2 00:15:31.553 iops : min= 2178, max= 2942, avg=2560.00, stdev=540.23, samples=2 00:15:31.553 lat (msec) : 2=0.46%, 4=1.17%, 10=4.44%, 20=35.02%, 50=48.97% 00:15:31.553 lat (msec) : 100=9.94% 00:15:31.553 cpu : usr=2.39%, sys=3.19%, ctx=236, majf=0, minf=1 00:15:31.553 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:15:31.553 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:31.553 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:31.553 issued rwts: total=2237,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:31.553 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:31.553 job2: (groupid=0, jobs=1): err= 0: pid=4182895: Mon Jul 15 22:31:55 2024 00:15:31.553 read: IOPS=3548, BW=13.9MiB/s (14.5MB/s)(14.0MiB/1010msec) 00:15:31.553 slat (nsec): min=1282, max=14914k, avg=134780.29, stdev=957876.59 00:15:31.553 clat (usec): min=5691, max=54861, avg=15937.61, stdev=6000.44 00:15:31.553 lat (usec): min=5697, max=54868, avg=16072.39, stdev=6094.03 00:15:31.553 clat percentiles (usec): 00:15:31.553 | 1.00th=[ 7832], 5.00th=[10945], 10.00th=[11600], 20.00th=[12649], 00:15:31.553 | 30.00th=[13042], 40.00th=[13829], 50.00th=[14484], 60.00th=[15401], 00:15:31.553 | 70.00th=[16188], 80.00th=[17171], 90.00th=[22414], 95.00th=[27395], 00:15:31.553 | 99.00th=[42206], 99.50th=[49021], 99.90th=[54789], 99.95th=[54789], 00:15:31.553 | 99.99th=[54789] 00:15:31.554 write: IOPS=3839, BW=15.0MiB/s (15.7MB/s)(15.1MiB/1010msec); 0 zone resets 00:15:31.554 slat (usec): min=2, max=11108, avg=126.24, stdev=701.79 00:15:31.554 clat (usec): min=3293, max=54867, avg=18174.02, stdev=10699.11 00:15:31.554 lat (usec): min=3306, max=54883, avg=18300.26, stdev=10769.55 00:15:31.554 clat percentiles (usec): 00:15:31.554 | 1.00th=[ 5538], 5.00th=[ 7242], 10.00th=[ 8586], 20.00th=[10290], 00:15:31.554 | 30.00th=[10945], 40.00th=[12518], 50.00th=[15008], 60.00th=[18220], 00:15:31.554 | 70.00th=[21365], 80.00th=[22676], 90.00th=[32113], 95.00th=[46924], 00:15:31.554 | 99.00th=[52167], 99.50th=[53740], 99.90th=[54264], 99.95th=[54789], 00:15:31.554 | 99.99th=[54789] 00:15:31.554 bw ( KiB/s): min=13896, max=16112, per=21.26%, avg=15004.00, stdev=1566.95, samples=2 00:15:31.554 iops : min= 3474, max= 4028, avg=3751.00, stdev=391.74, samples=2 00:15:31.554 lat (msec) : 4=0.24%, 10=9.92%, 20=65.26%, 50=22.73%, 100=1.85% 00:15:31.554 cpu : usr=3.07%, sys=5.05%, ctx=328, majf=0, minf=1 00:15:31.554 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:15:31.554 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:31.554 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:31.554 issued rwts: total=3584,3878,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:31.554 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:31.554 job3: (groupid=0, jobs=1): err= 0: pid=4182901: Mon Jul 15 22:31:55 2024 00:15:31.554 read: IOPS=5104, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1003msec) 00:15:31.554 slat (nsec): min=1331, max=6812.4k, avg=101603.94, stdev=489721.28 00:15:31.554 clat (usec): min=7853, max=34377, avg=12729.21, stdev=2031.45 00:15:31.554 lat (usec): min=7862, max=34386, avg=12830.82, stdev=2033.07 
00:15:31.554 clat percentiles (usec): 00:15:31.554 | 1.00th=[ 9634], 5.00th=[10421], 10.00th=[10945], 20.00th=[11600], 00:15:31.554 | 30.00th=[11994], 40.00th=[12256], 50.00th=[12649], 60.00th=[13042], 00:15:31.554 | 70.00th=[13304], 80.00th=[13566], 90.00th=[13960], 95.00th=[14484], 00:15:31.554 | 99.00th=[21890], 99.50th=[27657], 99.90th=[31065], 99.95th=[31065], 00:15:31.554 | 99.99th=[34341] 00:15:31.554 write: IOPS=5222, BW=20.4MiB/s (21.4MB/s)(20.5MiB/1003msec); 0 zone resets 00:15:31.554 slat (usec): min=2, max=14646, avg=87.23, stdev=493.56 00:15:31.554 clat (usec): min=323, max=37177, avg=11857.06, stdev=3163.72 00:15:31.554 lat (usec): min=735, max=37191, avg=11944.30, stdev=3166.44 00:15:31.554 clat percentiles (usec): 00:15:31.554 | 1.00th=[ 3130], 5.00th=[ 6587], 10.00th=[ 9372], 20.00th=[10290], 00:15:31.554 | 30.00th=[11469], 40.00th=[11863], 50.00th=[12125], 60.00th=[12256], 00:15:31.554 | 70.00th=[12518], 80.00th=[12780], 90.00th=[13698], 95.00th=[15270], 00:15:31.554 | 99.00th=[25035], 99.50th=[31065], 99.90th=[31065], 99.95th=[31065], 00:15:31.554 | 99.99th=[36963] 00:15:31.554 bw ( KiB/s): min=20480, max=20512, per=29.04%, avg=20496.00, stdev=22.63, samples=2 00:15:31.554 iops : min= 5120, max= 5128, avg=5124.00, stdev= 5.66, samples=2 00:15:31.554 lat (usec) : 500=0.01%, 750=0.01% 00:15:31.554 lat (msec) : 2=0.20%, 4=0.98%, 10=8.59%, 20=88.56%, 50=1.64% 00:15:31.554 cpu : usr=3.59%, sys=3.49%, ctx=578, majf=0, minf=1 00:15:31.554 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:15:31.554 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:31.554 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:31.554 issued rwts: total=5120,5238,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:31.554 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:31.554 00:15:31.554 Run status group 0 (all jobs): 00:15:31.554 READ: bw=64.6MiB/s (67.7MB/s), 8903KiB/s-22.3MiB/s (9117kB/s-23.4MB/s), io=65.2MiB (68.4MB), run=1003-1010msec 00:15:31.554 WRITE: bw=68.9MiB/s (72.3MB/s), 9.95MiB/s-23.8MiB/s (10.4MB/s-25.0MB/s), io=69.6MiB (73.0MB), run=1003-1010msec 00:15:31.554 00:15:31.554 Disk stats (read/write): 00:15:31.554 nvme0n1: ios=4902/5120, merge=0/0, ticks=51480/44269, in_queue=95749, util=91.58% 00:15:31.554 nvme0n2: ios=2089/2048, merge=0/0, ticks=31249/32139, in_queue=63388, util=93.81% 00:15:31.554 nvme0n3: ios=3118/3247, merge=0/0, ticks=48368/55130, in_queue=103498, util=98.13% 00:15:31.554 nvme0n4: ios=4246/4608, merge=0/0, ticks=14523/17638, in_queue=32161, util=95.49% 00:15:31.554 22:31:55 nvmf_tcp.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:15:31.554 [global] 00:15:31.554 thread=1 00:15:31.554 invalidate=1 00:15:31.554 rw=randwrite 00:15:31.554 time_based=1 00:15:31.554 runtime=1 00:15:31.554 ioengine=libaio 00:15:31.554 direct=1 00:15:31.554 bs=4096 00:15:31.554 iodepth=128 00:15:31.554 norandommap=0 00:15:31.554 numjobs=1 00:15:31.554 00:15:31.554 verify_dump=1 00:15:31.554 verify_backlog=512 00:15:31.554 verify_state_save=0 00:15:31.554 do_verify=1 00:15:31.554 verify=crc32c-intel 00:15:31.554 [job0] 00:15:31.554 filename=/dev/nvme0n1 00:15:31.554 [job1] 00:15:31.554 filename=/dev/nvme0n2 00:15:31.554 [job2] 00:15:31.554 filename=/dev/nvme0n3 00:15:31.554 [job3] 00:15:31.554 filename=/dev/nvme0n4 00:15:31.554 Could not set queue depth (nvme0n1) 00:15:31.554 Could not set queue 
depth (nvme0n2) 00:15:31.554 Could not set queue depth (nvme0n3) 00:15:31.554 Could not set queue depth (nvme0n4) 00:15:31.813 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:31.813 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:31.813 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:31.813 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:31.813 fio-3.35 00:15:31.813 Starting 4 threads 00:15:33.205 00:15:33.205 job0: (groupid=0, jobs=1): err= 0: pid=4183302: Mon Jul 15 22:31:56 2024 00:15:33.205 read: IOPS=4051, BW=15.8MiB/s (16.6MB/s)(15.9MiB/1004msec) 00:15:33.205 slat (nsec): min=1110, max=16088k, avg=122441.41, stdev=784675.77 00:15:33.205 clat (usec): min=671, max=61224, avg=15262.80, stdev=7339.40 00:15:33.205 lat (usec): min=3480, max=61225, avg=15385.24, stdev=7378.96 00:15:33.205 clat percentiles (usec): 00:15:33.206 | 1.00th=[ 3884], 5.00th=[ 7832], 10.00th=[ 9896], 20.00th=[10683], 00:15:33.206 | 30.00th=[11469], 40.00th=[12518], 50.00th=[13566], 60.00th=[14353], 00:15:33.206 | 70.00th=[15008], 80.00th=[16712], 90.00th=[25822], 95.00th=[31327], 00:15:33.206 | 99.00th=[50070], 99.50th=[50070], 99.90th=[50070], 99.95th=[50070], 00:15:33.206 | 99.99th=[61080] 00:15:33.206 write: IOPS=4079, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1004msec); 0 zone resets 00:15:33.206 slat (nsec): min=1896, max=36006k, avg=115895.70, stdev=780191.00 00:15:33.206 clat (usec): min=5083, max=52730, avg=15797.75, stdev=9198.87 00:15:33.206 lat (usec): min=5087, max=52733, avg=15913.65, stdev=9246.37 00:15:33.206 clat percentiles (usec): 00:15:33.206 | 1.00th=[ 7963], 5.00th=[ 9372], 10.00th=[ 9765], 20.00th=[10945], 00:15:33.206 | 30.00th=[12518], 40.00th=[12649], 50.00th=[13042], 60.00th=[13173], 00:15:33.206 | 70.00th=[14091], 80.00th=[16581], 90.00th=[23987], 95.00th=[41681], 00:15:33.206 | 99.00th=[52167], 99.50th=[52691], 99.90th=[52691], 99.95th=[52691], 00:15:33.206 | 99.99th=[52691] 00:15:33.206 bw ( KiB/s): min=13888, max=18880, per=22.29%, avg=16384.00, stdev=3529.88, samples=2 00:15:33.206 iops : min= 3472, max= 4720, avg=4096.00, stdev=882.47, samples=2 00:15:33.206 lat (usec) : 750=0.01% 00:15:33.206 lat (msec) : 4=0.51%, 10=10.96%, 20=74.95%, 50=12.05%, 100=1.51% 00:15:33.206 cpu : usr=2.49%, sys=4.39%, ctx=440, majf=0, minf=1 00:15:33.206 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:15:33.206 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:33.206 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:33.206 issued rwts: total=4068,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:33.206 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:33.206 job1: (groupid=0, jobs=1): err= 0: pid=4183326: Mon Jul 15 22:31:56 2024 00:15:33.206 read: IOPS=5079, BW=19.8MiB/s (20.8MB/s)(20.0MiB/1008msec) 00:15:33.206 slat (nsec): min=1072, max=12002k, avg=97285.27, stdev=699218.06 00:15:33.206 clat (usec): min=4149, max=31421, avg=12223.83, stdev=3739.60 00:15:33.206 lat (usec): min=4171, max=31437, avg=12321.11, stdev=3781.55 00:15:33.206 clat percentiles (usec): 00:15:33.206 | 1.00th=[ 5145], 5.00th=[ 7635], 10.00th=[ 8848], 20.00th=[ 9896], 00:15:33.206 | 30.00th=[10552], 40.00th=[11076], 50.00th=[11469], 60.00th=[11731], 00:15:33.206 | 70.00th=[12256], 
80.00th=[14484], 90.00th=[17433], 95.00th=[20055], 00:15:33.206 | 99.00th=[24773], 99.50th=[26084], 99.90th=[29492], 99.95th=[29492], 00:15:33.206 | 99.99th=[31327] 00:15:33.206 write: IOPS=5283, BW=20.6MiB/s (21.6MB/s)(20.8MiB/1008msec); 0 zone resets 00:15:33.206 slat (nsec): min=1945, max=15362k, avg=86705.75, stdev=492068.89 00:15:33.206 clat (usec): min=2655, max=46718, avg=12163.17, stdev=6739.46 00:15:33.206 lat (usec): min=2668, max=46728, avg=12249.87, stdev=6780.87 00:15:33.206 clat percentiles (usec): 00:15:33.206 | 1.00th=[ 3818], 5.00th=[ 5145], 10.00th=[ 6194], 20.00th=[ 7898], 00:15:33.206 | 30.00th=[ 9503], 40.00th=[10683], 50.00th=[11207], 60.00th=[11600], 00:15:33.206 | 70.00th=[11731], 80.00th=[13960], 90.00th=[17695], 95.00th=[27395], 00:15:33.206 | 99.00th=[42206], 99.50th=[44303], 99.90th=[46924], 99.95th=[46924], 00:15:33.206 | 99.99th=[46924] 00:15:33.206 bw ( KiB/s): min=20480, max=21112, per=28.29%, avg=20796.00, stdev=446.89, samples=2 00:15:33.206 iops : min= 5120, max= 5278, avg=5199.00, stdev=111.72, samples=2 00:15:33.206 lat (msec) : 4=0.78%, 10=27.31%, 20=65.23%, 50=6.68% 00:15:33.206 cpu : usr=3.77%, sys=5.16%, ctx=596, majf=0, minf=1 00:15:33.206 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:15:33.206 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:33.206 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:33.206 issued rwts: total=5120,5326,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:33.206 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:33.206 job2: (groupid=0, jobs=1): err= 0: pid=4183352: Mon Jul 15 22:31:56 2024 00:15:33.206 read: IOPS=4087, BW=16.0MiB/s (16.7MB/s)(16.0MiB/1002msec) 00:15:33.206 slat (nsec): min=1104, max=16659k, avg=117095.67, stdev=675792.04 00:15:33.206 clat (usec): min=6463, max=33447, avg=14942.05, stdev=3671.85 00:15:33.206 lat (usec): min=6469, max=33450, avg=15059.15, stdev=3684.61 00:15:33.206 clat percentiles (usec): 00:15:33.206 | 1.00th=[ 8225], 5.00th=[ 9765], 10.00th=[10814], 20.00th=[12387], 00:15:33.206 | 30.00th=[13566], 40.00th=[14222], 50.00th=[14484], 60.00th=[14877], 00:15:33.206 | 70.00th=[15664], 80.00th=[16581], 90.00th=[19530], 95.00th=[21890], 00:15:33.206 | 99.00th=[28181], 99.50th=[32375], 99.90th=[33424], 99.95th=[33424], 00:15:33.206 | 99.99th=[33424] 00:15:33.206 write: IOPS=4152, BW=16.2MiB/s (17.0MB/s)(16.3MiB/1002msec); 0 zone resets 00:15:33.206 slat (usec): min=2, max=18998, avg=111.98, stdev=666.26 00:15:33.206 clat (usec): min=468, max=50146, avg=15836.37, stdev=7502.73 00:15:33.206 lat (usec): min=1178, max=50159, avg=15948.35, stdev=7533.50 00:15:33.206 clat percentiles (usec): 00:15:33.206 | 1.00th=[ 2868], 5.00th=[ 5211], 10.00th=[ 7963], 20.00th=[12780], 00:15:33.206 | 30.00th=[13566], 40.00th=[13829], 50.00th=[14091], 60.00th=[15139], 00:15:33.206 | 70.00th=[16319], 80.00th=[19006], 90.00th=[23725], 95.00th=[33162], 00:15:33.206 | 99.00th=[43254], 99.50th=[50070], 99.90th=[50070], 99.95th=[50070], 00:15:33.206 | 99.99th=[50070] 00:15:33.206 bw ( KiB/s): min=16384, max=16384, per=22.29%, avg=16384.00, stdev= 0.00, samples=2 00:15:33.206 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=2 00:15:33.206 lat (usec) : 500=0.01% 00:15:33.206 lat (msec) : 2=0.19%, 4=1.10%, 10=8.15%, 20=78.77%, 50=11.57% 00:15:33.206 lat (msec) : 100=0.21% 00:15:33.206 cpu : usr=4.20%, sys=4.00%, ctx=401, majf=0, minf=1 00:15:33.206 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, 
>=64=99.2% 00:15:33.206 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:33.206 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:33.206 issued rwts: total=4096,4161,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:33.206 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:33.206 job3: (groupid=0, jobs=1): err= 0: pid=4183363: Mon Jul 15 22:31:56 2024 00:15:33.206 read: IOPS=4580, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1006msec) 00:15:33.206 slat (nsec): min=1152, max=13902k, avg=101149.99, stdev=591422.56 00:15:33.206 clat (usec): min=2279, max=35905, avg=13331.83, stdev=3761.63 00:15:33.206 lat (usec): min=2288, max=35971, avg=13432.98, stdev=3792.31 00:15:33.206 clat percentiles (usec): 00:15:33.206 | 1.00th=[ 4948], 5.00th=[ 9372], 10.00th=[10028], 20.00th=[11076], 00:15:33.206 | 30.00th=[11600], 40.00th=[12125], 50.00th=[12780], 60.00th=[13435], 00:15:33.206 | 70.00th=[13960], 80.00th=[14615], 90.00th=[16450], 95.00th=[21890], 00:15:33.206 | 99.00th=[28181], 99.50th=[30802], 99.90th=[33817], 99.95th=[33817], 00:15:33.206 | 99.99th=[35914] 00:15:33.206 write: IOPS=4913, BW=19.2MiB/s (20.1MB/s)(19.3MiB/1006msec); 0 zone resets 00:15:33.206 slat (usec): min=2, max=10270, avg=98.32, stdev=625.52 00:15:33.206 clat (usec): min=1236, max=52163, avg=13389.00, stdev=6480.33 00:15:33.206 lat (usec): min=1244, max=52174, avg=13487.32, stdev=6512.26 00:15:33.206 clat percentiles (usec): 00:15:33.206 | 1.00th=[ 3785], 5.00th=[ 7373], 10.00th=[ 8979], 20.00th=[10945], 00:15:33.206 | 30.00th=[11731], 40.00th=[12518], 50.00th=[12780], 60.00th=[13042], 00:15:33.206 | 70.00th=[13435], 80.00th=[13829], 90.00th=[16188], 95.00th=[17957], 00:15:33.206 | 99.00th=[51643], 99.50th=[52167], 99.90th=[52167], 99.95th=[52167], 00:15:33.206 | 99.99th=[52167] 00:15:33.206 bw ( KiB/s): min=19144, max=19384, per=26.20%, avg=19264.00, stdev=169.71, samples=2 00:15:33.206 iops : min= 4786, max= 4846, avg=4816.00, stdev=42.43, samples=2 00:15:33.206 lat (msec) : 2=0.03%, 4=1.13%, 10=10.50%, 20=83.28%, 50=4.24% 00:15:33.206 lat (msec) : 100=0.82% 00:15:33.206 cpu : usr=3.98%, sys=6.27%, ctx=346, majf=0, minf=1 00:15:33.206 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:15:33.206 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:33.206 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:33.206 issued rwts: total=4608,4943,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:33.206 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:33.206 00:15:33.206 Run status group 0 (all jobs): 00:15:33.206 READ: bw=69.3MiB/s (72.7MB/s), 15.8MiB/s-19.8MiB/s (16.6MB/s-20.8MB/s), io=69.9MiB (73.3MB), run=1002-1008msec 00:15:33.206 WRITE: bw=71.8MiB/s (75.3MB/s), 15.9MiB/s-20.6MiB/s (16.7MB/s-21.6MB/s), io=72.4MiB (75.9MB), run=1002-1008msec 00:15:33.206 00:15:33.206 Disk stats (read/write): 00:15:33.206 nvme0n1: ios=3093/3232, merge=0/0, ticks=21229/17360, in_queue=38589, util=99.20% 00:15:33.206 nvme0n2: ios=4127/4111, merge=0/0, ticks=45250/40790, in_queue=86040, util=98.87% 00:15:33.206 nvme0n3: ios=3221/3584, merge=0/0, ticks=24934/27061, in_queue=51995, util=98.59% 00:15:33.206 nvme0n4: ios=3611/3861, merge=0/0, ticks=27102/31065, in_queue=58167, util=99.11% 00:15:33.206 22:31:56 nvmf_tcp.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:15:33.206 22:31:56 nvmf_tcp.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 
1 -t read -r 10 00:15:33.206 22:31:56 nvmf_tcp.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=4183471 00:15:33.206 22:31:56 nvmf_tcp.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:15:33.206 [global] 00:15:33.206 thread=1 00:15:33.206 invalidate=1 00:15:33.206 rw=read 00:15:33.206 time_based=1 00:15:33.206 runtime=10 00:15:33.206 ioengine=libaio 00:15:33.206 direct=1 00:15:33.206 bs=4096 00:15:33.206 iodepth=1 00:15:33.206 norandommap=1 00:15:33.206 numjobs=1 00:15:33.206 00:15:33.206 [job0] 00:15:33.206 filename=/dev/nvme0n1 00:15:33.206 [job1] 00:15:33.206 filename=/dev/nvme0n2 00:15:33.206 [job2] 00:15:33.206 filename=/dev/nvme0n3 00:15:33.206 [job3] 00:15:33.206 filename=/dev/nvme0n4 00:15:33.206 Could not set queue depth (nvme0n1) 00:15:33.206 Could not set queue depth (nvme0n2) 00:15:33.206 Could not set queue depth (nvme0n3) 00:15:33.206 Could not set queue depth (nvme0n4) 00:15:33.469 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:33.469 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:33.469 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:33.469 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:33.469 fio-3.35 00:15:33.469 Starting 4 threads 00:15:36.002 22:31:59 nvmf_tcp.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:15:36.260 22:32:00 nvmf_tcp.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:15:36.260 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=9334784, buflen=4096 00:15:36.260 fio: pid=4183826, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:15:36.518 22:32:00 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:15:36.518 22:32:00 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:15:36.518 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=12267520, buflen=4096 00:15:36.518 fio: pid=4183825, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:15:36.518 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=11132928, buflen=4096 00:15:36.518 fio: pid=4183779, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:15:36.518 22:32:00 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:15:36.518 22:32:00 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:15:36.777 22:32:00 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:15:36.777 22:32:00 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:15:36.777 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=335872, buflen=4096 00:15:36.777 fio: pid=4183802, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:15:36.777 00:15:36.777 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u 
error, error=Remote I/O error): pid=4183779: Mon Jul 15 22:32:00 2024 00:15:36.777 read: IOPS=900, BW=3602KiB/s (3689kB/s)(10.6MiB/3018msec) 00:15:36.777 slat (usec): min=6, max=28816, avg=29.49, stdev=718.04 00:15:36.777 clat (usec): min=217, max=42121, avg=1075.97, stdev=5364.96 00:15:36.777 lat (usec): min=224, max=42144, avg=1105.47, stdev=5412.28 00:15:36.777 clat percentiles (usec): 00:15:36.777 | 1.00th=[ 249], 5.00th=[ 285], 10.00th=[ 293], 20.00th=[ 310], 00:15:36.777 | 30.00th=[ 330], 40.00th=[ 338], 50.00th=[ 347], 60.00th=[ 355], 00:15:36.777 | 70.00th=[ 371], 80.00th=[ 400], 90.00th=[ 441], 95.00th=[ 498], 00:15:36.777 | 99.00th=[41157], 99.50th=[41157], 99.90th=[42206], 99.95th=[42206], 00:15:36.777 | 99.99th=[42206] 00:15:36.777 bw ( KiB/s): min= 104, max= 7568, per=30.30%, avg=3017.60, stdev=3729.47, samples=5 00:15:36.777 iops : min= 26, max= 1892, avg=754.40, stdev=932.37, samples=5 00:15:36.777 lat (usec) : 250=1.03%, 500=94.01%, 750=3.09%, 1000=0.04% 00:15:36.777 lat (msec) : 10=0.04%, 50=1.77% 00:15:36.777 cpu : usr=0.27%, sys=0.86%, ctx=2724, majf=0, minf=1 00:15:36.777 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:36.777 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:36.777 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:36.777 issued rwts: total=2719,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:36.777 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:36.777 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=4183802: Mon Jul 15 22:32:00 2024 00:15:36.777 read: IOPS=25, BW=101KiB/s (104kB/s)(328KiB/3244msec) 00:15:36.777 slat (usec): min=10, max=10667, avg=151.09, stdev=1168.45 00:15:36.777 clat (usec): min=388, max=42149, avg=39150.79, stdev=8804.46 00:15:36.777 lat (usec): min=411, max=51845, avg=39303.43, stdev=8911.85 00:15:36.777 clat percentiles (usec): 00:15:36.777 | 1.00th=[ 388], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:15:36.777 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:15:36.777 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41681], 95.00th=[42206], 00:15:36.777 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:15:36.777 | 99.99th=[42206] 00:15:36.777 bw ( KiB/s): min= 96, max= 108, per=1.00%, avg=100.67, stdev= 5.32, samples=6 00:15:36.777 iops : min= 24, max= 27, avg=25.17, stdev= 1.33, samples=6 00:15:36.777 lat (usec) : 500=2.41%, 750=2.41% 00:15:36.777 lat (msec) : 50=93.98% 00:15:36.777 cpu : usr=0.00%, sys=0.12%, ctx=85, majf=0, minf=1 00:15:36.777 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:36.777 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:36.777 complete : 0=1.2%, 4=98.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:36.777 issued rwts: total=83,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:36.777 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:36.777 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=4183825: Mon Jul 15 22:32:00 2024 00:15:36.777 read: IOPS=1049, BW=4198KiB/s (4298kB/s)(11.7MiB/2854msec) 00:15:36.777 slat (usec): min=4, max=3724, avg= 9.20, stdev=67.94 00:15:36.777 clat (usec): min=249, max=42155, avg=935.00, stdev=5033.21 00:15:36.777 lat (usec): min=257, max=44949, avg=944.20, stdev=5045.37 00:15:36.777 clat percentiles (usec): 00:15:36.777 | 1.00th=[ 258], 5.00th=[ 265], 
10.00th=[ 269], 20.00th=[ 277], 00:15:36.777 | 30.00th=[ 281], 40.00th=[ 285], 50.00th=[ 293], 60.00th=[ 297], 00:15:36.777 | 70.00th=[ 306], 80.00th=[ 318], 90.00th=[ 355], 95.00th=[ 396], 00:15:36.777 | 99.00th=[41157], 99.50th=[41157], 99.90th=[42206], 99.95th=[42206], 00:15:36.777 | 99.99th=[42206] 00:15:36.777 bw ( KiB/s): min= 96, max=11496, per=47.76%, avg=4755.20, stdev=4808.29, samples=5 00:15:36.777 iops : min= 24, max= 2874, avg=1188.80, stdev=1202.07, samples=5 00:15:36.777 lat (usec) : 250=0.03%, 500=98.16%, 750=0.20% 00:15:36.777 lat (msec) : 50=1.57% 00:15:36.777 cpu : usr=0.46%, sys=0.91%, ctx=2997, majf=0, minf=1 00:15:36.777 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:36.777 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:36.777 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:36.777 issued rwts: total=2996,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:36.777 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:36.777 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=4183826: Mon Jul 15 22:32:00 2024 00:15:36.777 read: IOPS=852, BW=3408KiB/s (3490kB/s)(9116KiB/2675msec) 00:15:36.777 slat (nsec): min=3328, max=35189, avg=8870.90, stdev=2584.57 00:15:36.777 clat (usec): min=267, max=42104, avg=1153.61, stdev=5671.64 00:15:36.777 lat (usec): min=274, max=42113, avg=1162.48, stdev=5673.09 00:15:36.777 clat percentiles (usec): 00:15:36.777 | 1.00th=[ 277], 5.00th=[ 285], 10.00th=[ 289], 20.00th=[ 297], 00:15:36.777 | 30.00th=[ 306], 40.00th=[ 318], 50.00th=[ 330], 60.00th=[ 343], 00:15:36.777 | 70.00th=[ 367], 80.00th=[ 392], 90.00th=[ 465], 95.00th=[ 502], 00:15:36.777 | 99.00th=[41157], 99.50th=[41157], 99.90th=[42206], 99.95th=[42206], 00:15:36.777 | 99.99th=[42206] 00:15:36.777 bw ( KiB/s): min= 96, max= 9376, per=36.52%, avg=3636.80, stdev=4597.19, samples=5 00:15:36.777 iops : min= 24, max= 2344, avg=909.20, stdev=1149.30, samples=5 00:15:36.777 lat (usec) : 500=94.74%, 750=3.11% 00:15:36.777 lat (msec) : 2=0.09%, 4=0.04%, 50=1.97% 00:15:36.777 cpu : usr=0.49%, sys=1.01%, ctx=2280, majf=0, minf=2 00:15:36.777 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:36.778 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:36.778 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:36.778 issued rwts: total=2280,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:36.778 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:36.778 00:15:36.778 Run status group 0 (all jobs): 00:15:36.778 READ: bw=9956KiB/s (10.2MB/s), 101KiB/s-4198KiB/s (104kB/s-4298kB/s), io=31.5MiB (33.1MB), run=2675-3244msec 00:15:36.778 00:15:36.778 Disk stats (read/write): 00:15:36.778 nvme0n1: ios=2286/0, merge=0/0, ticks=3184/0, in_queue=3184, util=98.36% 00:15:36.778 nvme0n2: ios=77/0, merge=0/0, ticks=3007/0, in_queue=3007, util=94.63% 00:15:36.778 nvme0n3: ios=2994/0, merge=0/0, ticks=2740/0, in_queue=2740, util=96.06% 00:15:36.778 nvme0n4: ios=2275/0, merge=0/0, ticks=2487/0, in_queue=2487, util=96.38% 00:15:37.037 22:32:00 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:15:37.037 22:32:00 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:15:37.295 22:32:01 nvmf_tcp.nvmf_fio_target -- 
target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:15:37.295 22:32:01 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:15:37.295 22:32:01 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:15:37.295 22:32:01 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:15:37.554 22:32:01 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:15:37.554 22:32:01 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:15:37.812 22:32:01 nvmf_tcp.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:15:37.812 22:32:01 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # wait 4183471 00:15:37.812 22:32:01 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:15:37.812 22:32:01 nvmf_tcp.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:37.812 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:37.812 22:32:01 nvmf_tcp.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:37.812 22:32:01 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1219 -- # local i=0 00:15:37.812 22:32:01 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:15:37.812 22:32:01 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:37.812 22:32:01 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:15:37.812 22:32:01 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:37.812 22:32:01 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1231 -- # return 0 00:15:37.812 22:32:01 nvmf_tcp.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:15:37.812 22:32:01 nvmf_tcp.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:15:37.812 nvmf hotplug test: fio failed as expected 00:15:37.812 22:32:01 nvmf_tcp.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:38.071 22:32:01 nvmf_tcp.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:15:38.071 22:32:01 nvmf_tcp.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:15:38.071 22:32:01 nvmf_tcp.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:15:38.071 22:32:01 nvmf_tcp.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:15:38.071 22:32:01 nvmf_tcp.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:15:38.071 22:32:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:38.071 22:32:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@117 -- # sync 00:15:38.072 22:32:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:38.072 22:32:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@120 -- # set +e 00:15:38.072 22:32:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:38.072 22:32:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@122 -- # modprobe -v 
-r nvme-tcp 00:15:38.072 rmmod nvme_tcp 00:15:38.072 rmmod nvme_fabrics 00:15:38.072 rmmod nvme_keyring 00:15:38.072 22:32:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:38.072 22:32:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@124 -- # set -e 00:15:38.072 22:32:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@125 -- # return 0 00:15:38.072 22:32:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@489 -- # '[' -n 4180753 ']' 00:15:38.072 22:32:01 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@490 -- # killprocess 4180753 00:15:38.072 22:32:01 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@948 -- # '[' -z 4180753 ']' 00:15:38.072 22:32:01 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@952 -- # kill -0 4180753 00:15:38.072 22:32:01 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@953 -- # uname 00:15:38.072 22:32:01 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:38.072 22:32:01 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4180753 00:15:38.072 22:32:02 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:38.072 22:32:02 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:38.072 22:32:02 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4180753' 00:15:38.072 killing process with pid 4180753 00:15:38.072 22:32:02 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@967 -- # kill 4180753 00:15:38.072 22:32:02 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@972 -- # wait 4180753 00:15:38.331 22:32:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:38.331 22:32:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:38.331 22:32:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:38.331 22:32:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:38.331 22:32:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:38.331 22:32:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:38.331 22:32:02 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:38.331 22:32:02 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:40.864 22:32:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:40.864 00:15:40.864 real 0m26.153s 00:15:40.864 user 1m47.246s 00:15:40.864 sys 0m7.481s 00:15:40.864 22:32:04 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:40.864 22:32:04 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:15:40.864 ************************************ 00:15:40.864 END TEST nvmf_fio_target 00:15:40.864 ************************************ 00:15:40.864 22:32:04 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:15:40.864 22:32:04 nvmf_tcp -- nvmf/nvmf.sh@56 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:15:40.864 22:32:04 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:40.864 22:32:04 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:40.864 22:32:04 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:40.864 ************************************ 
00:15:40.864 START TEST nvmf_bdevio 00:15:40.864 ************************************ 00:15:40.865 22:32:04 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:15:40.865 * Looking for test storage... 00:15:40.865 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:40.865 22:32:04 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:40.865 22:32:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:15:40.865 22:32:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:40.865 22:32:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:40.865 22:32:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:40.865 22:32:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:40.865 22:32:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:40.865 22:32:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:40.865 22:32:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:40.865 22:32:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:40.865 22:32:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:40.865 22:32:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:40.865 22:32:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:40.865 22:32:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:15:40.865 22:32:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:40.865 22:32:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:40.865 22:32:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:40.865 22:32:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:40.865 22:32:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:40.865 22:32:04 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:40.865 22:32:04 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:40.865 22:32:04 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:40.865 22:32:04 nvmf_tcp.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:40.865 22:32:04 nvmf_tcp.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:40.865 22:32:04 nvmf_tcp.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:40.865 22:32:04 nvmf_tcp.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:15:40.865 22:32:04 nvmf_tcp.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:40.865 22:32:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@47 -- # : 0 00:15:40.865 22:32:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:40.865 22:32:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:40.865 22:32:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:40.865 22:32:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:40.865 22:32:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:40.865 22:32:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:40.865 22:32:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:40.865 22:32:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:40.865 22:32:04 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:40.865 22:32:04 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:40.865 22:32:04 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:15:40.865 22:32:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:40.865 22:32:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:40.865 22:32:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:40.865 22:32:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:40.865 22:32:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:40.865 22:32:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:40.865 22:32:04 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 14> /dev/null' 00:15:40.865 22:32:04 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:40.865 22:32:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:40.865 22:32:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:40.865 22:32:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@285 -- # xtrace_disable 00:15:40.865 22:32:04 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:15:46.205 22:32:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:46.205 22:32:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@291 -- # pci_devs=() 00:15:46.205 22:32:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:46.205 22:32:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:46.205 22:32:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:46.205 22:32:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:46.205 22:32:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:46.205 22:32:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@295 -- # net_devs=() 00:15:46.205 22:32:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:46.205 22:32:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@296 -- # e810=() 00:15:46.205 22:32:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@296 -- # local -ga e810 00:15:46.205 22:32:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@297 -- # x722=() 00:15:46.205 22:32:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@297 -- # local -ga x722 00:15:46.205 22:32:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@298 -- # mlx=() 00:15:46.205 22:32:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@298 -- # local -ga mlx 00:15:46.205 22:32:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:46.205 22:32:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:46.205 22:32:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:46.205 22:32:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:46.205 22:32:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:46.205 22:32:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:46.205 22:32:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:46.205 22:32:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:46.205 22:32:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:46.205 22:32:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:46.205 22:32:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:46.205 22:32:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:46.205 22:32:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:46.205 22:32:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:46.205 22:32:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:46.205 22:32:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:46.205 22:32:09 nvmf_tcp.nvmf_bdevio -- 
nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:46.205 22:32:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:46.205 22:32:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:15:46.205 Found 0000:86:00.0 (0x8086 - 0x159b) 00:15:46.205 22:32:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:46.205 22:32:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:46.205 22:32:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:46.205 22:32:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:46.205 22:32:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:46.205 22:32:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:46.205 22:32:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:15:46.205 Found 0000:86:00.1 (0x8086 - 0x159b) 00:15:46.205 22:32:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:46.205 22:32:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:46.205 22:32:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:46.205 22:32:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:46.205 22:32:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:46.205 22:32:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:46.205 22:32:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:46.205 22:32:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:46.205 22:32:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:46.205 22:32:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:46.205 22:32:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:46.205 22:32:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:46.205 22:32:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:46.205 22:32:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:46.205 22:32:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:46.205 22:32:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:15:46.205 Found net devices under 0000:86:00.0: cvl_0_0 00:15:46.205 22:32:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:46.205 22:32:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:46.205 22:32:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:46.205 22:32:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:46.205 22:32:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:46.205 22:32:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:46.205 22:32:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:46.205 22:32:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:46.205 22:32:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:15:46.205 
Found net devices under 0000:86:00.1: cvl_0_1 00:15:46.205 22:32:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:46.205 22:32:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:46.205 22:32:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # is_hw=yes 00:15:46.205 22:32:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:46.205 22:32:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:46.205 22:32:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:46.205 22:32:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:46.205 22:32:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:46.205 22:32:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:46.205 22:32:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:46.205 22:32:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:46.205 22:32:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:46.205 22:32:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:46.205 22:32:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:46.205 22:32:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:46.205 22:32:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:46.205 22:32:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:46.205 22:32:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:46.205 22:32:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:46.205 22:32:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:46.205 22:32:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:46.205 22:32:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:46.205 22:32:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:46.205 22:32:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:46.205 22:32:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:46.205 22:32:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:46.205 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:46.205 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.190 ms 00:15:46.205 00:15:46.205 --- 10.0.0.2 ping statistics --- 00:15:46.205 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:46.205 rtt min/avg/max/mdev = 0.190/0.190/0.190/0.000 ms 00:15:46.205 22:32:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:46.205 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:46.205 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.257 ms 00:15:46.205 00:15:46.205 --- 10.0.0.1 ping statistics --- 00:15:46.205 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:46.205 rtt min/avg/max/mdev = 0.257/0.257/0.257/0.000 ms 00:15:46.205 22:32:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:46.205 22:32:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@422 -- # return 0 00:15:46.205 22:32:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:46.205 22:32:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:46.205 22:32:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:46.205 22:32:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:46.205 22:32:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:46.205 22:32:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:46.205 22:32:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:46.205 22:32:09 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:15:46.205 22:32:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:46.205 22:32:09 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:46.205 22:32:09 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:15:46.205 22:32:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@481 -- # nvmfpid=4187841 00:15:46.205 22:32:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@482 -- # waitforlisten 4187841 00:15:46.205 22:32:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:15:46.205 22:32:09 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@829 -- # '[' -z 4187841 ']' 00:15:46.205 22:32:09 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:46.205 22:32:09 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:46.205 22:32:09 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:46.205 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:46.205 22:32:09 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:46.205 22:32:09 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:15:46.205 [2024-07-15 22:32:09.548363] Starting SPDK v24.09-pre git sha1 f8598a71f / DPDK 24.03.0 initialization... 00:15:46.205 [2024-07-15 22:32:09.548410] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:46.205 EAL: No free 2048 kB hugepages reported on node 1 00:15:46.205 [2024-07-15 22:32:09.607138] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:46.205 [2024-07-15 22:32:09.682589] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:46.205 [2024-07-15 22:32:09.682631] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:15:46.205 [2024-07-15 22:32:09.682638] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:46.205 [2024-07-15 22:32:09.682645] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:46.205 [2024-07-15 22:32:09.682649] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:46.205 [2024-07-15 22:32:09.682769] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:15:46.205 [2024-07-15 22:32:09.682879] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:15:46.205 [2024-07-15 22:32:09.682985] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:15:46.205 [2024-07-15 22:32:09.682987] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:15:46.464 22:32:10 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:46.464 22:32:10 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@862 -- # return 0 00:15:46.464 22:32:10 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:46.464 22:32:10 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:46.464 22:32:10 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:15:46.464 22:32:10 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:46.464 22:32:10 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:46.464 22:32:10 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:46.464 22:32:10 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:15:46.464 [2024-07-15 22:32:10.409323] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:46.464 22:32:10 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:46.464 22:32:10 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:46.464 22:32:10 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:46.464 22:32:10 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:15:46.723 Malloc0 00:15:46.723 22:32:10 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:46.723 22:32:10 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:46.723 22:32:10 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:46.723 22:32:10 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:15:46.723 22:32:10 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:46.723 22:32:10 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:46.723 22:32:10 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:46.723 22:32:10 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:15:46.723 22:32:10 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:46.723 22:32:10 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:46.723 22:32:10 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:46.723 22:32:10 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 
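The rpc_cmd calls traced above are the entire target-side bring-up for this test: create the TCP transport, back a subsystem with a RAM bdev, and expose it on a listener. Issued standalone against a running nvmf_tgt, the equivalent sequence is roughly the following (a minimal sketch reusing the NQN, serial, size, and address from this run; the scripts/rpc.py path relative to the SPDK repo root is an assumption — this run used the absolute Jenkins workspace path):

  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192       # same transport options as the trace
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0          # 64 MiB malloc bdev, 512 B blocks
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The -a flag on nvmf_create_subsystem allows any host to connect, which is why no host NQN has to be whitelisted before the listener notice that follows.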
00:15:46.723 [2024-07-15 22:32:10.460759] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:46.723 22:32:10 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:46.723 22:32:10 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:15:46.723 22:32:10 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:15:46.723 22:32:10 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # config=() 00:15:46.723 22:32:10 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # local subsystem config 00:15:46.723 22:32:10 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:15:46.723 22:32:10 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:15:46.723 { 00:15:46.723 "params": { 00:15:46.723 "name": "Nvme$subsystem", 00:15:46.723 "trtype": "$TEST_TRANSPORT", 00:15:46.723 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:46.723 "adrfam": "ipv4", 00:15:46.723 "trsvcid": "$NVMF_PORT", 00:15:46.723 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:46.723 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:46.723 "hdgst": ${hdgst:-false}, 00:15:46.723 "ddgst": ${ddgst:-false} 00:15:46.723 }, 00:15:46.723 "method": "bdev_nvme_attach_controller" 00:15:46.723 } 00:15:46.723 EOF 00:15:46.723 )") 00:15:46.723 22:32:10 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # cat 00:15:46.723 22:32:10 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@556 -- # jq . 00:15:46.723 22:32:10 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@557 -- # IFS=, 00:15:46.723 22:32:10 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:15:46.723 "params": { 00:15:46.723 "name": "Nvme1", 00:15:46.723 "trtype": "tcp", 00:15:46.723 "traddr": "10.0.0.2", 00:15:46.723 "adrfam": "ipv4", 00:15:46.723 "trsvcid": "4420", 00:15:46.723 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:46.723 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:46.723 "hdgst": false, 00:15:46.723 "ddgst": false 00:15:46.723 }, 00:15:46.723 "method": "bdev_nvme_attach_controller" 00:15:46.723 }' 00:15:46.723 [2024-07-15 22:32:10.512017] Starting SPDK v24.09-pre git sha1 f8598a71f / DPDK 24.03.0 initialization... 
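The JSON block printed by gen_nvmf_target_json above is fed to bdevio over /dev/fd/62 and boils down to a single bdev_nvme_attach_controller call. Against an already running SPDK application, the same attachment could be made directly over RPC — a sketch using the parameters from that generated config (the flag-to-field mapping here is an illustration, not lifted from this trace):

  scripts/rpc.py bdev_nvme_attach_controller -b Nvme1 -t tcp -f ipv4 \
      -a 10.0.0.2 -s 4420 \
      -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1

Header and data digests are left at their defaults, matching the "hdgst": false and "ddgst": false fields in the generated config.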
00:15:46.723 [2024-07-15 22:32:10.512060] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4188091 ] 00:15:46.723 EAL: No free 2048 kB hugepages reported on node 1 00:15:46.723 [2024-07-15 22:32:10.565850] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:46.723 [2024-07-15 22:32:10.641618] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:46.723 [2024-07-15 22:32:10.641713] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:46.723 [2024-07-15 22:32:10.641715] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:46.981 I/O targets: 00:15:46.981 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:15:46.981 00:15:46.981 00:15:46.981 CUnit - A unit testing framework for C - Version 2.1-3 00:15:46.981 http://cunit.sourceforge.net/ 00:15:46.981 00:15:46.981 00:15:46.981 Suite: bdevio tests on: Nvme1n1 00:15:46.981 Test: blockdev write read block ...passed 00:15:46.981 Test: blockdev write zeroes read block ...passed 00:15:46.981 Test: blockdev write zeroes read no split ...passed 00:15:46.981 Test: blockdev write zeroes read split ...passed 00:15:47.239 Test: blockdev write zeroes read split partial ...passed 00:15:47.239 Test: blockdev reset ...[2024-07-15 22:32:11.004936] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:15:47.239 [2024-07-15 22:32:11.005002] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12ef6d0 (9): Bad file descriptor 00:15:47.239 [2024-07-15 22:32:11.058651] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:15:47.239 passed 00:15:47.239 Test: blockdev write read 8 blocks ...passed 00:15:47.239 Test: blockdev write read size > 128k ...passed 00:15:47.239 Test: blockdev write read invalid size ...passed 00:15:47.239 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:47.239 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:47.239 Test: blockdev write read max offset ...passed 00:15:47.498 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:47.498 Test: blockdev writev readv 8 blocks ...passed 00:15:47.498 Test: blockdev writev readv 30 x 1block ...passed 00:15:47.498 Test: blockdev writev readv block ...passed 00:15:47.498 Test: blockdev writev readv size > 128k ...passed 00:15:47.498 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:47.498 Test: blockdev comparev and writev ...[2024-07-15 22:32:11.311814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:47.498 [2024-07-15 22:32:11.311843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:47.498 [2024-07-15 22:32:11.311857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:47.498 [2024-07-15 22:32:11.311865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:15:47.498 [2024-07-15 22:32:11.312151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:47.498 [2024-07-15 22:32:11.312163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:15:47.498 [2024-07-15 22:32:11.312176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:47.498 [2024-07-15 22:32:11.312184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:15:47.498 [2024-07-15 22:32:11.312472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:47.498 [2024-07-15 22:32:11.312490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:15:47.498 [2024-07-15 22:32:11.312502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:47.498 [2024-07-15 22:32:11.312509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:15:47.498 [2024-07-15 22:32:11.312795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:47.498 [2024-07-15 22:32:11.312806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:15:47.498 [2024-07-15 22:32:11.312817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:47.498 [2024-07-15 22:32:11.312824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:15:47.498 passed 00:15:47.498 Test: blockdev nvme passthru rw ...passed 00:15:47.498 Test: blockdev nvme passthru vendor specific ...[2024-07-15 22:32:11.395671] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:15:47.498 [2024-07-15 22:32:11.395688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:15:47.498 [2024-07-15 22:32:11.395839] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:15:47.498 [2024-07-15 22:32:11.395849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:15:47.498 [2024-07-15 22:32:11.395995] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:15:47.498 [2024-07-15 22:32:11.396005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:15:47.498 [2024-07-15 22:32:11.396155] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:15:47.498 [2024-07-15 22:32:11.396165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:15:47.498 passed 00:15:47.498 Test: blockdev nvme admin passthru ...passed 00:15:47.498 Test: blockdev copy ...passed 00:15:47.498 00:15:47.498 Run Summary: Type Total Ran Passed Failed Inactive 00:15:47.498 suites 1 1 n/a 0 0 00:15:47.498 tests 23 23 23 0 0 00:15:47.498 asserts 152 152 152 0 n/a 00:15:47.498 00:15:47.498 Elapsed time = 1.318 seconds 00:15:47.758 22:32:11 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:47.758 22:32:11 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:47.758 22:32:11 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:15:47.758 22:32:11 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:47.758 22:32:11 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:15:47.758 22:32:11 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:15:47.758 22:32:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:47.758 22:32:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@117 -- # sync 00:15:47.758 22:32:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:47.758 22:32:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@120 -- # set +e 00:15:47.758 22:32:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:47.758 22:32:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:47.758 rmmod nvme_tcp 00:15:47.758 rmmod nvme_fabrics 00:15:47.758 rmmod nvme_keyring 00:15:47.758 22:32:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:47.758 22:32:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@124 -- # set -e 00:15:47.758 22:32:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@125 -- # return 0 00:15:47.758 22:32:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@489 -- # '[' -n 4187841 ']' 00:15:47.758 22:32:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@490 -- # killprocess 4187841 00:15:47.758 22:32:11 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@948 -- # '[' -z 
4187841 ']' 00:15:47.758 22:32:11 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@952 -- # kill -0 4187841 00:15:47.758 22:32:11 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@953 -- # uname 00:15:47.758 22:32:11 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:47.758 22:32:11 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4187841 00:15:48.017 22:32:11 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@954 -- # process_name=reactor_3 00:15:48.017 22:32:11 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@958 -- # '[' reactor_3 = sudo ']' 00:15:48.017 22:32:11 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4187841' 00:15:48.017 killing process with pid 4187841 00:15:48.017 22:32:11 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@967 -- # kill 4187841 00:15:48.017 22:32:11 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@972 -- # wait 4187841 00:15:48.017 22:32:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:48.017 22:32:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:48.017 22:32:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:48.017 22:32:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:48.017 22:32:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:48.017 22:32:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:48.017 22:32:11 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:48.017 22:32:11 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:50.552 22:32:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:50.552 00:15:50.552 real 0m9.639s 00:15:50.552 user 0m12.509s 00:15:50.552 sys 0m4.275s 00:15:50.552 22:32:13 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:50.552 22:32:13 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:15:50.552 ************************************ 00:15:50.552 END TEST nvmf_bdevio 00:15:50.552 ************************************ 00:15:50.552 22:32:14 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:15:50.552 22:32:14 nvmf_tcp -- nvmf/nvmf.sh@57 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:15:50.552 22:32:14 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:50.552 22:32:14 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:50.552 22:32:14 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:50.552 ************************************ 00:15:50.552 START TEST nvmf_auth_target 00:15:50.552 ************************************ 00:15:50.552 22:32:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:15:50.552 * Looking for test storage... 
00:15:50.552 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:50.552 22:32:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:50.552 22:32:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:15:50.552 22:32:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:50.552 22:32:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:50.552 22:32:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:50.552 22:32:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:50.552 22:32:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:50.552 22:32:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:50.552 22:32:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:50.552 22:32:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:50.552 22:32:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:50.552 22:32:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:50.552 22:32:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:50.552 22:32:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:15:50.552 22:32:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:50.552 22:32:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:50.552 22:32:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:50.552 22:32:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:50.552 22:32:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:50.552 22:32:14 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:50.552 22:32:14 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:50.552 22:32:14 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:50.552 22:32:14 nvmf_tcp.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:50.552 22:32:14 nvmf_tcp.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:50.552 22:32:14 nvmf_tcp.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:50.552 22:32:14 nvmf_tcp.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:15:50.552 22:32:14 nvmf_tcp.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:50.552 22:32:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@47 -- # : 0 00:15:50.552 22:32:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:50.552 22:32:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:50.552 22:32:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:50.552 22:32:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:50.552 22:32:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:50.552 22:32:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:50.552 22:32:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:50.552 22:32:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:50.552 22:32:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:15:50.553 22:32:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:15:50.553 22:32:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:15:50.553 22:32:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:50.553 22:32:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:15:50.553 22:32:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:15:50.553 22:32:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:15:50.553 22:32:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@59 -- # 
nvmftestinit 00:15:50.553 22:32:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:50.553 22:32:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:50.553 22:32:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:50.553 22:32:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:50.553 22:32:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:50.553 22:32:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:50.553 22:32:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:50.553 22:32:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:50.553 22:32:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:50.553 22:32:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:50.553 22:32:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@285 -- # xtrace_disable 00:15:50.553 22:32:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:55.826 22:32:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:55.826 22:32:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@291 -- # pci_devs=() 00:15:55.826 22:32:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:55.826 22:32:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:55.826 22:32:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:55.826 22:32:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:55.826 22:32:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:55.826 22:32:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@295 -- # net_devs=() 00:15:55.826 22:32:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:55.826 22:32:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@296 -- # e810=() 00:15:55.826 22:32:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@296 -- # local -ga e810 00:15:55.826 22:32:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@297 -- # x722=() 00:15:55.826 22:32:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@297 -- # local -ga x722 00:15:55.826 22:32:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@298 -- # mlx=() 00:15:55.826 22:32:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@298 -- # local -ga mlx 00:15:55.826 22:32:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:55.826 22:32:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:55.826 22:32:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:55.826 22:32:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:55.826 22:32:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:55.826 22:32:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:55.826 22:32:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:55.826 22:32:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:55.826 22:32:18 
nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:55.826 22:32:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:55.826 22:32:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:55.826 22:32:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:55.826 22:32:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:55.826 22:32:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:55.826 22:32:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:55.826 22:32:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:55.826 22:32:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:55.826 22:32:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:55.826 22:32:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:15:55.826 Found 0000:86:00.0 (0x8086 - 0x159b) 00:15:55.826 22:32:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:55.826 22:32:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:55.826 22:32:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:55.826 22:32:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:55.826 22:32:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:55.826 22:32:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:55.826 22:32:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:15:55.826 Found 0000:86:00.1 (0x8086 - 0x159b) 00:15:55.826 22:32:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:55.826 22:32:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:55.826 22:32:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:55.826 22:32:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:55.826 22:32:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:55.826 22:32:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:55.826 22:32:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:55.826 22:32:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:55.826 22:32:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:55.826 22:32:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:55.826 22:32:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:55.827 22:32:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:55.827 22:32:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:55.827 22:32:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:55.827 22:32:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:55.827 22:32:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: 
cvl_0_0' 00:15:55.827 Found net devices under 0000:86:00.0: cvl_0_0 00:15:55.827 22:32:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:55.827 22:32:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:55.827 22:32:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:55.827 22:32:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:55.827 22:32:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:55.827 22:32:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:55.827 22:32:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:55.827 22:32:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:55.827 22:32:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:15:55.827 Found net devices under 0000:86:00.1: cvl_0_1 00:15:55.827 22:32:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:55.827 22:32:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:55.827 22:32:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # is_hw=yes 00:15:55.827 22:32:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:55.827 22:32:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:55.827 22:32:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:55.827 22:32:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:55.827 22:32:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:55.827 22:32:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:55.827 22:32:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:55.827 22:32:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:55.827 22:32:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:55.827 22:32:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:55.827 22:32:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:55.827 22:32:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:55.827 22:32:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:55.827 22:32:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:55.827 22:32:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:55.827 22:32:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:55.827 22:32:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:55.827 22:32:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:55.827 22:32:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:55.827 22:32:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:55.827 22:32:19 nvmf_tcp.nvmf_auth_target 
-- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:55.827 22:32:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:55.827 22:32:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:55.827 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:55.827 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.211 ms 00:15:55.827 00:15:55.827 --- 10.0.0.2 ping statistics --- 00:15:55.827 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:55.827 rtt min/avg/max/mdev = 0.211/0.211/0.211/0.000 ms 00:15:55.827 22:32:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:55.827 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:55.827 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.236 ms 00:15:55.827 00:15:55.827 --- 10.0.0.1 ping statistics --- 00:15:55.827 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:55.827 rtt min/avg/max/mdev = 0.236/0.236/0.236/0.000 ms 00:15:55.827 22:32:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:55.827 22:32:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@422 -- # return 0 00:15:55.827 22:32:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:55.827 22:32:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:55.827 22:32:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:55.827 22:32:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:55.827 22:32:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:55.827 22:32:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:55.827 22:32:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:55.827 22:32:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@60 -- # nvmfappstart -L nvmf_auth 00:15:55.827 22:32:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:55.827 22:32:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:55.827 22:32:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:55.827 22:32:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=4191612 00:15:55.827 22:32:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 4191612 00:15:55.827 22:32:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 4191612 ']' 00:15:55.827 22:32:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:55.827 22:32:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:55.827 22:32:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
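The nvmftestinit block above detects the two e810 ports (0000:86:00.0/1, driver ice), moves the first one into a private network namespace, and verifies reachability in both directions before starting the target. A minimal sketch of the same plumbing, assuming the interface names cvl_0_0 (target side) and cvl_0_1 (initiator side) seen in this run:

ip netns add cvl_0_0_ns_spdk                # namespace that will own the target port
ip link set cvl_0_0 netns cvl_0_0_ns_spdk   # move the first port into it
ip addr add 10.0.0.1/24 dev cvl_0_1         # initiator address, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP
ping -c 1 10.0.0.2                          # root namespace -> target namespace
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

Isolating the target port this way lets the suite push real NVMe/TCP traffic across physical NICs on a single host without address conflicts.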
00:15:55.827 22:32:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:55.827 22:32:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:15:55.827 22:32:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:56.394 22:32:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:56.394 22:32:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:15:56.394 22:32:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:56.394 22:32:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:56.394 22:32:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:56.394 22:32:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:56.394 22:32:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@62 -- # hostpid=4191857 00:15:56.394 22:32:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:15:56.394 22:32:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@64 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:15:56.394 22:32:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key null 48 00:15:56.394 22:32:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:15:56.394 22:32:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:56.394 22:32:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:15:56.394 22:32:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=null 00:15:56.394 22:32:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:15:56.394 22:32:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:15:56.394 22:32:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=c840f02756c7ba82cfecd4f9431dae3686059fde5d0bdf8e 00:15:56.394 22:32:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:15:56.394 22:32:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.yCZ 00:15:56.394 22:32:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key c840f02756c7ba82cfecd4f9431dae3686059fde5d0bdf8e 0 00:15:56.394 22:32:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 c840f02756c7ba82cfecd4f9431dae3686059fde5d0bdf8e 0 00:15:56.394 22:32:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:15:56.395 22:32:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:15:56.395 22:32:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=c840f02756c7ba82cfecd4f9431dae3686059fde5d0bdf8e 00:15:56.395 22:32:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=0 00:15:56.395 22:32:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:15:56.395 22:32:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.yCZ 00:15:56.395 22:32:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.yCZ 00:15:56.395 22:32:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # 
keys[0]=/tmp/spdk.key-null.yCZ 00:15:56.395 22:32:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key sha512 64 00:15:56.395 22:32:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:15:56.395 22:32:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:56.395 22:32:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:15:56.395 22:32:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:15:56.395 22:32:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:15:56.395 22:32:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:15:56.395 22:32:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=9dcb6895b1ca62654fa79220252235162ffdc36f21e1750bb0a9d2c51416724d 00:15:56.395 22:32:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:15:56.395 22:32:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.MBm 00:15:56.395 22:32:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 9dcb6895b1ca62654fa79220252235162ffdc36f21e1750bb0a9d2c51416724d 3 00:15:56.395 22:32:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 9dcb6895b1ca62654fa79220252235162ffdc36f21e1750bb0a9d2c51416724d 3 00:15:56.395 22:32:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:15:56.395 22:32:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:15:56.395 22:32:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=9dcb6895b1ca62654fa79220252235162ffdc36f21e1750bb0a9d2c51416724d 00:15:56.395 22:32:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:15:56.395 22:32:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:15:56.395 22:32:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.MBm 00:15:56.395 22:32:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.MBm 00:15:56.395 22:32:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # ckeys[0]=/tmp/spdk.key-sha512.MBm 00:15:56.395 22:32:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha256 32 00:15:56.395 22:32:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:15:56.395 22:32:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:56.395 22:32:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:15:56.395 22:32:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:15:56.395 22:32:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:15:56.395 22:32:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:15:56.395 22:32:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=a89d1b7ca212e64c59a0b889ee9d833a 00:15:56.395 22:32:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:15:56.395 22:32:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.7mT 00:15:56.395 22:32:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key a89d1b7ca212e64c59a0b889ee9d833a 1 00:15:56.395 22:32:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 a89d1b7ca212e64c59a0b889ee9d833a 1 00:15:56.395 22:32:20 
nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:15:56.395 22:32:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:15:56.395 22:32:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=a89d1b7ca212e64c59a0b889ee9d833a 00:15:56.395 22:32:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:15:56.395 22:32:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:15:56.395 22:32:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.7mT 00:15:56.395 22:32:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.7mT 00:15:56.395 22:32:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # keys[1]=/tmp/spdk.key-sha256.7mT 00:15:56.395 22:32:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha384 48 00:15:56.395 22:32:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:15:56.395 22:32:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:56.395 22:32:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:15:56.395 22:32:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:15:56.395 22:32:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:15:56.395 22:32:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:15:56.395 22:32:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=a4ef662da62537c81f4b822d6de080980d1885d3455bc7ee 00:15:56.395 22:32:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:15:56.395 22:32:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.8WY 00:15:56.395 22:32:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key a4ef662da62537c81f4b822d6de080980d1885d3455bc7ee 2 00:15:56.395 22:32:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 a4ef662da62537c81f4b822d6de080980d1885d3455bc7ee 2 00:15:56.395 22:32:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:15:56.395 22:32:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:15:56.395 22:32:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=a4ef662da62537c81f4b822d6de080980d1885d3455bc7ee 00:15:56.395 22:32:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:15:56.395 22:32:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:15:56.395 22:32:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.8WY 00:15:56.395 22:32:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.8WY 00:15:56.395 22:32:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # ckeys[1]=/tmp/spdk.key-sha384.8WY 00:15:56.395 22:32:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha384 48 00:15:56.395 22:32:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:15:56.395 22:32:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:56.395 22:32:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:15:56.395 22:32:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:15:56.395 22:32:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:15:56.395 22:32:20 
nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:15:56.395 22:32:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=6779d4cc7f3c2d1c5e23a5eca08543090c00e116a8e14886 00:15:56.395 22:32:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:15:56.395 22:32:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.lHB 00:15:56.395 22:32:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 6779d4cc7f3c2d1c5e23a5eca08543090c00e116a8e14886 2 00:15:56.395 22:32:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 6779d4cc7f3c2d1c5e23a5eca08543090c00e116a8e14886 2 00:15:56.395 22:32:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:15:56.395 22:32:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:15:56.395 22:32:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=6779d4cc7f3c2d1c5e23a5eca08543090c00e116a8e14886 00:15:56.395 22:32:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:15:56.395 22:32:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:15:56.656 22:32:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.lHB 00:15:56.656 22:32:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.lHB 00:15:56.656 22:32:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # keys[2]=/tmp/spdk.key-sha384.lHB 00:15:56.656 22:32:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha256 32 00:15:56.656 22:32:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:15:56.656 22:32:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:56.656 22:32:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:15:56.656 22:32:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:15:56.656 22:32:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:15:56.656 22:32:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:15:56.656 22:32:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=c20f54a385f66e70008e4569b87d784e 00:15:56.656 22:32:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:15:56.656 22:32:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.pBx 00:15:56.656 22:32:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key c20f54a385f66e70008e4569b87d784e 1 00:15:56.656 22:32:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 c20f54a385f66e70008e4569b87d784e 1 00:15:56.656 22:32:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:15:56.656 22:32:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:15:56.656 22:32:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=c20f54a385f66e70008e4569b87d784e 00:15:56.656 22:32:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:15:56.656 22:32:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:15:56.656 22:32:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.pBx 00:15:56.656 22:32:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.pBx 00:15:56.656 22:32:20 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@69 -- # ckeys[2]=/tmp/spdk.key-sha256.pBx 00:15:56.656 22:32:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # gen_dhchap_key sha512 64 00:15:56.656 22:32:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:15:56.656 22:32:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:56.656 22:32:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:15:56.656 22:32:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:15:56.656 22:32:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:15:56.656 22:32:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:15:56.656 22:32:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=6d74e887219a001d08105a9603357eb13528c2eb8ef0bfedb4f8b05277077e84 00:15:56.656 22:32:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:15:56.656 22:32:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.PrT 00:15:56.656 22:32:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 6d74e887219a001d08105a9603357eb13528c2eb8ef0bfedb4f8b05277077e84 3 00:15:56.656 22:32:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 6d74e887219a001d08105a9603357eb13528c2eb8ef0bfedb4f8b05277077e84 3 00:15:56.656 22:32:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:15:56.656 22:32:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:15:56.656 22:32:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=6d74e887219a001d08105a9603357eb13528c2eb8ef0bfedb4f8b05277077e84 00:15:56.656 22:32:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:15:56.656 22:32:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:15:56.656 22:32:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.PrT 00:15:56.656 22:32:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.PrT 00:15:56.656 22:32:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # keys[3]=/tmp/spdk.key-sha512.PrT 00:15:56.656 22:32:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # ckeys[3]= 00:15:56.656 22:32:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@72 -- # waitforlisten 4191612 00:15:56.656 22:32:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 4191612 ']' 00:15:56.656 22:32:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:56.656 22:32:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:56.656 22:32:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:56.656 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
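The four gen_dhchap_key calls above each draw entropy with xxd and wrap it into the DHHC-1 secret representation: DHHC-1:<digest id>:<base64 of the secret bytes plus their little-endian CRC-32>:, with digest ids 00/01/02/03 for null/sha256/sha384/sha512. A minimal sketch of that transformation, with python3 standing in for the inline python step in nvmf/common.sh; note the suite keeps the hex text itself as the secret bytes (the DHHC-1:00:Yzg0... value passed to nvme connect further down base64-decodes back to the c840f027... string):

key=$(xxd -p -c0 -l 24 /dev/urandom)   # 24 random bytes -> 48 hex chars
python3 - "$key" 0 <<'EOF'             # second arg: digest id (0 = null)
import base64, sys, zlib
secret = sys.argv[1].encode()                    # ASCII hex string as the secret
crc = zlib.crc32(secret).to_bytes(4, "little")   # integrity tag per the format
print("DHHC-1:{:02x}:{}:".format(int(sys.argv[2]),
      base64.b64encode(secret + crc).decode()))
EOF

The resulting file is then chmod 0600'd and handed to keyring_file_add_key on both the target and the host sockets, as the loop above shows.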
00:15:56.656 22:32:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:56.656 22:32:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:56.914 22:32:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:56.914 22:32:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:15:56.914 22:32:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@73 -- # waitforlisten 4191857 /var/tmp/host.sock 00:15:56.914 22:32:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 4191857 ']' 00:15:56.914 22:32:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/host.sock 00:15:56.914 22:32:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:56.914 22:32:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:15:56.914 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:15:56.914 22:32:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:56.914 22:32:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:56.914 22:32:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:56.914 22:32:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:15:56.914 22:32:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 00:15:56.914 22:32:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:56.914 22:32:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:57.173 22:32:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:57.173 22:32:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:15:57.173 22:32:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.yCZ 00:15:57.173 22:32:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:57.173 22:32:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:57.173 22:32:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:57.173 22:32:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.yCZ 00:15:57.173 22:32:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.yCZ 00:15:57.173 22:32:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha512.MBm ]] 00:15:57.173 22:32:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.MBm 00:15:57.173 22:32:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:57.173 22:32:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:57.173 22:32:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:57.173 22:32:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.MBm 00:15:57.173 22:32:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.MBm 00:15:57.432 22:32:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:15:57.432 22:32:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.7mT 00:15:57.432 22:32:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:57.432 22:32:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:57.432 22:32:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:57.432 22:32:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.7mT 00:15:57.432 22:32:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.7mT 00:15:57.690 22:32:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha384.8WY ]] 00:15:57.690 22:32:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.8WY 00:15:57.690 22:32:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:57.690 22:32:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:57.690 22:32:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:57.690 22:32:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.8WY 00:15:57.690 22:32:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.8WY 00:15:57.690 22:32:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:15:57.690 22:32:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.lHB 00:15:57.690 22:32:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:57.690 22:32:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:57.690 22:32:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:57.690 22:32:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.lHB 00:15:57.690 22:32:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.lHB 00:15:57.949 22:32:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha256.pBx ]] 00:15:57.949 22:32:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.pBx 00:15:57.949 22:32:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:57.949 22:32:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:57.949 22:32:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:57.950 22:32:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.pBx 00:15:57.950 22:32:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 
/tmp/spdk.key-sha256.pBx 00:15:58.208 22:32:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:15:58.208 22:32:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.PrT 00:15:58.208 22:32:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:58.208 22:32:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:58.208 22:32:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:58.208 22:32:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.PrT 00:15:58.208 22:32:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.PrT 00:15:58.466 22:32:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n '' ]] 00:15:58.466 22:32:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:15:58.466 22:32:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:15:58.466 22:32:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:58.466 22:32:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:58.466 22:32:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:58.466 22:32:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 0 00:15:58.466 22:32:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:58.466 22:32:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:58.466 22:32:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:15:58.466 22:32:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:58.466 22:32:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:58.466 22:32:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:58.466 22:32:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:58.466 22:32:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:58.466 22:32:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:58.466 22:32:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:58.466 22:32:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:58.740 00:15:58.740 22:32:22 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:58.740 22:32:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:58.740 22:32:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:58.998 22:32:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:58.998 22:32:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:58.998 22:32:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:58.998 22:32:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:58.998 22:32:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:58.998 22:32:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:58.998 { 00:15:58.998 "cntlid": 1, 00:15:58.998 "qid": 0, 00:15:58.998 "state": "enabled", 00:15:58.998 "thread": "nvmf_tgt_poll_group_000", 00:15:58.998 "listen_address": { 00:15:58.998 "trtype": "TCP", 00:15:58.998 "adrfam": "IPv4", 00:15:58.998 "traddr": "10.0.0.2", 00:15:58.998 "trsvcid": "4420" 00:15:58.998 }, 00:15:58.998 "peer_address": { 00:15:58.998 "trtype": "TCP", 00:15:58.998 "adrfam": "IPv4", 00:15:58.998 "traddr": "10.0.0.1", 00:15:58.998 "trsvcid": "54068" 00:15:58.998 }, 00:15:58.998 "auth": { 00:15:58.998 "state": "completed", 00:15:58.998 "digest": "sha256", 00:15:58.998 "dhgroup": "null" 00:15:58.998 } 00:15:58.998 } 00:15:58.998 ]' 00:15:58.998 22:32:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:58.998 22:32:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:58.998 22:32:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:58.998 22:32:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:15:58.998 22:32:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:58.998 22:32:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:58.998 22:32:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:58.998 22:32:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:59.256 22:32:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:Yzg0MGYwMjc1NmM3YmE4MmNmZWNkNGY5NDMxZGFlMzY4NjA1OWZkZTVkMGJkZjhlTjKeaQ==: --dhchap-ctrl-secret DHHC-1:03:OWRjYjY4OTViMWNhNjI2NTRmYTc5MjIwMjUyMjM1MTYyZmZkYzM2ZjIxZTE3NTBiYjBhOWQyYzUxNDE2NzI0ZOpAtHY=: 00:15:59.827 22:32:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:59.827 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:59.827 22:32:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:59.827 22:32:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:59.827 22:32:23 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:59.827 22:32:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:59.827 22:32:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:59.827 22:32:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:59.827 22:32:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:00.084 22:32:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 1 00:16:00.084 22:32:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:00.084 22:32:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:00.084 22:32:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:16:00.084 22:32:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:00.084 22:32:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:00.084 22:32:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:00.084 22:32:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:00.084 22:32:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:00.084 22:32:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:00.084 22:32:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:00.084 22:32:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:00.084 00:16:00.084 22:32:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:00.084 22:32:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:00.084 22:32:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:00.341 22:32:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:00.341 22:32:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:00.341 22:32:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:00.341 22:32:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:00.341 22:32:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:00.341 22:32:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:00.341 { 00:16:00.341 "cntlid": 3, 00:16:00.341 "qid": 0, 00:16:00.341 
"state": "enabled", 00:16:00.341 "thread": "nvmf_tgt_poll_group_000", 00:16:00.341 "listen_address": { 00:16:00.341 "trtype": "TCP", 00:16:00.341 "adrfam": "IPv4", 00:16:00.341 "traddr": "10.0.0.2", 00:16:00.341 "trsvcid": "4420" 00:16:00.341 }, 00:16:00.341 "peer_address": { 00:16:00.341 "trtype": "TCP", 00:16:00.341 "adrfam": "IPv4", 00:16:00.341 "traddr": "10.0.0.1", 00:16:00.341 "trsvcid": "54092" 00:16:00.341 }, 00:16:00.341 "auth": { 00:16:00.341 "state": "completed", 00:16:00.341 "digest": "sha256", 00:16:00.341 "dhgroup": "null" 00:16:00.341 } 00:16:00.341 } 00:16:00.341 ]' 00:16:00.341 22:32:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:00.341 22:32:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:00.341 22:32:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:00.598 22:32:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:16:00.598 22:32:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:00.598 22:32:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:00.598 22:32:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:00.598 22:32:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:00.598 22:32:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:YTg5ZDFiN2NhMjEyZTY0YzU5YTBiODg5ZWU5ZDgzM2HK8D1h: --dhchap-ctrl-secret DHHC-1:02:YTRlZjY2MmRhNjI1MzdjODFmNGI4MjJkNmRlMDgwOTgwZDE4ODVkMzQ1NWJjN2VlAeePrQ==: 00:16:01.163 22:32:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:01.163 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:01.163 22:32:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:01.163 22:32:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:01.163 22:32:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:01.163 22:32:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:01.164 22:32:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:01.164 22:32:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:01.164 22:32:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:01.421 22:32:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 2 00:16:01.421 22:32:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:01.421 22:32:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:01.421 22:32:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:16:01.421 22:32:25 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:01.421 22:32:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:01.421 22:32:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:01.421 22:32:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:01.421 22:32:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:01.421 22:32:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:01.421 22:32:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:01.422 22:32:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:01.681 00:16:01.681 22:32:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:01.681 22:32:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:01.681 22:32:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:01.965 22:32:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:01.965 22:32:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:01.965 22:32:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:01.965 22:32:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:01.965 22:32:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:01.965 22:32:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:01.965 { 00:16:01.965 "cntlid": 5, 00:16:01.965 "qid": 0, 00:16:01.965 "state": "enabled", 00:16:01.965 "thread": "nvmf_tgt_poll_group_000", 00:16:01.965 "listen_address": { 00:16:01.965 "trtype": "TCP", 00:16:01.965 "adrfam": "IPv4", 00:16:01.965 "traddr": "10.0.0.2", 00:16:01.965 "trsvcid": "4420" 00:16:01.965 }, 00:16:01.965 "peer_address": { 00:16:01.965 "trtype": "TCP", 00:16:01.965 "adrfam": "IPv4", 00:16:01.965 "traddr": "10.0.0.1", 00:16:01.965 "trsvcid": "54116" 00:16:01.965 }, 00:16:01.965 "auth": { 00:16:01.965 "state": "completed", 00:16:01.965 "digest": "sha256", 00:16:01.965 "dhgroup": "null" 00:16:01.965 } 00:16:01.965 } 00:16:01.965 ]' 00:16:01.965 22:32:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:01.965 22:32:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:01.965 22:32:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:01.965 22:32:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:16:01.965 22:32:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r 
'.[0].auth.state' 00:16:01.965 22:32:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:01.965 22:32:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:01.965 22:32:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:02.232 22:32:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:Njc3OWQ0Y2M3ZjNjMmQxYzVlMjNhNWVjYTA4NTQzMDkwYzAwZTExNmE4ZTE0ODg2gtNdCQ==: --dhchap-ctrl-secret DHHC-1:01:YzIwZjU0YTM4NWY2NmU3MDAwOGU0NTY5Yjg3ZDc4NGUMHgTg: 00:16:02.797 22:32:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:02.797 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:02.797 22:32:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:02.797 22:32:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:02.797 22:32:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:02.797 22:32:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:02.797 22:32:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:02.797 22:32:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:02.797 22:32:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:02.797 22:32:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 3 00:16:02.797 22:32:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:02.797 22:32:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:02.797 22:32:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:16:02.797 22:32:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:02.797 22:32:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:02.797 22:32:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:16:02.797 22:32:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:02.797 22:32:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:02.797 22:32:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:03.055 22:32:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:03.055 22:32:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:03.055 00:16:03.055 22:32:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:03.055 22:32:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:03.055 22:32:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:03.313 22:32:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:03.313 22:32:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:03.313 22:32:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:03.313 22:32:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:03.313 22:32:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:03.313 22:32:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:03.313 { 00:16:03.313 "cntlid": 7, 00:16:03.313 "qid": 0, 00:16:03.313 "state": "enabled", 00:16:03.313 "thread": "nvmf_tgt_poll_group_000", 00:16:03.313 "listen_address": { 00:16:03.313 "trtype": "TCP", 00:16:03.313 "adrfam": "IPv4", 00:16:03.313 "traddr": "10.0.0.2", 00:16:03.313 "trsvcid": "4420" 00:16:03.313 }, 00:16:03.313 "peer_address": { 00:16:03.313 "trtype": "TCP", 00:16:03.313 "adrfam": "IPv4", 00:16:03.313 "traddr": "10.0.0.1", 00:16:03.313 "trsvcid": "42540" 00:16:03.313 }, 00:16:03.313 "auth": { 00:16:03.313 "state": "completed", 00:16:03.313 "digest": "sha256", 00:16:03.313 "dhgroup": "null" 00:16:03.313 } 00:16:03.313 } 00:16:03.313 ]' 00:16:03.313 22:32:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:03.313 22:32:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:03.313 22:32:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:03.313 22:32:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:16:03.313 22:32:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:03.572 22:32:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:03.572 22:32:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:03.572 22:32:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:03.572 22:32:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:NmQ3NGU4ODcyMTlhMDAxZDA4MTA1YTk2MDMzNTdlYjEzNTI4YzJlYjhlZjBiZmVkYjRmOGIwNTI3NzA3N2U4NOUOBPk=: 00:16:04.138 22:32:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:04.138 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:04.138 22:32:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:04.138 22:32:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:04.138 22:32:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:04.138 22:32:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:04.138 22:32:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:04.138 22:32:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:04.138 22:32:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:04.138 22:32:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:04.396 22:32:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 0 00:16:04.396 22:32:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:04.396 22:32:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:04.396 22:32:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:16:04.396 22:32:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:04.396 22:32:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:04.396 22:32:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:04.396 22:32:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:04.396 22:32:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:04.396 22:32:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:04.396 22:32:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:04.396 22:32:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:04.654 00:16:04.654 22:32:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:04.654 22:32:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:04.654 22:32:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:04.913 22:32:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:04.913 22:32:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:04.913 22:32:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 
-- # xtrace_disable 00:16:04.913 22:32:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:04.913 22:32:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:04.913 22:32:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:04.913 { 00:16:04.913 "cntlid": 9, 00:16:04.913 "qid": 0, 00:16:04.913 "state": "enabled", 00:16:04.913 "thread": "nvmf_tgt_poll_group_000", 00:16:04.913 "listen_address": { 00:16:04.913 "trtype": "TCP", 00:16:04.913 "adrfam": "IPv4", 00:16:04.913 "traddr": "10.0.0.2", 00:16:04.913 "trsvcid": "4420" 00:16:04.913 }, 00:16:04.913 "peer_address": { 00:16:04.913 "trtype": "TCP", 00:16:04.913 "adrfam": "IPv4", 00:16:04.913 "traddr": "10.0.0.1", 00:16:04.913 "trsvcid": "42546" 00:16:04.913 }, 00:16:04.913 "auth": { 00:16:04.913 "state": "completed", 00:16:04.913 "digest": "sha256", 00:16:04.913 "dhgroup": "ffdhe2048" 00:16:04.913 } 00:16:04.913 } 00:16:04.913 ]' 00:16:04.913 22:32:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:04.913 22:32:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:04.913 22:32:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:04.913 22:32:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:04.913 22:32:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:04.913 22:32:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:04.913 22:32:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:04.913 22:32:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:05.171 22:32:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:Yzg0MGYwMjc1NmM3YmE4MmNmZWNkNGY5NDMxZGFlMzY4NjA1OWZkZTVkMGJkZjhlTjKeaQ==: --dhchap-ctrl-secret DHHC-1:03:OWRjYjY4OTViMWNhNjI2NTRmYTc5MjIwMjUyMjM1MTYyZmZkYzM2ZjIxZTE3NTBiYjBhOWQyYzUxNDE2NzI0ZOpAtHY=: 00:16:05.736 22:32:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:05.736 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:05.736 22:32:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:05.736 22:32:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:05.736 22:32:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:05.736 22:32:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:05.736 22:32:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:05.736 22:32:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:05.736 22:32:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe2048 00:16:05.736 22:32:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 1 00:16:05.736 22:32:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:05.736 22:32:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:05.736 22:32:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:16:05.736 22:32:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:05.736 22:32:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:05.736 22:32:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:05.736 22:32:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:05.736 22:32:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:05.736 22:32:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:05.736 22:32:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:05.736 22:32:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:05.994 00:16:05.994 22:32:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:05.994 22:32:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:05.994 22:32:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:06.252 22:32:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:06.252 22:32:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:06.252 22:32:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:06.252 22:32:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:06.252 22:32:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:06.252 22:32:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:06.252 { 00:16:06.252 "cntlid": 11, 00:16:06.252 "qid": 0, 00:16:06.252 "state": "enabled", 00:16:06.252 "thread": "nvmf_tgt_poll_group_000", 00:16:06.252 "listen_address": { 00:16:06.252 "trtype": "TCP", 00:16:06.252 "adrfam": "IPv4", 00:16:06.252 "traddr": "10.0.0.2", 00:16:06.252 "trsvcid": "4420" 00:16:06.252 }, 00:16:06.252 "peer_address": { 00:16:06.252 "trtype": "TCP", 00:16:06.252 "adrfam": "IPv4", 00:16:06.252 "traddr": "10.0.0.1", 00:16:06.252 "trsvcid": "42560" 00:16:06.252 }, 00:16:06.252 "auth": { 00:16:06.252 "state": "completed", 00:16:06.252 "digest": "sha256", 00:16:06.252 "dhgroup": "ffdhe2048" 00:16:06.252 } 00:16:06.252 } 00:16:06.252 ]' 00:16:06.252 
22:32:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:06.252 22:32:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:06.252 22:32:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:06.252 22:32:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:06.252 22:32:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:06.510 22:32:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:06.510 22:32:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:06.510 22:32:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:06.510 22:32:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:YTg5ZDFiN2NhMjEyZTY0YzU5YTBiODg5ZWU5ZDgzM2HK8D1h: --dhchap-ctrl-secret DHHC-1:02:YTRlZjY2MmRhNjI1MzdjODFmNGI4MjJkNmRlMDgwOTgwZDE4ODVkMzQ1NWJjN2VlAeePrQ==: 00:16:07.075 22:32:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:07.075 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:07.075 22:32:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:07.075 22:32:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:07.075 22:32:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:07.075 22:32:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:07.075 22:32:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:07.075 22:32:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:07.075 22:32:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:07.333 22:32:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 2 00:16:07.333 22:32:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:07.333 22:32:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:07.333 22:32:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:16:07.333 22:32:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:07.333 22:32:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:07.333 22:32:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:07.333 22:32:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:07.333 22:32:31 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:16:07.333 22:32:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:07.333 22:32:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:07.333 22:32:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:07.591 00:16:07.591 22:32:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:07.591 22:32:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:07.591 22:32:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:07.850 22:32:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:07.850 22:32:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:07.850 22:32:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:07.850 22:32:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:07.850 22:32:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:07.850 22:32:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:07.850 { 00:16:07.850 "cntlid": 13, 00:16:07.850 "qid": 0, 00:16:07.850 "state": "enabled", 00:16:07.850 "thread": "nvmf_tgt_poll_group_000", 00:16:07.850 "listen_address": { 00:16:07.850 "trtype": "TCP", 00:16:07.850 "adrfam": "IPv4", 00:16:07.850 "traddr": "10.0.0.2", 00:16:07.850 "trsvcid": "4420" 00:16:07.850 }, 00:16:07.850 "peer_address": { 00:16:07.850 "trtype": "TCP", 00:16:07.850 "adrfam": "IPv4", 00:16:07.850 "traddr": "10.0.0.1", 00:16:07.850 "trsvcid": "42584" 00:16:07.850 }, 00:16:07.850 "auth": { 00:16:07.850 "state": "completed", 00:16:07.850 "digest": "sha256", 00:16:07.850 "dhgroup": "ffdhe2048" 00:16:07.850 } 00:16:07.850 } 00:16:07.850 ]' 00:16:07.850 22:32:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:07.850 22:32:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:07.850 22:32:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:07.850 22:32:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:07.850 22:32:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:07.850 22:32:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:07.850 22:32:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:07.850 22:32:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:08.108 22:32:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n 
nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:Njc3OWQ0Y2M3ZjNjMmQxYzVlMjNhNWVjYTA4NTQzMDkwYzAwZTExNmE4ZTE0ODg2gtNdCQ==: --dhchap-ctrl-secret DHHC-1:01:YzIwZjU0YTM4NWY2NmU3MDAwOGU0NTY5Yjg3ZDc4NGUMHgTg: 00:16:08.674 22:32:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:08.674 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:08.674 22:32:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:08.674 22:32:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:08.674 22:32:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:08.674 22:32:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:08.674 22:32:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:08.674 22:32:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:08.674 22:32:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:08.674 22:32:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 3 00:16:08.674 22:32:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:08.674 22:32:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:08.675 22:32:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:16:08.675 22:32:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:08.675 22:32:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:08.675 22:32:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:16:08.675 22:32:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:08.675 22:32:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:08.675 22:32:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:08.675 22:32:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:08.675 22:32:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:08.933 00:16:08.933 22:32:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:08.933 22:32:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:08.933 22:32:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:09.192 22:32:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:09.192 22:32:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:09.192 22:32:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:09.192 22:32:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:09.192 22:32:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:09.192 22:32:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:09.192 { 00:16:09.192 "cntlid": 15, 00:16:09.192 "qid": 0, 00:16:09.192 "state": "enabled", 00:16:09.192 "thread": "nvmf_tgt_poll_group_000", 00:16:09.192 "listen_address": { 00:16:09.192 "trtype": "TCP", 00:16:09.192 "adrfam": "IPv4", 00:16:09.192 "traddr": "10.0.0.2", 00:16:09.192 "trsvcid": "4420" 00:16:09.192 }, 00:16:09.192 "peer_address": { 00:16:09.192 "trtype": "TCP", 00:16:09.192 "adrfam": "IPv4", 00:16:09.192 "traddr": "10.0.0.1", 00:16:09.192 "trsvcid": "42594" 00:16:09.192 }, 00:16:09.192 "auth": { 00:16:09.192 "state": "completed", 00:16:09.192 "digest": "sha256", 00:16:09.192 "dhgroup": "ffdhe2048" 00:16:09.192 } 00:16:09.192 } 00:16:09.192 ]' 00:16:09.192 22:32:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:09.192 22:32:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:09.192 22:32:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:09.192 22:32:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:09.192 22:32:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:09.462 22:32:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:09.462 22:32:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:09.462 22:32:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:09.462 22:32:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:NmQ3NGU4ODcyMTlhMDAxZDA4MTA1YTk2MDMzNTdlYjEzNTI4YzJlYjhlZjBiZmVkYjRmOGIwNTI3NzA3N2U4NOUOBPk=: 00:16:10.032 22:32:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:10.032 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:10.032 22:32:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:10.032 22:32:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:10.032 22:32:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:10.032 22:32:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:10.032 22:32:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:10.032 22:32:33 nvmf_tcp.nvmf_auth_target 
-- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:10.032 22:32:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:10.032 22:32:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:10.290 22:32:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 0 00:16:10.290 22:32:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:10.290 22:32:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:10.290 22:32:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:16:10.290 22:32:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:10.290 22:32:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:10.290 22:32:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:10.290 22:32:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:10.290 22:32:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:10.290 22:32:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:10.290 22:32:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:10.290 22:32:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:10.548 00:16:10.548 22:32:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:10.548 22:32:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:10.548 22:32:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:10.548 22:32:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:10.548 22:32:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:10.548 22:32:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:10.548 22:32:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:10.806 22:32:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:10.806 22:32:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:10.806 { 00:16:10.806 "cntlid": 17, 00:16:10.806 "qid": 0, 00:16:10.806 "state": "enabled", 00:16:10.806 "thread": "nvmf_tgt_poll_group_000", 00:16:10.806 "listen_address": { 00:16:10.806 "trtype": "TCP", 00:16:10.806 "adrfam": "IPv4", 00:16:10.806 "traddr": 
"10.0.0.2", 00:16:10.806 "trsvcid": "4420" 00:16:10.806 }, 00:16:10.806 "peer_address": { 00:16:10.806 "trtype": "TCP", 00:16:10.806 "adrfam": "IPv4", 00:16:10.806 "traddr": "10.0.0.1", 00:16:10.806 "trsvcid": "42636" 00:16:10.806 }, 00:16:10.806 "auth": { 00:16:10.806 "state": "completed", 00:16:10.806 "digest": "sha256", 00:16:10.806 "dhgroup": "ffdhe3072" 00:16:10.806 } 00:16:10.806 } 00:16:10.806 ]' 00:16:10.806 22:32:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:10.806 22:32:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:10.806 22:32:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:10.806 22:32:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:10.806 22:32:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:10.806 22:32:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:10.806 22:32:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:10.806 22:32:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:11.064 22:32:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:Yzg0MGYwMjc1NmM3YmE4MmNmZWNkNGY5NDMxZGFlMzY4NjA1OWZkZTVkMGJkZjhlTjKeaQ==: --dhchap-ctrl-secret DHHC-1:03:OWRjYjY4OTViMWNhNjI2NTRmYTc5MjIwMjUyMjM1MTYyZmZkYzM2ZjIxZTE3NTBiYjBhOWQyYzUxNDE2NzI0ZOpAtHY=: 00:16:11.630 22:32:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:11.630 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:11.630 22:32:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:11.630 22:32:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:11.630 22:32:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:11.630 22:32:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:11.630 22:32:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:11.630 22:32:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:11.630 22:32:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:11.630 22:32:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 1 00:16:11.630 22:32:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:11.630 22:32:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:11.630 22:32:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:16:11.630 22:32:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:11.630 22:32:35 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:11.630 22:32:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:11.630 22:32:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:11.630 22:32:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:11.630 22:32:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:11.630 22:32:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:11.630 22:32:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:11.888 00:16:11.888 22:32:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:11.888 22:32:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:11.888 22:32:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:12.146 22:32:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:12.146 22:32:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:12.146 22:32:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:12.146 22:32:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:12.146 22:32:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:12.146 22:32:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:12.146 { 00:16:12.146 "cntlid": 19, 00:16:12.146 "qid": 0, 00:16:12.146 "state": "enabled", 00:16:12.146 "thread": "nvmf_tgt_poll_group_000", 00:16:12.146 "listen_address": { 00:16:12.146 "trtype": "TCP", 00:16:12.146 "adrfam": "IPv4", 00:16:12.146 "traddr": "10.0.0.2", 00:16:12.146 "trsvcid": "4420" 00:16:12.146 }, 00:16:12.146 "peer_address": { 00:16:12.146 "trtype": "TCP", 00:16:12.146 "adrfam": "IPv4", 00:16:12.146 "traddr": "10.0.0.1", 00:16:12.146 "trsvcid": "42666" 00:16:12.146 }, 00:16:12.146 "auth": { 00:16:12.146 "state": "completed", 00:16:12.146 "digest": "sha256", 00:16:12.146 "dhgroup": "ffdhe3072" 00:16:12.146 } 00:16:12.146 } 00:16:12.146 ]' 00:16:12.146 22:32:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:12.146 22:32:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:12.146 22:32:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:12.146 22:32:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:12.146 22:32:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:12.146 22:32:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # 
[[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:12.146 22:32:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:12.146 22:32:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:12.405 22:32:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:YTg5ZDFiN2NhMjEyZTY0YzU5YTBiODg5ZWU5ZDgzM2HK8D1h: --dhchap-ctrl-secret DHHC-1:02:YTRlZjY2MmRhNjI1MzdjODFmNGI4MjJkNmRlMDgwOTgwZDE4ODVkMzQ1NWJjN2VlAeePrQ==: 00:16:12.970 22:32:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:12.970 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:12.970 22:32:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:12.970 22:32:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:12.970 22:32:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:12.970 22:32:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:12.970 22:32:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:12.970 22:32:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:12.970 22:32:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:13.227 22:32:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 2 00:16:13.227 22:32:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:13.227 22:32:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:13.227 22:32:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:16:13.227 22:32:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:13.227 22:32:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:13.227 22:32:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:13.227 22:32:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:13.227 22:32:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:13.227 22:32:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:13.227 22:32:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:13.227 22:32:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:13.485 00:16:13.485 22:32:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:13.485 22:32:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:13.485 22:32:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:13.741 22:32:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:13.741 22:32:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:13.741 22:32:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:13.741 22:32:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:13.741 22:32:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:13.741 22:32:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:13.741 { 00:16:13.741 "cntlid": 21, 00:16:13.741 "qid": 0, 00:16:13.741 "state": "enabled", 00:16:13.741 "thread": "nvmf_tgt_poll_group_000", 00:16:13.741 "listen_address": { 00:16:13.741 "trtype": "TCP", 00:16:13.741 "adrfam": "IPv4", 00:16:13.741 "traddr": "10.0.0.2", 00:16:13.741 "trsvcid": "4420" 00:16:13.742 }, 00:16:13.742 "peer_address": { 00:16:13.742 "trtype": "TCP", 00:16:13.742 "adrfam": "IPv4", 00:16:13.742 "traddr": "10.0.0.1", 00:16:13.742 "trsvcid": "40292" 00:16:13.742 }, 00:16:13.742 "auth": { 00:16:13.742 "state": "completed", 00:16:13.742 "digest": "sha256", 00:16:13.742 "dhgroup": "ffdhe3072" 00:16:13.742 } 00:16:13.742 } 00:16:13.742 ]' 00:16:13.742 22:32:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:13.742 22:32:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:13.742 22:32:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:13.742 22:32:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:13.742 22:32:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:13.742 22:32:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:13.742 22:32:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:13.742 22:32:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:13.997 22:32:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:Njc3OWQ0Y2M3ZjNjMmQxYzVlMjNhNWVjYTA4NTQzMDkwYzAwZTExNmE4ZTE0ODg2gtNdCQ==: --dhchap-ctrl-secret DHHC-1:01:YzIwZjU0YTM4NWY2NmU3MDAwOGU0NTY5Yjg3ZDc4NGUMHgTg: 00:16:14.561 22:32:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:14.561 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
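[Annotation] The passes logged above all follow the same shape: for each digest/dhgroup pair, auth.sh walks key IDs 0 through 3, provisions the key on both sides, verifies the negotiated parameters, and tears down. Distilled from the commands in this trace, one pass reduces to roughly the sketch below. This is a readability reconstruction, not test output: SUBNQN, HOSTNQN, HOSTID, DIGEST, DHGROUP, KEYID and the DHHC-1 secret variables are placeholders standing in for the literal values above, rpc.py is shortened from its full workspace path, and host-side calls use -s /var/tmp/host.sock exactly as in the trace, while the target-side rpc_cmd calls are assumed to go to the default RPC socket.

# Host side: restrict the initiator to the digest/dhgroup under test
rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests "$DIGEST" --dhchap-dhgroups "$DHGROUP"
# Target side: authorize the host NQN with its DH-HMAC-CHAP key; the controller
# key is optional (the trace's ${ckeys[$3]:+...} expansion omits it for key3)
rpc.py nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" --dhchap-key "key$KEYID" --dhchap-ctrlr-key "ckey$KEYID"
# Host side: attach a controller, which triggers in-band authentication
rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
    -a 10.0.0.2 -s 4420 -q "$HOSTNQN" -n "$SUBNQN" \
    --dhchap-key "key$KEYID" --dhchap-ctrlr-key "ckey$KEYID"
# Target side: confirm the qpair completed authentication with the expected parameters
rpc.py nvmf_subsystem_get_qpairs "$SUBNQN" | jq -r '.[0].auth'
# Tear down, then repeat the connect through the kernel initiator using raw DHHC-1 secrets
rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
nvme connect -t tcp -a 10.0.0.2 -n "$SUBNQN" -i 1 -q "$HOSTNQN" --hostid "$HOSTID" \
    --dhchap-secret "$DHCHAP_SECRET" --dhchap-ctrl-secret "$DHCHAP_CTRL_SECRET"
nvme disconnect -n "$SUBNQN"
rpc.py nvmf_subsystem_remove_host "$SUBNQN" "$HOSTNQN"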
00:16:14.561 22:32:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:14.561 22:32:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:14.561 22:32:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:14.561 22:32:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:14.561 22:32:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:14.561 22:32:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:14.561 22:32:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:14.819 22:32:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 3 00:16:14.819 22:32:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:14.819 22:32:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:14.819 22:32:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:16:14.819 22:32:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:14.819 22:32:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:14.819 22:32:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:16:14.819 22:32:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:14.819 22:32:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:14.819 22:32:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:14.819 22:32:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:14.819 22:32:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:15.076 00:16:15.076 22:32:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:15.076 22:32:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:15.076 22:32:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:15.076 22:32:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:15.076 22:32:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:15.076 22:32:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:15.076 22:32:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 
-- # set +x 00:16:15.076 22:32:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:15.076 22:32:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:15.076 { 00:16:15.076 "cntlid": 23, 00:16:15.076 "qid": 0, 00:16:15.076 "state": "enabled", 00:16:15.076 "thread": "nvmf_tgt_poll_group_000", 00:16:15.076 "listen_address": { 00:16:15.076 "trtype": "TCP", 00:16:15.076 "adrfam": "IPv4", 00:16:15.076 "traddr": "10.0.0.2", 00:16:15.076 "trsvcid": "4420" 00:16:15.076 }, 00:16:15.076 "peer_address": { 00:16:15.076 "trtype": "TCP", 00:16:15.076 "adrfam": "IPv4", 00:16:15.076 "traddr": "10.0.0.1", 00:16:15.076 "trsvcid": "40322" 00:16:15.076 }, 00:16:15.076 "auth": { 00:16:15.076 "state": "completed", 00:16:15.076 "digest": "sha256", 00:16:15.076 "dhgroup": "ffdhe3072" 00:16:15.076 } 00:16:15.076 } 00:16:15.076 ]' 00:16:15.076 22:32:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:15.076 22:32:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:15.334 22:32:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:15.334 22:32:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:15.334 22:32:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:15.334 22:32:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:15.334 22:32:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:15.334 22:32:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:15.598 22:32:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:NmQ3NGU4ODcyMTlhMDAxZDA4MTA1YTk2MDMzNTdlYjEzNTI4YzJlYjhlZjBiZmVkYjRmOGIwNTI3NzA3N2U4NOUOBPk=: 00:16:16.183 22:32:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:16.183 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:16.183 22:32:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:16.183 22:32:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:16.183 22:32:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:16.183 22:32:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:16.183 22:32:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:16.183 22:32:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:16.183 22:32:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:16.183 22:32:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:16.183 22:32:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # 
00:16:16.183 22:32:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:16:16.183 22:32:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256
00:16:16.183 22:32:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096
00:16:16.183 22:32:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0
00:16:16.183 22:32:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:16:16.183 22:32:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:16:16.183 22:32:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:16.183 22:32:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:16.183 22:32:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:16.183 22:32:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:16:16.183 22:32:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:16:16.443
00:16:16.443 22:32:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:16:16.443 22:32:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:16:16.443 22:32:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:16:16.700 22:32:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:16:16.700 22:32:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:16:16.700 22:32:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:16.700 22:32:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:16.700 22:32:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:16.700 22:32:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:16:16.700 {
00:16:16.700 "cntlid": 25,
00:16:16.700 "qid": 0,
00:16:16.700 "state": "enabled",
00:16:16.700 "thread": "nvmf_tgt_poll_group_000",
00:16:16.700 "listen_address": {
00:16:16.700 "trtype": "TCP",
00:16:16.700 "adrfam": "IPv4",
00:16:16.700 "traddr": "10.0.0.2",
00:16:16.700 "trsvcid": "4420"
00:16:16.700 },
00:16:16.700 "peer_address": {
00:16:16.700 "trtype": "TCP",
00:16:16.700 "adrfam": "IPv4",
00:16:16.700 "traddr": "10.0.0.1",
00:16:16.700 "trsvcid": "40354"
00:16:16.700 },
00:16:16.700 "auth": {
00:16:16.700 "state": "completed",
00:16:16.700 "digest": "sha256",
00:16:16.700 "dhgroup": "ffdhe4096"
00:16:16.700 }
00:16:16.700 }
00:16:16.700 ]'
00:16:16.700 22:32:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:16:16.700 22:32:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:16:16.700 22:32:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:16:16.700 22:32:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]]
00:16:16.700 22:32:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:16:16.700 22:32:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:16:16.700 22:32:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:16:16.700 22:32:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:16:16.958 22:32:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:Yzg0MGYwMjc1NmM3YmE4MmNmZWNkNGY5NDMxZGFlMzY4NjA1OWZkZTVkMGJkZjhlTjKeaQ==: --dhchap-ctrl-secret DHHC-1:03:OWRjYjY4OTViMWNhNjI2NTRmYTc5MjIwMjUyMjM1MTYyZmZkYzM2ZjIxZTE3NTBiYjBhOWQyYzUxNDE2NzI0ZOpAtHY=:
00:16:17.522 22:32:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:16:17.522 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:16:17.522 22:32:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
00:16:17.522 22:32:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:17.522 22:32:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:17.522 22:32:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:17.522 22:32:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:16:17.522 22:32:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:16:17.522 22:32:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:16:17.779 22:32:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 1
00:16:17.779 22:32:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:16:17.779 22:32:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256
00:16:17.779 22:32:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096
00:16:17.779 22:32:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1
00:16:17.779 22:32:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:16:17.779 22:32:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:16:17.779 22:32:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:17.779 22:32:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:17.779 22:32:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:17.779 22:32:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:16:17.779 22:32:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:16:18.036
00:16:18.036 22:32:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:16:18.036 22:32:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:16:18.036 22:32:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:16:18.293 22:32:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:16:18.293 22:32:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:16:18.293 22:32:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:18.293 22:32:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:18.293 22:32:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:18.293 22:32:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:16:18.293 {
00:16:18.293 "cntlid": 27,
00:16:18.293 "qid": 0,
00:16:18.293 "state": "enabled",
00:16:18.293 "thread": "nvmf_tgt_poll_group_000",
00:16:18.293 "listen_address": {
00:16:18.293 "trtype": "TCP",
00:16:18.293 "adrfam": "IPv4",
00:16:18.293 "traddr": "10.0.0.2",
00:16:18.293 "trsvcid": "4420"
00:16:18.293 },
00:16:18.293 "peer_address": {
00:16:18.293 "trtype": "TCP",
00:16:18.293 "adrfam": "IPv4",
00:16:18.293 "traddr": "10.0.0.1",
00:16:18.293 "trsvcid": "40396"
00:16:18.293 },
00:16:18.293 "auth": {
00:16:18.293 "state": "completed",
00:16:18.293 "digest": "sha256",
00:16:18.293 "dhgroup": "ffdhe4096"
00:16:18.293 }
00:16:18.293 }
00:16:18.293 ]'
00:16:18.293 22:32:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:16:18.293 22:32:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:16:18.293 22:32:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:16:18.293 22:32:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]]
00:16:18.293 22:32:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:16:18.293 22:32:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:16:18.293 22:32:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:16:18.293 22:32:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:16:18.549 22:32:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:YTg5ZDFiN2NhMjEyZTY0YzU5YTBiODg5ZWU5ZDgzM2HK8D1h: --dhchap-ctrl-secret DHHC-1:02:YTRlZjY2MmRhNjI1MzdjODFmNGI4MjJkNmRlMDgwOTgwZDE4ODVkMzQ1NWJjN2VlAeePrQ==:
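
The nvme connect entry just above repeats the handshake from the kernel initiator, passing the DH-HMAC-CHAP secrets directly on the command line. A hedged sketch of that invocation follows, with placeholders instead of the log's real DHHC-1 strings; it assumes an nvme-cli build and kernel with NVMe in-band authentication support.

# Kernel-initiator connect with bidirectional DH-HMAC-CHAP (sketch; secrets are placeholders)
nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
    -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 \
    --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 \
    --dhchap-secret 'DHHC-1:01:<base64 host secret>:' \
    --dhchap-ctrl-secret 'DHHC-1:02:<base64 controller secret>:'   # omit for unidirectional auth
nvme disconnect -n nqn.2024-03.io.spdk:cnode0
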
00:16:19.111 22:32:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:16:19.111 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:16:19.111 22:32:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
00:16:19.111 22:32:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:19.111 22:32:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:19.111 22:32:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:19.111 22:32:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:16:19.111 22:32:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:16:19.111 22:32:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:16:19.374 22:32:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 2
00:16:19.374 22:32:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:16:19.374 22:32:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256
00:16:19.374 22:32:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096
00:16:19.374 22:32:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2
00:16:19.374 22:32:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:16:19.374 22:32:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:16:19.374 22:32:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:19.374 22:32:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:19.374 22:32:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:19.374 22:32:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:16:19.374 22:32:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:16:19.634
00:16:19.634 22:32:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:16:19.634 22:32:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:16:19.634 22:32:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:16:19.634 22:32:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:16:19.634 22:32:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:16:19.634 22:32:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:19.634 22:32:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:19.634 22:32:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:19.634 22:32:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:16:19.634 {
00:16:19.634 "cntlid": 29,
00:16:19.634 "qid": 0,
00:16:19.634 "state": "enabled",
00:16:19.634 "thread": "nvmf_tgt_poll_group_000",
00:16:19.634 "listen_address": {
00:16:19.634 "trtype": "TCP",
00:16:19.634 "adrfam": "IPv4",
00:16:19.634 "traddr": "10.0.0.2",
00:16:19.634 "trsvcid": "4420"
00:16:19.634 },
00:16:19.634 "peer_address": {
00:16:19.634 "trtype": "TCP",
00:16:19.634 "adrfam": "IPv4",
00:16:19.634 "traddr": "10.0.0.1",
00:16:19.634 "trsvcid": "40424"
00:16:19.634 },
00:16:19.634 "auth": {
00:16:19.634 "state": "completed",
00:16:19.634 "digest": "sha256",
00:16:19.634 "dhgroup": "ffdhe4096"
00:16:19.634 }
00:16:19.634 }
00:16:19.634 ]'
00:16:19.634 22:32:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:16:19.634 22:32:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:16:19.634 22:32:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:16:19.891 22:32:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]]
00:16:19.891 22:32:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:16:19.891 22:32:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:16:19.892 22:32:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:16:19.892 22:32:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:16:20.148 22:32:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:Njc3OWQ0Y2M3ZjNjMmQxYzVlMjNhNWVjYTA4NTQzMDkwYzAwZTExNmE4ZTE0ODg2gtNdCQ==: --dhchap-ctrl-secret DHHC-1:01:YzIwZjU0YTM4NWY2NmU3MDAwOGU0NTY5Yjg3ZDc4NGUMHgTg:
00:16:20.457 22:32:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:16:20.457 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:16:20.457 22:32:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
00:16:20.457 22:32:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:20.457 22:32:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:20.457 22:32:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:20.457 22:32:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
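
The DHHC-1:NN:...: strings passed via --dhchap-secret above are the textual representation of DH-HMAC-CHAP secrets: "DHHC-1" tags the format, the two-digit field identifies the hash the key is bound to (00 meaning an untransformed secret, 01/02/03 meaning SHA-256/SHA-384/SHA-512), and the final field is base64-encoded key material terminated by ':'. Recent nvme-cli releases can generate such secrets; the flag spelling below is per nvme-cli 2.x and is an assumption for older versions.

# Generate a SHA-256-bound DH-HMAC-CHAP secret for this host NQN (sketch)
nvme gen-dhchap-key --hmac=1 \
    --nqn nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
# prints something of the form: DHHC-1:01:<base64 key material>:
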
00:16:20.457 22:32:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:16:20.457 22:32:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:16:20.715 22:32:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 3
00:16:20.715 22:32:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:16:20.715 22:32:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256
00:16:20.715 22:32:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096
00:16:20.715 22:32:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3
00:16:20.715 22:32:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:16:20.715 22:32:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3
00:16:20.715 22:32:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:20.715 22:32:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:20.715 22:32:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:20.715 22:32:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:16:20.715 22:32:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:16:20.973
00:16:20.973 22:32:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:16:20.973 22:32:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:16:20.973 22:32:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:16:21.231 22:32:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:16:21.231 22:32:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:16:21.231 22:32:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:21.231 22:32:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:21.231 22:32:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:21.231 22:32:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:16:21.231 {
00:16:21.231 "cntlid": 31,
00:16:21.231 "qid": 0,
00:16:21.231 "state": "enabled",
00:16:21.231 "thread": "nvmf_tgt_poll_group_000",
00:16:21.231 "listen_address": {
00:16:21.231 "trtype": "TCP",
00:16:21.231 "adrfam": "IPv4",
00:16:21.231 "traddr": "10.0.0.2",
00:16:21.231 "trsvcid": "4420"
00:16:21.231 },
00:16:21.231 "peer_address": {
00:16:21.231 "trtype": "TCP",
00:16:21.231 "adrfam": "IPv4",
00:16:21.231 "traddr": "10.0.0.1",
00:16:21.231 "trsvcid": "40450"
00:16:21.231 },
00:16:21.231 "auth": {
00:16:21.231 "state": "completed",
00:16:21.231 "digest": "sha256",
00:16:21.231 "dhgroup": "ffdhe4096"
00:16:21.231 }
00:16:21.231 }
00:16:21.231 ]'
00:16:21.231 22:32:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:16:21.231 22:32:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:16:21.231 22:32:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:16:21.231 22:32:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]]
00:16:21.231 22:32:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:16:21.231 22:32:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:16:21.231 22:32:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:16:21.231 22:32:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:16:21.488 22:32:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:NmQ3NGU4ODcyMTlhMDAxZDA4MTA1YTk2MDMzNTdlYjEzNTI4YzJlYjhlZjBiZmVkYjRmOGIwNTI3NzA3N2U4NOUOBPk=:
00:16:22.061 22:32:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:16:22.061 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:16:22.061 22:32:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
00:16:22.061 22:32:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:22.061 22:32:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:22.061 22:32:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:22.061 22:32:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}"
00:16:22.061 22:32:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:16:22.061 22:32:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:16:22.061 22:32:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:16:22.318 22:32:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 0
00:16:22.318 22:32:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:16:22.318 22:32:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256
00:16:22.318 22:32:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144
00:16:22.318 22:32:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0
00:16:22.318 22:32:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
"ckey$3"}) 00:16:22.318 22:32:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:22.318 22:32:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:22.318 22:32:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:22.318 22:32:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:22.318 22:32:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:22.318 22:32:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:22.575 00:16:22.575 22:32:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:22.575 22:32:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:22.575 22:32:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:22.833 22:32:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:22.833 22:32:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:22.833 22:32:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:22.833 22:32:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:22.833 22:32:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:22.833 22:32:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:22.833 { 00:16:22.833 "cntlid": 33, 00:16:22.833 "qid": 0, 00:16:22.833 "state": "enabled", 00:16:22.833 "thread": "nvmf_tgt_poll_group_000", 00:16:22.833 "listen_address": { 00:16:22.833 "trtype": "TCP", 00:16:22.833 "adrfam": "IPv4", 00:16:22.833 "traddr": "10.0.0.2", 00:16:22.833 "trsvcid": "4420" 00:16:22.833 }, 00:16:22.833 "peer_address": { 00:16:22.833 "trtype": "TCP", 00:16:22.833 "adrfam": "IPv4", 00:16:22.833 "traddr": "10.0.0.1", 00:16:22.833 "trsvcid": "40476" 00:16:22.833 }, 00:16:22.833 "auth": { 00:16:22.833 "state": "completed", 00:16:22.833 "digest": "sha256", 00:16:22.833 "dhgroup": "ffdhe6144" 00:16:22.833 } 00:16:22.833 } 00:16:22.833 ]' 00:16:22.833 22:32:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:22.833 22:32:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:22.833 22:32:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:22.833 22:32:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:22.833 22:32:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:22.833 22:32:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:22.833 22:32:46 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:22.833 22:32:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:23.092 22:32:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:Yzg0MGYwMjc1NmM3YmE4MmNmZWNkNGY5NDMxZGFlMzY4NjA1OWZkZTVkMGJkZjhlTjKeaQ==: --dhchap-ctrl-secret DHHC-1:03:OWRjYjY4OTViMWNhNjI2NTRmYTc5MjIwMjUyMjM1MTYyZmZkYzM2ZjIxZTE3NTBiYjBhOWQyYzUxNDE2NzI0ZOpAtHY=: 00:16:23.655 22:32:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:23.655 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:23.655 22:32:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:23.655 22:32:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:23.655 22:32:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:23.655 22:32:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:23.655 22:32:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:23.655 22:32:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:23.655 22:32:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:23.912 22:32:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 1 00:16:23.912 22:32:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:23.912 22:32:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:23.912 22:32:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:16:23.912 22:32:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:23.912 22:32:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:23.912 22:32:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:23.912 22:32:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:23.912 22:32:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:23.912 22:32:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:23.912 22:32:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:23.912 22:32:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:24.169 00:16:24.169 22:32:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:24.169 22:32:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:24.169 22:32:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:24.427 22:32:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:24.427 22:32:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:24.427 22:32:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:24.427 22:32:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:24.427 22:32:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:24.427 22:32:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:24.427 { 00:16:24.427 "cntlid": 35, 00:16:24.427 "qid": 0, 00:16:24.427 "state": "enabled", 00:16:24.427 "thread": "nvmf_tgt_poll_group_000", 00:16:24.427 "listen_address": { 00:16:24.427 "trtype": "TCP", 00:16:24.427 "adrfam": "IPv4", 00:16:24.427 "traddr": "10.0.0.2", 00:16:24.427 "trsvcid": "4420" 00:16:24.427 }, 00:16:24.427 "peer_address": { 00:16:24.427 "trtype": "TCP", 00:16:24.427 "adrfam": "IPv4", 00:16:24.427 "traddr": "10.0.0.1", 00:16:24.427 "trsvcid": "46434" 00:16:24.427 }, 00:16:24.427 "auth": { 00:16:24.427 "state": "completed", 00:16:24.427 "digest": "sha256", 00:16:24.427 "dhgroup": "ffdhe6144" 00:16:24.427 } 00:16:24.427 } 00:16:24.427 ]' 00:16:24.427 22:32:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:24.427 22:32:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:24.427 22:32:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:24.427 22:32:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:24.427 22:32:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:24.427 22:32:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:24.427 22:32:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:24.427 22:32:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:24.684 22:32:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:YTg5ZDFiN2NhMjEyZTY0YzU5YTBiODg5ZWU5ZDgzM2HK8D1h: --dhchap-ctrl-secret DHHC-1:02:YTRlZjY2MmRhNjI1MzdjODFmNGI4MjJkNmRlMDgwOTgwZDE4ODVkMzQ1NWJjN2VlAeePrQ==: 00:16:25.249 22:32:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:25.249 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:25.249 22:32:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 
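
The @46-@48 checks in each round parse the nvmf_subsystem_get_qpairs output with jq and compare it against the expected digest, DH group, and auth state. A condensed sketch of that verification (rpc_cmd here stands in for an rpc.py call against the target socket, as in the test harness; the expected values match the ffdhe6144 rounds above):

# Assert the authenticated qpair's parameters (sketch mirroring auth.sh@45-48)
qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
[[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == "sha256" ]]
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == "ffdhe6144" ]]
[[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == "completed" ]]
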
00:16:25.249 22:32:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:25.249 22:32:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:25.249 22:32:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:25.249 22:32:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:16:25.249 22:32:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:16:25.249 22:32:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:16:25.507 22:32:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 2
00:16:25.507 22:32:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:16:25.507 22:32:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256
00:16:25.507 22:32:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144
00:16:25.507 22:32:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2
00:16:25.507 22:32:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:16:25.507 22:32:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:16:25.507 22:32:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:25.507 22:32:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:25.507 22:32:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:25.507 22:32:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:16:25.507 22:32:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:16:25.765
00:16:25.765 22:32:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:16:25.765 22:32:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:16:25.765 22:32:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:16:26.023 22:32:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:16:26.023 22:32:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:16:26.023 22:32:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:26.023 22:32:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:26.023 22:32:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:26.023 22:32:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:16:26.023 {
00:16:26.023 "cntlid": 37,
00:16:26.023 "qid": 0,
00:16:26.023 "state": "enabled",
00:16:26.023 "thread": "nvmf_tgt_poll_group_000",
00:16:26.023 "listen_address": {
00:16:26.023 "trtype": "TCP",
00:16:26.023 "adrfam": "IPv4",
00:16:26.023 "traddr": "10.0.0.2",
00:16:26.023 "trsvcid": "4420"
00:16:26.023 },
00:16:26.023 "peer_address": {
00:16:26.023 "trtype": "TCP",
00:16:26.023 "adrfam": "IPv4",
00:16:26.023 "traddr": "10.0.0.1",
00:16:26.023 "trsvcid": "46464"
00:16:26.023 },
00:16:26.023 "auth": {
00:16:26.023 "state": "completed",
00:16:26.023 "digest": "sha256",
00:16:26.023 "dhgroup": "ffdhe6144"
00:16:26.024 }
00:16:26.024 }
00:16:26.024 ]'
00:16:26.024 22:32:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:16:26.024 22:32:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:16:26.024 22:32:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:16:26.024 22:32:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]]
00:16:26.024 22:32:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:16:26.024 22:32:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:16:26.024 22:32:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:16:26.024 22:32:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:16:26.282 22:32:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:Njc3OWQ0Y2M3ZjNjMmQxYzVlMjNhNWVjYTA4NTQzMDkwYzAwZTExNmE4ZTE0ODg2gtNdCQ==: --dhchap-ctrl-secret DHHC-1:01:YzIwZjU0YTM4NWY2NmU3MDAwOGU0NTY5Yjg3ZDc4NGUMHgTg:
00:16:26.848 22:32:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:16:26.848 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:16:26.848 22:32:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
00:16:26.848 22:32:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:26.848 22:32:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:26.848 22:32:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:26.848 22:32:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:16:26.848 22:32:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:16:26.848 22:32:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:16:27.106 22:32:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 3
00:16:27.106 22:32:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:16:27.106 22:32:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256
00:16:27.106 22:32:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144
00:16:27.106 22:32:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3
00:16:27.106 22:32:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:16:27.106 22:32:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3
00:16:27.106 22:32:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:27.106 22:32:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:27.106 22:32:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:27.106 22:32:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:16:27.106 22:32:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:16:27.364
00:16:27.364 22:32:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:16:27.364 22:32:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:16:27.364 22:32:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:16:27.622 22:32:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:16:27.622 22:32:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:16:27.622 22:32:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:27.622 22:32:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:27.622 22:32:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:27.622 22:32:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:16:27.622 {
00:16:27.622 "cntlid": 39,
00:16:27.622 "qid": 0,
00:16:27.622 "state": "enabled",
00:16:27.622 "thread": "nvmf_tgt_poll_group_000",
00:16:27.622 "listen_address": {
00:16:27.622 "trtype": "TCP",
00:16:27.622 "adrfam": "IPv4",
00:16:27.622 "traddr": "10.0.0.2",
00:16:27.622 "trsvcid": "4420"
00:16:27.622 },
00:16:27.622 "peer_address": {
00:16:27.622 "trtype": "TCP",
00:16:27.622 "adrfam": "IPv4",
00:16:27.622 "traddr": "10.0.0.1",
00:16:27.622 "trsvcid": "46498"
00:16:27.622 },
00:16:27.622 "auth": {
00:16:27.622 "state": "completed",
00:16:27.622 "digest": "sha256",
00:16:27.622 "dhgroup": "ffdhe6144"
00:16:27.622 }
00:16:27.622 }
00:16:27.622 ]'
00:16:27.622 22:32:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:16:27.622 22:32:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:16:27.622 22:32:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:16:27.622 22:32:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]]
00:16:27.622 22:32:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:16:27.622 22:32:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:16:27.622 22:32:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:16:27.622 22:32:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:16:27.880 22:32:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:NmQ3NGU4ODcyMTlhMDAxZDA4MTA1YTk2MDMzNTdlYjEzNTI4YzJlYjhlZjBiZmVkYjRmOGIwNTI3NzA3N2U4NOUOBPk=:
00:16:28.444 22:32:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:16:28.444 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:16:28.444 22:32:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
00:16:28.444 22:32:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:28.444 22:32:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:28.444 22:32:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:28.444 22:32:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}"
00:16:28.444 22:32:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:16:28.444 22:32:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:16:28.444 22:32:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:16:28.702 22:32:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 0
00:16:28.702 22:32:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:16:28.702 22:32:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256
00:16:28.702 22:32:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192
00:16:28.702 22:32:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0
00:16:28.702 22:32:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:16:28.702 22:32:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:16:28.702 22:32:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:28.702 22:32:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:28.702 22:32:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:28.702 22:32:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:16:28.702 22:32:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
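
Note the asymmetry across rounds: keys 0-2 are always exercised with --dhchap-ctrlr-key ckeyN (bidirectional authentication, where the host verifies the controller as well), while the key3 rounds pass only --dhchap-key (unidirectional). That comes from the auth.sh@37 expansion, which emits the controller-key argument only when a controller key is defined for that index. Paraphrased as a standalone sketch ($keyid stands in for the script's positional $3):

# auth.sh@37 idiom, paraphrased: the argument appears only if ckeys[keyid] is non-empty
ckey=(${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"})
rpc_cmd nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key "key$keyid" "${ckey[@]}"
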
00:16:29.268
00:16:29.268 22:32:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:16:29.268 22:32:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:16:29.268 22:32:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:16:29.268 22:32:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:16:29.268 22:32:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:16:29.268 22:32:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:29.268 22:32:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:29.268 22:32:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:29.268 22:32:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:16:29.268 {
00:16:29.268 "cntlid": 41,
00:16:29.268 "qid": 0,
00:16:29.268 "state": "enabled",
00:16:29.268 "thread": "nvmf_tgt_poll_group_000",
00:16:29.268 "listen_address": {
00:16:29.268 "trtype": "TCP",
00:16:29.268 "adrfam": "IPv4",
00:16:29.268 "traddr": "10.0.0.2",
00:16:29.268 "trsvcid": "4420"
00:16:29.268 },
00:16:29.268 "peer_address": {
00:16:29.268 "trtype": "TCP",
00:16:29.268 "adrfam": "IPv4",
00:16:29.268 "traddr": "10.0.0.1",
00:16:29.268 "trsvcid": "46518"
00:16:29.268 },
00:16:29.268 "auth": {
00:16:29.268 "state": "completed",
00:16:29.268 "digest": "sha256",
00:16:29.268 "dhgroup": "ffdhe8192"
00:16:29.268 }
00:16:29.268 }
00:16:29.268 ]'
00:16:29.268 22:32:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:16:29.527 22:32:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:16:29.527 22:32:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:16:29.527 22:32:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:16:29.527 22:32:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:16:29.527 22:32:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:16:29.527 22:32:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:16:29.527 22:32:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:16:29.784 22:32:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:Yzg0MGYwMjc1NmM3YmE4MmNmZWNkNGY5NDMxZGFlMzY4NjA1OWZkZTVkMGJkZjhlTjKeaQ==: --dhchap-ctrl-secret DHHC-1:03:OWRjYjY4OTViMWNhNjI2NTRmYTc5MjIwMjUyMjM1MTYyZmZkYzM2ZjIxZTE3NTBiYjBhOWQyYzUxNDE2NzI0ZOpAtHY=:
00:16:30.414 22:32:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:16:30.414 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:16:30.414 22:32:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
00:16:30.414 22:32:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:30.414 22:32:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:30.414 22:32:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:30.414 22:32:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:16:30.414 22:32:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:16:30.414 22:32:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:16:30.414 22:32:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 1
00:16:30.414 22:32:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:16:30.414 22:32:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256
00:16:30.414 22:32:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192
00:16:30.414 22:32:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1
00:16:30.414 22:32:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:16:30.414 22:32:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:16:30.414 22:32:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:30.414 22:32:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:30.414 22:32:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:30.414 22:32:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:16:30.414 22:32:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:16:30.980
00:16:30.980 22:32:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:16:30.980 22:32:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:16:30.980 22:32:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:16:30.980 22:32:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:16:30.980 22:32:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:16:30.980 22:32:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:31.238 22:32:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:31.238 22:32:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:31.238 22:32:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:16:31.238 {
00:16:31.238 "cntlid": 43,
00:16:31.238 "qid": 0,
00:16:31.238 "state": "enabled",
00:16:31.238 "thread": "nvmf_tgt_poll_group_000",
00:16:31.238 "listen_address": {
00:16:31.238 "trtype": "TCP",
00:16:31.238 "adrfam": "IPv4",
00:16:31.238 "traddr": "10.0.0.2",
00:16:31.238 "trsvcid": "4420"
00:16:31.238 },
00:16:31.238 "peer_address": {
00:16:31.238 "trtype": "TCP",
00:16:31.238 "adrfam": "IPv4",
00:16:31.238 "traddr": "10.0.0.1",
00:16:31.238 "trsvcid": "46532"
00:16:31.238 },
00:16:31.238 "auth": {
00:16:31.238 "state": "completed",
00:16:31.238 "digest": "sha256",
00:16:31.238 "dhgroup": "ffdhe8192"
00:16:31.238 }
00:16:31.238 }
00:16:31.238 ]'
00:16:31.238 22:32:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:16:31.238 22:32:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:16:31.238 22:32:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:16:31.238 22:32:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:16:31.238 22:32:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:16:31.238 22:32:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:16:31.238 22:32:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:16:31.238 22:32:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:16:31.496 22:32:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:YTg5ZDFiN2NhMjEyZTY0YzU5YTBiODg5ZWU5ZDgzM2HK8D1h: --dhchap-ctrl-secret DHHC-1:02:YTRlZjY2MmRhNjI1MzdjODFmNGI4MjJkNmRlMDgwOTgwZDE4ODVkMzQ1NWJjN2VlAeePrQ==:
00:16:32.061 22:32:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:16:32.061 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:16:32.061 22:32:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
00:16:32.061 22:32:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:32.061 22:32:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:32.061 22:32:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:32.061 22:32:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:16:32.061 22:32:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:16:32.061 22:32:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:16:32.061 22:32:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 2
00:16:32.061 22:32:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:16:32.061 22:32:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256
00:16:32.061 22:32:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192
00:16:32.061 22:32:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2
00:16:32.061 22:32:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:16:32.061 22:32:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:16:32.061 22:32:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:32.061 22:32:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:32.061 22:32:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:32.061 22:32:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:16:32.061 22:32:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:16:32.628
00:16:32.628 22:32:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:16:32.628 22:32:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:16:32.628 22:32:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:16:32.887 22:32:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:16:32.887 22:32:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:16:32.887 22:32:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:32.887 22:32:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:32.887 22:32:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:32.887 22:32:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:16:32.887 {
00:16:32.887 "cntlid": 45,
00:16:32.887 "qid": 0,
00:16:32.887 "state": "enabled",
00:16:32.887 "thread": "nvmf_tgt_poll_group_000",
00:16:32.887 "listen_address": {
00:16:32.887 "trtype": "TCP",
00:16:32.887 "adrfam": "IPv4",
00:16:32.887 "traddr": "10.0.0.2",
00:16:32.887 "trsvcid": "4420"
00:16:32.887 },
00:16:32.887 "peer_address": {
00:16:32.887 "trtype": "TCP",
00:16:32.887 "adrfam": "IPv4",
00:16:32.887 "traddr": "10.0.0.1",
00:16:32.887 "trsvcid": "46554"
00:16:32.887 },
00:16:32.887 "auth": {
00:16:32.887 "state": "completed",
00:16:32.887 "digest": "sha256",
00:16:32.887 "dhgroup": "ffdhe8192"
00:16:32.887 }
00:16:32.887 }
00:16:32.887 ]'
00:16:32.887 22:32:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:16:32.887 22:32:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:16:32.887 22:32:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:16:32.887 22:32:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:16:32.887 22:32:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:16:32.887 22:32:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:16:32.887 22:32:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:16:32.887 22:32:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:16:33.146 22:32:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:Njc3OWQ0Y2M3ZjNjMmQxYzVlMjNhNWVjYTA4NTQzMDkwYzAwZTExNmE4ZTE0ODg2gtNdCQ==: --dhchap-ctrl-secret DHHC-1:01:YzIwZjU0YTM4NWY2NmU3MDAwOGU0NTY5Yjg3ZDc4NGUMHgTg:
00:16:33.712 22:32:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:16:33.712 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:16:33.712 22:32:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
00:16:33.712 22:32:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:33.712 22:32:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:33.712 22:32:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:33.712 22:32:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:16:33.712 22:32:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:16:33.712 22:32:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:16:33.970 22:32:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 3
00:16:33.970 22:32:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:16:33.970 22:32:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256
00:16:33.970 22:32:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192
00:16:33.970 22:32:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3
00:16:33.970 22:32:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:16:33.970 22:32:57
nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:16:33.970 22:32:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:33.970 22:32:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:33.970 22:32:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:33.970 22:32:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:33.970 22:32:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:34.228 00:16:34.486 22:32:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:34.486 22:32:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:34.486 22:32:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:34.486 22:32:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:34.486 22:32:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:34.486 22:32:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:34.486 22:32:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:34.486 22:32:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:34.486 22:32:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:34.486 { 00:16:34.486 "cntlid": 47, 00:16:34.486 "qid": 0, 00:16:34.486 "state": "enabled", 00:16:34.486 "thread": "nvmf_tgt_poll_group_000", 00:16:34.486 "listen_address": { 00:16:34.486 "trtype": "TCP", 00:16:34.486 "adrfam": "IPv4", 00:16:34.486 "traddr": "10.0.0.2", 00:16:34.486 "trsvcid": "4420" 00:16:34.486 }, 00:16:34.486 "peer_address": { 00:16:34.486 "trtype": "TCP", 00:16:34.486 "adrfam": "IPv4", 00:16:34.486 "traddr": "10.0.0.1", 00:16:34.486 "trsvcid": "38404" 00:16:34.486 }, 00:16:34.486 "auth": { 00:16:34.486 "state": "completed", 00:16:34.486 "digest": "sha256", 00:16:34.486 "dhgroup": "ffdhe8192" 00:16:34.486 } 00:16:34.486 } 00:16:34.486 ]' 00:16:34.486 22:32:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:34.744 22:32:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:34.744 22:32:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:34.744 22:32:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:34.744 22:32:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:34.744 22:32:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:34.744 22:32:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:34.744 
22:32:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:34.744 22:32:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:NmQ3NGU4ODcyMTlhMDAxZDA4MTA1YTk2MDMzNTdlYjEzNTI4YzJlYjhlZjBiZmVkYjRmOGIwNTI3NzA3N2U4NOUOBPk=: 00:16:35.309 22:32:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:35.309 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:35.309 22:32:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:35.309 22:32:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:35.309 22:32:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:35.568 22:32:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:35.568 22:32:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:16:35.568 22:32:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:35.568 22:32:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:35.568 22:32:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:35.568 22:32:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:35.568 22:32:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 0 00:16:35.568 22:32:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:35.568 22:32:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:35.568 22:32:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:16:35.568 22:32:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:35.568 22:32:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:35.568 22:32:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:35.568 22:32:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:35.568 22:32:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:35.568 22:32:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:35.568 22:32:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:35.568 22:32:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:35.825 00:16:35.825 22:32:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:35.825 22:32:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:35.825 22:32:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:36.083 22:32:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:36.083 22:32:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:36.083 22:32:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:36.083 22:32:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:36.083 22:32:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:36.083 22:32:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:36.083 { 00:16:36.083 "cntlid": 49, 00:16:36.083 "qid": 0, 00:16:36.083 "state": "enabled", 00:16:36.083 "thread": "nvmf_tgt_poll_group_000", 00:16:36.083 "listen_address": { 00:16:36.083 "trtype": "TCP", 00:16:36.083 "adrfam": "IPv4", 00:16:36.083 "traddr": "10.0.0.2", 00:16:36.083 "trsvcid": "4420" 00:16:36.083 }, 00:16:36.083 "peer_address": { 00:16:36.083 "trtype": "TCP", 00:16:36.083 "adrfam": "IPv4", 00:16:36.083 "traddr": "10.0.0.1", 00:16:36.083 "trsvcid": "38426" 00:16:36.083 }, 00:16:36.083 "auth": { 00:16:36.083 "state": "completed", 00:16:36.083 "digest": "sha384", 00:16:36.083 "dhgroup": "null" 00:16:36.083 } 00:16:36.083 } 00:16:36.083 ]' 00:16:36.083 22:32:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:36.083 22:32:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:36.083 22:32:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:36.083 22:33:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:16:36.083 22:33:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:36.083 22:33:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:36.083 22:33:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:36.083 22:33:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:36.341 22:33:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:Yzg0MGYwMjc1NmM3YmE4MmNmZWNkNGY5NDMxZGFlMzY4NjA1OWZkZTVkMGJkZjhlTjKeaQ==: --dhchap-ctrl-secret DHHC-1:03:OWRjYjY4OTViMWNhNjI2NTRmYTc5MjIwMjUyMjM1MTYyZmZkYzM2ZjIxZTE3NTBiYjBhOWQyYzUxNDE2NzI0ZOpAtHY=: 00:16:36.906 22:33:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:36.906 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:36.906 22:33:00 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:36.906 22:33:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:36.906 22:33:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:36.906 22:33:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:36.906 22:33:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:36.906 22:33:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:36.906 22:33:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:37.165 22:33:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 1 00:16:37.165 22:33:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:37.165 22:33:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:37.165 22:33:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:16:37.165 22:33:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:37.165 22:33:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:37.165 22:33:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:37.165 22:33:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:37.165 22:33:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:37.165 22:33:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:37.165 22:33:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:37.165 22:33:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:37.424 00:16:37.424 22:33:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:37.424 22:33:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:37.424 22:33:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:37.681 22:33:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:37.681 22:33:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:37.681 22:33:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:37.681 22:33:01 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:16:37.681 22:33:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:37.681 22:33:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:37.681 { 00:16:37.681 "cntlid": 51, 00:16:37.681 "qid": 0, 00:16:37.681 "state": "enabled", 00:16:37.681 "thread": "nvmf_tgt_poll_group_000", 00:16:37.681 "listen_address": { 00:16:37.681 "trtype": "TCP", 00:16:37.681 "adrfam": "IPv4", 00:16:37.681 "traddr": "10.0.0.2", 00:16:37.681 "trsvcid": "4420" 00:16:37.681 }, 00:16:37.681 "peer_address": { 00:16:37.681 "trtype": "TCP", 00:16:37.681 "adrfam": "IPv4", 00:16:37.681 "traddr": "10.0.0.1", 00:16:37.681 "trsvcid": "38438" 00:16:37.681 }, 00:16:37.681 "auth": { 00:16:37.681 "state": "completed", 00:16:37.681 "digest": "sha384", 00:16:37.681 "dhgroup": "null" 00:16:37.681 } 00:16:37.681 } 00:16:37.681 ]' 00:16:37.681 22:33:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:37.681 22:33:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:37.681 22:33:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:37.681 22:33:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:16:37.681 22:33:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:37.681 22:33:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:37.681 22:33:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:37.681 22:33:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:37.938 22:33:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:YTg5ZDFiN2NhMjEyZTY0YzU5YTBiODg5ZWU5ZDgzM2HK8D1h: --dhchap-ctrl-secret DHHC-1:02:YTRlZjY2MmRhNjI1MzdjODFmNGI4MjJkNmRlMDgwOTgwZDE4ODVkMzQ1NWJjN2VlAeePrQ==: 00:16:38.503 22:33:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:38.503 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:38.503 22:33:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:38.503 22:33:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:38.503 22:33:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:38.503 22:33:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:38.503 22:33:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:38.503 22:33:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:38.503 22:33:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:38.760 22:33:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 2 00:16:38.761 22:33:02 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:38.761 22:33:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:38.761 22:33:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:16:38.761 22:33:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:38.761 22:33:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:38.761 22:33:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:38.761 22:33:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:38.761 22:33:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:38.761 22:33:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:38.761 22:33:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:38.761 22:33:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:38.761 00:16:38.761 22:33:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:38.761 22:33:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:39.018 22:33:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:39.018 22:33:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:39.018 22:33:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:39.018 22:33:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:39.018 22:33:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:39.018 22:33:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:39.018 22:33:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:39.018 { 00:16:39.018 "cntlid": 53, 00:16:39.018 "qid": 0, 00:16:39.018 "state": "enabled", 00:16:39.018 "thread": "nvmf_tgt_poll_group_000", 00:16:39.018 "listen_address": { 00:16:39.018 "trtype": "TCP", 00:16:39.018 "adrfam": "IPv4", 00:16:39.018 "traddr": "10.0.0.2", 00:16:39.018 "trsvcid": "4420" 00:16:39.018 }, 00:16:39.018 "peer_address": { 00:16:39.018 "trtype": "TCP", 00:16:39.018 "adrfam": "IPv4", 00:16:39.018 "traddr": "10.0.0.1", 00:16:39.018 "trsvcid": "38470" 00:16:39.018 }, 00:16:39.018 "auth": { 00:16:39.018 "state": "completed", 00:16:39.018 "digest": "sha384", 00:16:39.018 "dhgroup": "null" 00:16:39.018 } 00:16:39.018 } 00:16:39.018 ]' 00:16:39.018 22:33:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:39.018 22:33:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == 
\s\h\a\3\8\4 ]] 00:16:39.018 22:33:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:39.276 22:33:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:16:39.276 22:33:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:39.276 22:33:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:39.276 22:33:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:39.276 22:33:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:39.277 22:33:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:Njc3OWQ0Y2M3ZjNjMmQxYzVlMjNhNWVjYTA4NTQzMDkwYzAwZTExNmE4ZTE0ODg2gtNdCQ==: --dhchap-ctrl-secret DHHC-1:01:YzIwZjU0YTM4NWY2NmU3MDAwOGU0NTY5Yjg3ZDc4NGUMHgTg: 00:16:39.839 22:33:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:39.839 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:39.839 22:33:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:39.839 22:33:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:39.839 22:33:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:39.839 22:33:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:39.839 22:33:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:39.839 22:33:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:39.839 22:33:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:40.097 22:33:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 3 00:16:40.097 22:33:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:40.097 22:33:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:40.097 22:33:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:16:40.097 22:33:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:40.097 22:33:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:40.097 22:33:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:16:40.097 22:33:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:40.097 22:33:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:40.097 22:33:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:40.097 22:33:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:40.097 22:33:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:40.354 00:16:40.354 22:33:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:40.354 22:33:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:40.354 22:33:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:40.612 22:33:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:40.612 22:33:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:40.612 22:33:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:40.612 22:33:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:40.612 22:33:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:40.612 22:33:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:40.612 { 00:16:40.612 "cntlid": 55, 00:16:40.612 "qid": 0, 00:16:40.612 "state": "enabled", 00:16:40.612 "thread": "nvmf_tgt_poll_group_000", 00:16:40.613 "listen_address": { 00:16:40.613 "trtype": "TCP", 00:16:40.613 "adrfam": "IPv4", 00:16:40.613 "traddr": "10.0.0.2", 00:16:40.613 "trsvcid": "4420" 00:16:40.613 }, 00:16:40.613 "peer_address": { 00:16:40.613 "trtype": "TCP", 00:16:40.613 "adrfam": "IPv4", 00:16:40.613 "traddr": "10.0.0.1", 00:16:40.613 "trsvcid": "38502" 00:16:40.613 }, 00:16:40.613 "auth": { 00:16:40.613 "state": "completed", 00:16:40.613 "digest": "sha384", 00:16:40.613 "dhgroup": "null" 00:16:40.613 } 00:16:40.613 } 00:16:40.613 ]' 00:16:40.613 22:33:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:40.613 22:33:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:40.613 22:33:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:40.613 22:33:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:16:40.613 22:33:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:40.613 22:33:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:40.613 22:33:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:40.613 22:33:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:40.871 22:33:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:NmQ3NGU4ODcyMTlhMDAxZDA4MTA1YTk2MDMzNTdlYjEzNTI4YzJlYjhlZjBiZmVkYjRmOGIwNTI3NzA3N2U4NOUOBPk=: 00:16:41.436 22:33:05 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:41.436 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:41.436 22:33:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:41.436 22:33:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:41.436 22:33:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:41.436 22:33:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:41.436 22:33:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:41.436 22:33:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:41.436 22:33:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:41.436 22:33:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:41.693 22:33:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 0 00:16:41.693 22:33:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:41.694 22:33:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:41.694 22:33:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:16:41.694 22:33:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:41.694 22:33:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:41.694 22:33:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:41.694 22:33:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:41.694 22:33:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:41.694 22:33:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:41.694 22:33:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:41.694 22:33:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:41.951 00:16:41.951 22:33:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:41.951 22:33:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:41.951 22:33:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:41.951 22:33:05 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:41.951 22:33:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:41.951 22:33:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:41.951 22:33:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:41.951 22:33:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:41.951 22:33:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:41.951 { 00:16:41.951 "cntlid": 57, 00:16:41.951 "qid": 0, 00:16:41.951 "state": "enabled", 00:16:41.951 "thread": "nvmf_tgt_poll_group_000", 00:16:41.951 "listen_address": { 00:16:41.951 "trtype": "TCP", 00:16:41.951 "adrfam": "IPv4", 00:16:41.951 "traddr": "10.0.0.2", 00:16:41.951 "trsvcid": "4420" 00:16:41.951 }, 00:16:41.951 "peer_address": { 00:16:41.951 "trtype": "TCP", 00:16:41.951 "adrfam": "IPv4", 00:16:41.951 "traddr": "10.0.0.1", 00:16:41.951 "trsvcid": "38534" 00:16:41.951 }, 00:16:41.951 "auth": { 00:16:41.951 "state": "completed", 00:16:41.951 "digest": "sha384", 00:16:41.951 "dhgroup": "ffdhe2048" 00:16:41.951 } 00:16:41.951 } 00:16:41.951 ]' 00:16:42.209 22:33:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:42.209 22:33:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:42.209 22:33:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:42.209 22:33:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:42.209 22:33:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:42.209 22:33:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:42.209 22:33:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:42.209 22:33:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:42.466 22:33:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:Yzg0MGYwMjc1NmM3YmE4MmNmZWNkNGY5NDMxZGFlMzY4NjA1OWZkZTVkMGJkZjhlTjKeaQ==: --dhchap-ctrl-secret DHHC-1:03:OWRjYjY4OTViMWNhNjI2NTRmYTc5MjIwMjUyMjM1MTYyZmZkYzM2ZjIxZTE3NTBiYjBhOWQyYzUxNDE2NzI0ZOpAtHY=: 00:16:43.032 22:33:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:43.032 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:43.032 22:33:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:43.032 22:33:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:43.032 22:33:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:43.032 22:33:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:43.032 22:33:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:43.032 22:33:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:43.032 22:33:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:43.032 22:33:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 1 00:16:43.032 22:33:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:43.032 22:33:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:43.032 22:33:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:16:43.032 22:33:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:43.032 22:33:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:43.032 22:33:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:43.032 22:33:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:43.032 22:33:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:43.289 22:33:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:43.289 22:33:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:43.290 22:33:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:43.290 00:16:43.290 22:33:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:43.290 22:33:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:43.290 22:33:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:43.547 22:33:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:43.547 22:33:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:43.547 22:33:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:43.547 22:33:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:43.547 22:33:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:43.547 22:33:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:43.547 { 00:16:43.547 "cntlid": 59, 00:16:43.547 "qid": 0, 00:16:43.547 "state": "enabled", 00:16:43.547 "thread": "nvmf_tgt_poll_group_000", 00:16:43.547 "listen_address": { 00:16:43.547 "trtype": "TCP", 00:16:43.547 "adrfam": "IPv4", 00:16:43.547 "traddr": "10.0.0.2", 00:16:43.547 "trsvcid": "4420" 00:16:43.547 }, 00:16:43.547 "peer_address": { 00:16:43.547 "trtype": "TCP", 00:16:43.547 "adrfam": "IPv4", 00:16:43.547 
"traddr": "10.0.0.1", 00:16:43.547 "trsvcid": "50240" 00:16:43.547 }, 00:16:43.547 "auth": { 00:16:43.547 "state": "completed", 00:16:43.547 "digest": "sha384", 00:16:43.547 "dhgroup": "ffdhe2048" 00:16:43.547 } 00:16:43.547 } 00:16:43.547 ]' 00:16:43.547 22:33:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:43.547 22:33:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:43.547 22:33:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:43.804 22:33:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:43.804 22:33:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:43.804 22:33:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:43.804 22:33:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:43.804 22:33:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:43.804 22:33:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:YTg5ZDFiN2NhMjEyZTY0YzU5YTBiODg5ZWU5ZDgzM2HK8D1h: --dhchap-ctrl-secret DHHC-1:02:YTRlZjY2MmRhNjI1MzdjODFmNGI4MjJkNmRlMDgwOTgwZDE4ODVkMzQ1NWJjN2VlAeePrQ==: 00:16:44.369 22:33:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:44.369 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:44.369 22:33:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:44.369 22:33:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:44.369 22:33:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:44.721 22:33:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:44.721 22:33:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:44.721 22:33:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:44.721 22:33:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:44.721 22:33:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 2 00:16:44.721 22:33:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:44.722 22:33:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:44.722 22:33:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:16:44.722 22:33:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:44.722 22:33:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:44.722 22:33:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:44.722 22:33:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:44.722 22:33:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:44.722 22:33:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:44.722 22:33:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:44.722 22:33:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:44.980 00:16:44.980 22:33:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:44.980 22:33:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:44.980 22:33:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:45.238 22:33:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:45.238 22:33:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:45.238 22:33:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:45.238 22:33:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:45.238 22:33:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:45.238 22:33:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:45.238 { 00:16:45.238 "cntlid": 61, 00:16:45.238 "qid": 0, 00:16:45.238 "state": "enabled", 00:16:45.238 "thread": "nvmf_tgt_poll_group_000", 00:16:45.238 "listen_address": { 00:16:45.238 "trtype": "TCP", 00:16:45.238 "adrfam": "IPv4", 00:16:45.238 "traddr": "10.0.0.2", 00:16:45.238 "trsvcid": "4420" 00:16:45.238 }, 00:16:45.238 "peer_address": { 00:16:45.238 "trtype": "TCP", 00:16:45.238 "adrfam": "IPv4", 00:16:45.238 "traddr": "10.0.0.1", 00:16:45.238 "trsvcid": "50260" 00:16:45.238 }, 00:16:45.238 "auth": { 00:16:45.238 "state": "completed", 00:16:45.238 "digest": "sha384", 00:16:45.238 "dhgroup": "ffdhe2048" 00:16:45.238 } 00:16:45.238 } 00:16:45.238 ]' 00:16:45.238 22:33:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:45.238 22:33:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:45.238 22:33:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:45.238 22:33:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:45.238 22:33:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:45.238 22:33:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:45.238 22:33:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:45.238 22:33:09 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:45.497 22:33:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:Njc3OWQ0Y2M3ZjNjMmQxYzVlMjNhNWVjYTA4NTQzMDkwYzAwZTExNmE4ZTE0ODg2gtNdCQ==: --dhchap-ctrl-secret DHHC-1:01:YzIwZjU0YTM4NWY2NmU3MDAwOGU0NTY5Yjg3ZDc4NGUMHgTg: 00:16:46.065 22:33:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:46.065 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:46.065 22:33:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:46.065 22:33:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:46.065 22:33:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:46.065 22:33:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:46.065 22:33:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:46.065 22:33:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:46.065 22:33:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:46.065 22:33:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 3 00:16:46.065 22:33:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:46.065 22:33:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:46.065 22:33:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:16:46.065 22:33:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:46.065 22:33:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:46.065 22:33:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:16:46.065 22:33:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:46.065 22:33:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:46.323 22:33:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:46.323 22:33:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:46.323 22:33:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:46.323 00:16:46.323 22:33:10 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:46.323 22:33:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:46.323 22:33:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:46.582 22:33:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:46.582 22:33:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:46.582 22:33:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:46.582 22:33:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:46.582 22:33:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:46.582 22:33:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:46.582 { 00:16:46.582 "cntlid": 63, 00:16:46.582 "qid": 0, 00:16:46.582 "state": "enabled", 00:16:46.582 "thread": "nvmf_tgt_poll_group_000", 00:16:46.582 "listen_address": { 00:16:46.582 "trtype": "TCP", 00:16:46.582 "adrfam": "IPv4", 00:16:46.582 "traddr": "10.0.0.2", 00:16:46.582 "trsvcid": "4420" 00:16:46.582 }, 00:16:46.582 "peer_address": { 00:16:46.582 "trtype": "TCP", 00:16:46.582 "adrfam": "IPv4", 00:16:46.582 "traddr": "10.0.0.1", 00:16:46.582 "trsvcid": "50300" 00:16:46.582 }, 00:16:46.582 "auth": { 00:16:46.582 "state": "completed", 00:16:46.582 "digest": "sha384", 00:16:46.582 "dhgroup": "ffdhe2048" 00:16:46.582 } 00:16:46.582 } 00:16:46.582 ]' 00:16:46.582 22:33:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:46.582 22:33:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:46.582 22:33:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:46.582 22:33:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:46.582 22:33:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:46.841 22:33:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:46.841 22:33:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:46.842 22:33:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:46.842 22:33:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:NmQ3NGU4ODcyMTlhMDAxZDA4MTA1YTk2MDMzNTdlYjEzNTI4YzJlYjhlZjBiZmVkYjRmOGIwNTI3NzA3N2U4NOUOBPk=: 00:16:47.411 22:33:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:47.411 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:47.411 22:33:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:47.411 22:33:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:47.411 22:33:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
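
Each iteration of the sweep above follows the same shape: constrain the host-side bdev layer to one digest/dhgroup pair, authorize the host NQN on the subsystem with one key pair, attach a controller (which runs the DH-HMAC-CHAP handshake), check the negotiated parameters on the target qpair, then tear down and repeat the handshake once with the kernel initiator. Below is a condensed standalone sketch of the iteration that resumes next in the trace (sha384/ffdhe3072/key0). The rpc.py and nvme-cli invocations are the ones visible in the trace; key0/ckey0 are key names registered earlier in the run, outside this excerpt, and the DHHC-1 secrets shown are placeholders rather than the real test keys.

    #!/usr/bin/env bash
    # Sketch of one connect_authenticate iteration, assembled from the
    # commands visible in the trace; DHHC-1 secrets below are placeholders.
    set -e
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    hostsock=/var/tmp/host.sock   # host-side SPDK app (the "hostrpc" wrapper)
    subnqn=nqn.2024-03.io.spdk:cnode0
    hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562

    # Host side: allow only the digest/dhgroup pair under test.
    $rpc -s $hostsock bdev_nvme_set_options \
        --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072

    # Target side: authorize the host NQN with the key pair under test
    # (key0/ckey0 are keyring names registered earlier in the run).
    $rpc nvmf_subsystem_add_host $subnqn $hostnqn \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0

    # Attach from the host; this performs the DH-HMAC-CHAP handshake.
    $rpc -s $hostsock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.2 -s 4420 -q $hostnqn -n $subnqn \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0
    [[ $($rpc -s $hostsock bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]

    # Target side: the qpair should report completed authentication with
    # the negotiated digest and dhgroup.
    qpairs=$($rpc nvmf_subsystem_get_qpairs $subnqn)
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha384 ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe3072 ]]

    # Tear down, then repeat the handshake once with the kernel initiator.
    $rpc -s $hostsock bdev_nvme_detach_controller nvme0
    nvme connect -t tcp -a 10.0.0.2 -n $subnqn -i 1 -q $hostnqn \
        --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 \
        --dhchap-secret 'DHHC-1:00:<host key placeholder>' \
        --dhchap-ctrl-secret 'DHHC-1:03:<controller key placeholder>'
    nvme disconnect -n $subnqn
    $rpc nvmf_subsystem_remove_host $subnqn $hostnqn

The surrounding loops (target/auth.sh@91 through @93 in the trace) sweep this block over every digest, every dhgroup, and all four key indexes, which is why the same pattern repeats throughout the log.
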
00:16:47.411 22:33:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:47.411 22:33:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:47.411 22:33:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:47.411 22:33:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:47.411 22:33:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:47.671 22:33:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 0 00:16:47.671 22:33:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:47.671 22:33:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:47.671 22:33:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:16:47.671 22:33:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:47.671 22:33:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:47.671 22:33:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:47.671 22:33:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:47.671 22:33:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:47.671 22:33:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:47.671 22:33:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:47.671 22:33:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:47.930 00:16:47.930 22:33:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:47.930 22:33:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:47.930 22:33:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:48.189 22:33:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:48.189 22:33:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:48.189 22:33:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:48.189 22:33:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:48.189 22:33:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:48.189 22:33:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:48.189 { 
00:16:48.189 "cntlid": 65, 00:16:48.189 "qid": 0, 00:16:48.189 "state": "enabled", 00:16:48.189 "thread": "nvmf_tgt_poll_group_000", 00:16:48.189 "listen_address": { 00:16:48.189 "trtype": "TCP", 00:16:48.189 "adrfam": "IPv4", 00:16:48.189 "traddr": "10.0.0.2", 00:16:48.189 "trsvcid": "4420" 00:16:48.189 }, 00:16:48.189 "peer_address": { 00:16:48.189 "trtype": "TCP", 00:16:48.189 "adrfam": "IPv4", 00:16:48.189 "traddr": "10.0.0.1", 00:16:48.189 "trsvcid": "50348" 00:16:48.189 }, 00:16:48.189 "auth": { 00:16:48.189 "state": "completed", 00:16:48.189 "digest": "sha384", 00:16:48.189 "dhgroup": "ffdhe3072" 00:16:48.189 } 00:16:48.189 } 00:16:48.189 ]' 00:16:48.189 22:33:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:48.189 22:33:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:48.189 22:33:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:48.189 22:33:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:48.189 22:33:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:48.190 22:33:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:48.190 22:33:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:48.190 22:33:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:48.449 22:33:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:Yzg0MGYwMjc1NmM3YmE4MmNmZWNkNGY5NDMxZGFlMzY4NjA1OWZkZTVkMGJkZjhlTjKeaQ==: --dhchap-ctrl-secret DHHC-1:03:OWRjYjY4OTViMWNhNjI2NTRmYTc5MjIwMjUyMjM1MTYyZmZkYzM2ZjIxZTE3NTBiYjBhOWQyYzUxNDE2NzI0ZOpAtHY=: 00:16:49.018 22:33:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:49.018 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:49.018 22:33:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:49.018 22:33:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:49.018 22:33:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:49.018 22:33:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:49.018 22:33:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:49.018 22:33:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:49.018 22:33:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:49.278 22:33:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 1 00:16:49.278 22:33:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:49.278 22:33:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- 
# digest=sha384 00:16:49.278 22:33:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:16:49.278 22:33:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:49.278 22:33:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:49.278 22:33:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:49.278 22:33:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:49.278 22:33:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:49.278 22:33:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:49.278 22:33:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:49.278 22:33:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:49.278 00:16:49.278 22:33:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:49.278 22:33:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:49.278 22:33:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:49.536 22:33:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:49.536 22:33:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:49.536 22:33:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:49.536 22:33:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:49.536 22:33:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:49.536 22:33:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:49.536 { 00:16:49.536 "cntlid": 67, 00:16:49.536 "qid": 0, 00:16:49.536 "state": "enabled", 00:16:49.536 "thread": "nvmf_tgt_poll_group_000", 00:16:49.536 "listen_address": { 00:16:49.536 "trtype": "TCP", 00:16:49.536 "adrfam": "IPv4", 00:16:49.536 "traddr": "10.0.0.2", 00:16:49.536 "trsvcid": "4420" 00:16:49.536 }, 00:16:49.536 "peer_address": { 00:16:49.536 "trtype": "TCP", 00:16:49.536 "adrfam": "IPv4", 00:16:49.536 "traddr": "10.0.0.1", 00:16:49.536 "trsvcid": "50376" 00:16:49.536 }, 00:16:49.536 "auth": { 00:16:49.536 "state": "completed", 00:16:49.536 "digest": "sha384", 00:16:49.536 "dhgroup": "ffdhe3072" 00:16:49.536 } 00:16:49.536 } 00:16:49.536 ]' 00:16:49.536 22:33:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:49.536 22:33:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:49.536 22:33:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:49.794 22:33:13 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:49.794 22:33:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:49.794 22:33:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:49.794 22:33:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:49.794 22:33:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:49.794 22:33:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:YTg5ZDFiN2NhMjEyZTY0YzU5YTBiODg5ZWU5ZDgzM2HK8D1h: --dhchap-ctrl-secret DHHC-1:02:YTRlZjY2MmRhNjI1MzdjODFmNGI4MjJkNmRlMDgwOTgwZDE4ODVkMzQ1NWJjN2VlAeePrQ==: 00:16:50.361 22:33:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:50.361 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:50.361 22:33:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:50.361 22:33:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:50.361 22:33:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.361 22:33:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:50.361 22:33:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:50.361 22:33:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:50.361 22:33:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:50.620 22:33:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 2 00:16:50.620 22:33:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:50.620 22:33:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:50.620 22:33:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:16:50.620 22:33:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:50.620 22:33:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:50.620 22:33:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:50.620 22:33:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:50.620 22:33:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.620 22:33:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:50.620 22:33:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:50.620 22:33:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:50.878 00:16:50.878 22:33:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:50.878 22:33:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:50.878 22:33:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:51.137 22:33:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:51.137 22:33:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:51.137 22:33:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:51.137 22:33:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:51.137 22:33:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:51.137 22:33:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:51.137 { 00:16:51.138 "cntlid": 69, 00:16:51.138 "qid": 0, 00:16:51.138 "state": "enabled", 00:16:51.138 "thread": "nvmf_tgt_poll_group_000", 00:16:51.138 "listen_address": { 00:16:51.138 "trtype": "TCP", 00:16:51.138 "adrfam": "IPv4", 00:16:51.138 "traddr": "10.0.0.2", 00:16:51.138 "trsvcid": "4420" 00:16:51.138 }, 00:16:51.138 "peer_address": { 00:16:51.138 "trtype": "TCP", 00:16:51.138 "adrfam": "IPv4", 00:16:51.138 "traddr": "10.0.0.1", 00:16:51.138 "trsvcid": "50404" 00:16:51.138 }, 00:16:51.138 "auth": { 00:16:51.138 "state": "completed", 00:16:51.138 "digest": "sha384", 00:16:51.138 "dhgroup": "ffdhe3072" 00:16:51.138 } 00:16:51.138 } 00:16:51.138 ]' 00:16:51.138 22:33:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:51.138 22:33:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:51.138 22:33:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:51.138 22:33:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:51.138 22:33:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:51.138 22:33:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:51.138 22:33:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:51.138 22:33:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:51.396 22:33:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:Njc3OWQ0Y2M3ZjNjMmQxYzVlMjNhNWVjYTA4NTQzMDkwYzAwZTExNmE4ZTE0ODg2gtNdCQ==: --dhchap-ctrl-secret 
DHHC-1:01:YzIwZjU0YTM4NWY2NmU3MDAwOGU0NTY5Yjg3ZDc4NGUMHgTg: 00:16:51.965 22:33:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:51.965 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:51.965 22:33:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:51.965 22:33:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:51.965 22:33:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:51.965 22:33:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:51.965 22:33:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:51.965 22:33:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:51.965 22:33:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:52.224 22:33:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 3 00:16:52.224 22:33:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:52.224 22:33:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:52.224 22:33:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:16:52.224 22:33:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:52.224 22:33:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:52.224 22:33:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:16:52.224 22:33:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:52.224 22:33:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:52.224 22:33:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:52.224 22:33:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:52.224 22:33:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:52.483 00:16:52.483 22:33:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:52.483 22:33:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:52.483 22:33:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:52.483 22:33:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:52.483 22:33:16 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:52.483 22:33:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:52.483 22:33:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:52.483 22:33:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:52.483 22:33:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:52.483 { 00:16:52.483 "cntlid": 71, 00:16:52.483 "qid": 0, 00:16:52.483 "state": "enabled", 00:16:52.483 "thread": "nvmf_tgt_poll_group_000", 00:16:52.483 "listen_address": { 00:16:52.483 "trtype": "TCP", 00:16:52.483 "adrfam": "IPv4", 00:16:52.483 "traddr": "10.0.0.2", 00:16:52.483 "trsvcid": "4420" 00:16:52.483 }, 00:16:52.483 "peer_address": { 00:16:52.483 "trtype": "TCP", 00:16:52.483 "adrfam": "IPv4", 00:16:52.483 "traddr": "10.0.0.1", 00:16:52.483 "trsvcid": "50428" 00:16:52.483 }, 00:16:52.483 "auth": { 00:16:52.483 "state": "completed", 00:16:52.483 "digest": "sha384", 00:16:52.483 "dhgroup": "ffdhe3072" 00:16:52.483 } 00:16:52.483 } 00:16:52.483 ]' 00:16:52.483 22:33:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:52.742 22:33:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:52.742 22:33:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:52.742 22:33:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:52.742 22:33:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:52.742 22:33:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:52.742 22:33:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:52.742 22:33:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:53.000 22:33:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:NmQ3NGU4ODcyMTlhMDAxZDA4MTA1YTk2MDMzNTdlYjEzNTI4YzJlYjhlZjBiZmVkYjRmOGIwNTI3NzA3N2U4NOUOBPk=: 00:16:53.565 22:33:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:53.565 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:53.565 22:33:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:53.565 22:33:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:53.565 22:33:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:53.565 22:33:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:53.565 22:33:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:53.565 22:33:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:53.565 22:33:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:53.565 22:33:17 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:53.565 22:33:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 0 00:16:53.565 22:33:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:53.565 22:33:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:53.565 22:33:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:16:53.565 22:33:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:53.565 22:33:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:53.565 22:33:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:53.565 22:33:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:53.565 22:33:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:53.565 22:33:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:53.565 22:33:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:53.565 22:33:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:53.823 00:16:53.823 22:33:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:53.823 22:33:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:53.823 22:33:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:54.080 22:33:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:54.080 22:33:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:54.080 22:33:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:54.080 22:33:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:54.080 22:33:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:54.080 22:33:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:54.080 { 00:16:54.080 "cntlid": 73, 00:16:54.080 "qid": 0, 00:16:54.080 "state": "enabled", 00:16:54.080 "thread": "nvmf_tgt_poll_group_000", 00:16:54.080 "listen_address": { 00:16:54.080 "trtype": "TCP", 00:16:54.080 "adrfam": "IPv4", 00:16:54.080 "traddr": "10.0.0.2", 00:16:54.080 "trsvcid": "4420" 00:16:54.080 }, 00:16:54.080 "peer_address": { 00:16:54.080 "trtype": "TCP", 00:16:54.080 "adrfam": "IPv4", 00:16:54.080 "traddr": "10.0.0.1", 00:16:54.080 "trsvcid": "45552" 00:16:54.080 }, 00:16:54.080 "auth": { 00:16:54.080 
"state": "completed", 00:16:54.080 "digest": "sha384", 00:16:54.080 "dhgroup": "ffdhe4096" 00:16:54.080 } 00:16:54.080 } 00:16:54.080 ]' 00:16:54.080 22:33:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:54.080 22:33:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:54.080 22:33:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:54.080 22:33:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:54.081 22:33:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:54.338 22:33:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:54.338 22:33:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:54.338 22:33:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:54.339 22:33:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:Yzg0MGYwMjc1NmM3YmE4MmNmZWNkNGY5NDMxZGFlMzY4NjA1OWZkZTVkMGJkZjhlTjKeaQ==: --dhchap-ctrl-secret DHHC-1:03:OWRjYjY4OTViMWNhNjI2NTRmYTc5MjIwMjUyMjM1MTYyZmZkYzM2ZjIxZTE3NTBiYjBhOWQyYzUxNDE2NzI0ZOpAtHY=: 00:16:54.906 22:33:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:54.906 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:54.906 22:33:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:54.906 22:33:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:54.906 22:33:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:54.906 22:33:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:54.906 22:33:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:54.906 22:33:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:54.906 22:33:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:55.165 22:33:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 1 00:16:55.165 22:33:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:55.165 22:33:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:55.165 22:33:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:16:55.165 22:33:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:55.165 22:33:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:55.165 22:33:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:55.165 22:33:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:55.165 22:33:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:55.165 22:33:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:55.165 22:33:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:55.165 22:33:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:55.424 00:16:55.424 22:33:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:55.424 22:33:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:55.424 22:33:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:55.683 22:33:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:55.683 22:33:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:55.683 22:33:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:55.683 22:33:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:55.683 22:33:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:55.683 22:33:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:55.683 { 00:16:55.683 "cntlid": 75, 00:16:55.683 "qid": 0, 00:16:55.683 "state": "enabled", 00:16:55.683 "thread": "nvmf_tgt_poll_group_000", 00:16:55.683 "listen_address": { 00:16:55.683 "trtype": "TCP", 00:16:55.683 "adrfam": "IPv4", 00:16:55.683 "traddr": "10.0.0.2", 00:16:55.683 "trsvcid": "4420" 00:16:55.683 }, 00:16:55.683 "peer_address": { 00:16:55.683 "trtype": "TCP", 00:16:55.683 "adrfam": "IPv4", 00:16:55.683 "traddr": "10.0.0.1", 00:16:55.683 "trsvcid": "45574" 00:16:55.683 }, 00:16:55.683 "auth": { 00:16:55.683 "state": "completed", 00:16:55.683 "digest": "sha384", 00:16:55.683 "dhgroup": "ffdhe4096" 00:16:55.683 } 00:16:55.683 } 00:16:55.683 ]' 00:16:55.683 22:33:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:55.683 22:33:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:55.683 22:33:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:55.683 22:33:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:55.683 22:33:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:55.683 22:33:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:55.683 22:33:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:55.683 22:33:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:55.941 22:33:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:YTg5ZDFiN2NhMjEyZTY0YzU5YTBiODg5ZWU5ZDgzM2HK8D1h: --dhchap-ctrl-secret DHHC-1:02:YTRlZjY2MmRhNjI1MzdjODFmNGI4MjJkNmRlMDgwOTgwZDE4ODVkMzQ1NWJjN2VlAeePrQ==: 00:16:56.509 22:33:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:56.509 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:56.509 22:33:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:56.509 22:33:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:56.509 22:33:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:56.509 22:33:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:56.509 22:33:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:56.509 22:33:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:56.509 22:33:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:56.768 22:33:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 2 00:16:56.768 22:33:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:56.768 22:33:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:56.768 22:33:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:16:56.768 22:33:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:56.768 22:33:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:56.768 22:33:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:56.768 22:33:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:56.768 22:33:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:56.768 22:33:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:56.768 22:33:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:56.769 22:33:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 
--dhchap-ctrlr-key ckey2 00:16:57.028 00:16:57.028 22:33:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:57.028 22:33:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:57.028 22:33:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:57.028 22:33:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:57.028 22:33:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:57.028 22:33:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:57.028 22:33:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:57.028 22:33:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:57.028 22:33:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:57.028 { 00:16:57.028 "cntlid": 77, 00:16:57.028 "qid": 0, 00:16:57.028 "state": "enabled", 00:16:57.028 "thread": "nvmf_tgt_poll_group_000", 00:16:57.028 "listen_address": { 00:16:57.028 "trtype": "TCP", 00:16:57.028 "adrfam": "IPv4", 00:16:57.028 "traddr": "10.0.0.2", 00:16:57.028 "trsvcid": "4420" 00:16:57.028 }, 00:16:57.028 "peer_address": { 00:16:57.028 "trtype": "TCP", 00:16:57.028 "adrfam": "IPv4", 00:16:57.028 "traddr": "10.0.0.1", 00:16:57.028 "trsvcid": "45602" 00:16:57.028 }, 00:16:57.028 "auth": { 00:16:57.028 "state": "completed", 00:16:57.028 "digest": "sha384", 00:16:57.028 "dhgroup": "ffdhe4096" 00:16:57.028 } 00:16:57.028 } 00:16:57.028 ]' 00:16:57.028 22:33:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:57.287 22:33:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:57.287 22:33:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:57.287 22:33:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:57.287 22:33:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:57.287 22:33:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:57.287 22:33:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:57.287 22:33:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:57.287 22:33:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:Njc3OWQ0Y2M3ZjNjMmQxYzVlMjNhNWVjYTA4NTQzMDkwYzAwZTExNmE4ZTE0ODg2gtNdCQ==: --dhchap-ctrl-secret DHHC-1:01:YzIwZjU0YTM4NWY2NmU3MDAwOGU0NTY5Yjg3ZDc4NGUMHgTg: 00:16:57.853 22:33:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:58.111 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:58.111 22:33:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:58.111 22:33:21 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:16:58.111 22:33:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:58.111 22:33:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:58.111 22:33:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:58.111 22:33:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:58.111 22:33:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:58.111 22:33:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 3 00:16:58.111 22:33:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:58.111 22:33:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:58.111 22:33:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:16:58.111 22:33:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:58.111 22:33:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:58.111 22:33:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:16:58.111 22:33:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:58.111 22:33:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:58.112 22:33:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:58.112 22:33:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:58.112 22:33:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:58.370 00:16:58.370 22:33:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:58.370 22:33:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:58.370 22:33:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:58.629 22:33:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:58.629 22:33:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:58.629 22:33:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:58.629 22:33:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:58.629 22:33:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:58.629 22:33:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:58.629 { 00:16:58.629 "cntlid": 79, 00:16:58.629 "qid": 
0, 00:16:58.629 "state": "enabled", 00:16:58.629 "thread": "nvmf_tgt_poll_group_000", 00:16:58.629 "listen_address": { 00:16:58.629 "trtype": "TCP", 00:16:58.629 "adrfam": "IPv4", 00:16:58.629 "traddr": "10.0.0.2", 00:16:58.629 "trsvcid": "4420" 00:16:58.629 }, 00:16:58.629 "peer_address": { 00:16:58.629 "trtype": "TCP", 00:16:58.629 "adrfam": "IPv4", 00:16:58.629 "traddr": "10.0.0.1", 00:16:58.629 "trsvcid": "45630" 00:16:58.629 }, 00:16:58.629 "auth": { 00:16:58.629 "state": "completed", 00:16:58.629 "digest": "sha384", 00:16:58.629 "dhgroup": "ffdhe4096" 00:16:58.629 } 00:16:58.629 } 00:16:58.629 ]' 00:16:58.629 22:33:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:58.629 22:33:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:58.629 22:33:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:58.629 22:33:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:58.629 22:33:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:58.898 22:33:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:58.898 22:33:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:58.898 22:33:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:58.898 22:33:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:NmQ3NGU4ODcyMTlhMDAxZDA4MTA1YTk2MDMzNTdlYjEzNTI4YzJlYjhlZjBiZmVkYjRmOGIwNTI3NzA3N2U4NOUOBPk=: 00:16:59.508 22:33:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:59.508 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:59.508 22:33:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:59.508 22:33:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:59.508 22:33:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:59.508 22:33:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:59.508 22:33:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:59.508 22:33:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:59.508 22:33:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:59.508 22:33:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:59.770 22:33:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 0 00:16:59.770 22:33:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:59.770 22:33:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:59.770 22:33:23 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:16:59.770 22:33:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:59.770 22:33:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:59.770 22:33:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:59.770 22:33:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:59.770 22:33:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:59.770 22:33:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:59.770 22:33:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:59.770 22:33:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:00.028 00:17:00.028 22:33:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:00.028 22:33:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:00.028 22:33:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:00.286 22:33:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:00.286 22:33:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:00.286 22:33:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:00.286 22:33:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:00.286 22:33:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:00.286 22:33:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:00.286 { 00:17:00.286 "cntlid": 81, 00:17:00.286 "qid": 0, 00:17:00.286 "state": "enabled", 00:17:00.286 "thread": "nvmf_tgt_poll_group_000", 00:17:00.286 "listen_address": { 00:17:00.286 "trtype": "TCP", 00:17:00.286 "adrfam": "IPv4", 00:17:00.286 "traddr": "10.0.0.2", 00:17:00.286 "trsvcid": "4420" 00:17:00.286 }, 00:17:00.286 "peer_address": { 00:17:00.286 "trtype": "TCP", 00:17:00.286 "adrfam": "IPv4", 00:17:00.286 "traddr": "10.0.0.1", 00:17:00.286 "trsvcid": "45656" 00:17:00.286 }, 00:17:00.286 "auth": { 00:17:00.286 "state": "completed", 00:17:00.286 "digest": "sha384", 00:17:00.286 "dhgroup": "ffdhe6144" 00:17:00.286 } 00:17:00.286 } 00:17:00.286 ]' 00:17:00.286 22:33:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:00.286 22:33:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:00.286 22:33:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:00.286 22:33:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ 
ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:00.286 22:33:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:00.286 22:33:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:00.286 22:33:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:00.286 22:33:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:00.544 22:33:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:Yzg0MGYwMjc1NmM3YmE4MmNmZWNkNGY5NDMxZGFlMzY4NjA1OWZkZTVkMGJkZjhlTjKeaQ==: --dhchap-ctrl-secret DHHC-1:03:OWRjYjY4OTViMWNhNjI2NTRmYTc5MjIwMjUyMjM1MTYyZmZkYzM2ZjIxZTE3NTBiYjBhOWQyYzUxNDE2NzI0ZOpAtHY=: 00:17:01.157 22:33:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:01.157 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:01.157 22:33:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:01.157 22:33:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:01.157 22:33:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:01.157 22:33:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:01.157 22:33:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:01.157 22:33:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:01.157 22:33:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:01.416 22:33:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 1 00:17:01.416 22:33:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:01.416 22:33:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:01.416 22:33:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:17:01.416 22:33:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:01.416 22:33:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:01.416 22:33:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:01.416 22:33:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:01.416 22:33:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:01.416 22:33:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:01.416 22:33:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:01.416 22:33:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:01.674 00:17:01.674 22:33:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:01.674 22:33:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:01.674 22:33:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:01.932 22:33:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:01.932 22:33:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:01.932 22:33:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:01.932 22:33:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:01.932 22:33:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:01.932 22:33:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:01.932 { 00:17:01.932 "cntlid": 83, 00:17:01.932 "qid": 0, 00:17:01.932 "state": "enabled", 00:17:01.932 "thread": "nvmf_tgt_poll_group_000", 00:17:01.932 "listen_address": { 00:17:01.932 "trtype": "TCP", 00:17:01.932 "adrfam": "IPv4", 00:17:01.932 "traddr": "10.0.0.2", 00:17:01.932 "trsvcid": "4420" 00:17:01.932 }, 00:17:01.932 "peer_address": { 00:17:01.932 "trtype": "TCP", 00:17:01.932 "adrfam": "IPv4", 00:17:01.932 "traddr": "10.0.0.1", 00:17:01.932 "trsvcid": "45688" 00:17:01.932 }, 00:17:01.932 "auth": { 00:17:01.932 "state": "completed", 00:17:01.932 "digest": "sha384", 00:17:01.932 "dhgroup": "ffdhe6144" 00:17:01.932 } 00:17:01.932 } 00:17:01.932 ]' 00:17:01.932 22:33:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:01.932 22:33:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:01.932 22:33:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:01.932 22:33:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:01.932 22:33:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:01.932 22:33:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:01.932 22:33:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:01.932 22:33:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:02.190 22:33:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:YTg5ZDFiN2NhMjEyZTY0YzU5YTBiODg5ZWU5ZDgzM2HK8D1h: --dhchap-ctrl-secret 
00:17:02.190 22:33:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:YTg5ZDFiN2NhMjEyZTY0YzU5YTBiODg5ZWU5ZDgzM2HK8D1h: --dhchap-ctrl-secret DHHC-1:02:YTRlZjY2MmRhNjI1MzdjODFmNGI4MjJkNmRlMDgwOTgwZDE4ODVkMzQ1NWJjN2VlAeePrQ==:
00:17:02.756 22:33:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:17:02.756 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:17:02.756 22:33:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
00:17:02.756 22:33:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:17:02.756 22:33:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:02.756 22:33:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:17:02.756 22:33:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:17:02.756 22:33:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:17:02.756 22:33:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:17:03.016 22:33:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 2
00:17:03.016 22:33:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:17:03.016 22:33:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384
00:17:03.016 22:33:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144
00:17:03.016 22:33:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2
00:17:03.016 22:33:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:17:03.016 22:33:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:17:03.016 22:33:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:17:03.016 22:33:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:03.016 22:33:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:17:03.016 22:33:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:17:03.016 22:33:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:17:03.275
00:17:03.275 22:33:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:17:03.275 22:33:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:17:03.275 22:33:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:17:03.534 22:33:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:17:03.534 22:33:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:17:03.534 22:33:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:17:03.534 22:33:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:03.534 22:33:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:17:03.534 22:33:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:17:03.534 {
00:17:03.534 "cntlid": 85,
00:17:03.534 "qid": 0,
00:17:03.534 "state": "enabled",
00:17:03.534 "thread": "nvmf_tgt_poll_group_000",
00:17:03.534 "listen_address": {
00:17:03.534 "trtype": "TCP",
00:17:03.534 "adrfam": "IPv4",
00:17:03.534 "traddr": "10.0.0.2",
00:17:03.534 "trsvcid": "4420"
00:17:03.534 },
00:17:03.534 "peer_address": {
00:17:03.534 "trtype": "TCP",
00:17:03.534 "adrfam": "IPv4",
00:17:03.534 "traddr": "10.0.0.1",
00:17:03.534 "trsvcid": "47676"
00:17:03.534 },
00:17:03.534 "auth": {
00:17:03.534 "state": "completed",
00:17:03.534 "digest": "sha384",
00:17:03.534 "dhgroup": "ffdhe6144"
00:17:03.534 }
00:17:03.534 }
00:17:03.534 ]'
00:17:03.534 22:33:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:17:03.534 22:33:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:17:03.534 22:33:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:17:03.534 22:33:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]]
00:17:03.534 22:33:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:17:03.534 22:33:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:17:03.534 22:33:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:17:03.534 22:33:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:17:03.794 22:33:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:Njc3OWQ0Y2M3ZjNjMmQxYzVlMjNhNWVjYTA4NTQzMDkwYzAwZTExNmE4ZTE0ODg2gtNdCQ==: --dhchap-ctrl-secret DHHC-1:01:YzIwZjU0YTM4NWY2NmU3MDAwOGU0NTY5Yjg3ZDc4NGUMHgTg:
00:17:04.362 22:33:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:17:04.362 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:17:04.362 22:33:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
00:17:04.362 22:33:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:17:04.362 22:33:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:04.362 22:33:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
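The --dhchap-secret/--dhchap-ctrl-secret strings on the nvme connect lines use the NVMe-oF secret representation DHHC-1:<t>:<base64 secret + CRC>:, where <t> names the transformation hash applied to the secret: 00 none, 01 SHA-256, 02 SHA-384, 03 SHA-512. So DHHC-1:01:... and DHHC-1:02:... above are differently transformed host/controller secrets, not different protocols. Outside this suite, nvme-cli can mint such a secret; a hedged sketch (flag spellings vary between nvme-cli versions):

    # Generate a SHA-256-transformed DH-HMAC-CHAP secret ("DHHC-1:01:...:").
    nvme gen-dhchap-key --hmac=1 --key-length=32 \
        --nqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562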
00:17:04.362 22:33:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:17:04.362 22:33:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:17:04.362 22:33:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:17:04.620 22:33:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 3
00:17:04.620 22:33:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:17:04.620 22:33:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384
00:17:04.620 22:33:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144
00:17:04.620 22:33:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3
00:17:04.620 22:33:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:17:04.620 22:33:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3
00:17:04.620 22:33:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:17:04.620 22:33:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:04.620 22:33:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:17:04.620 22:33:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:17:04.621 22:33:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:17:04.880
00:17:04.880 22:33:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:17:04.880 22:33:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:17:04.880 22:33:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:17:05.139 22:33:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:17:05.139 22:33:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:17:05.139 22:33:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:17:05.139 22:33:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:05.139 22:33:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:17:05.139 22:33:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:17:05.139 {
00:17:05.139 "cntlid": 87,
00:17:05.139 "qid": 0,
00:17:05.139 "state": "enabled",
00:17:05.139 "thread": "nvmf_tgt_poll_group_000",
00:17:05.139 "listen_address": {
00:17:05.139 "trtype": "TCP",
00:17:05.139 "adrfam": "IPv4",
00:17:05.139 "traddr": "10.0.0.2",
00:17:05.139 "trsvcid": "4420"
00:17:05.139 },
00:17:05.139 "peer_address": {
00:17:05.139 "trtype": "TCP",
00:17:05.139 "adrfam": "IPv4",
00:17:05.139 "traddr": "10.0.0.1",
00:17:05.139 "trsvcid": "47694"
00:17:05.139 },
00:17:05.139 "auth": {
00:17:05.139 "state": "completed",
00:17:05.139 "digest": "sha384",
00:17:05.139 "dhgroup": "ffdhe6144"
00:17:05.139 }
00:17:05.139 }
00:17:05.139 ]'
00:17:05.139 22:33:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:17:05.139 22:33:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:17:05.139 22:33:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:17:05.139 22:33:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]]
00:17:05.139 22:33:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:17:05.139 22:33:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:17:05.139 22:33:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:17:05.139 22:33:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:17:05.398 22:33:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:NmQ3NGU4ODcyMTlhMDAxZDA4MTA1YTk2MDMzNTdlYjEzNTI4YzJlYjhlZjBiZmVkYjRmOGIwNTI3NzA3N2U4NOUOBPk=:
00:17:05.966 22:33:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:17:05.966 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:17:05.966 22:33:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
00:17:05.966 22:33:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:17:05.966 22:33:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:05.966 22:33:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:17:05.966 22:33:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}"
00:17:05.966 22:33:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:17:05.966 22:33:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:17:05.966 22:33:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:17:06.225 22:33:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 0
00:17:06.225 22:33:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:17:06.225 22:33:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384
00:17:06.225 22:33:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192
00:17:06.225 22:33:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0
00:17:06.225 22:33:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:17:06.225 22:33:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:17:06.225 22:33:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:17:06.225 22:33:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:06.225 22:33:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:17:06.225 22:33:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:17:06.225 22:33:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:17:06.484
00:17:06.484 22:33:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:17:06.484 22:33:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:17:06.484 22:33:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:17:06.743 22:33:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:17:06.743 22:33:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:17:06.743 22:33:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:17:06.743 22:33:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:06.743 22:33:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:17:06.743 22:33:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:17:06.743 {
00:17:06.743 "cntlid": 89,
00:17:06.743 "qid": 0,
00:17:06.743 "state": "enabled",
00:17:06.743 "thread": "nvmf_tgt_poll_group_000",
00:17:06.743 "listen_address": {
00:17:06.743 "trtype": "TCP",
00:17:06.743 "adrfam": "IPv4",
00:17:06.743 "traddr": "10.0.0.2",
00:17:06.743 "trsvcid": "4420"
00:17:06.743 },
00:17:06.743 "peer_address": {
00:17:06.743 "trtype": "TCP",
00:17:06.743 "adrfam": "IPv4",
00:17:06.743 "traddr": "10.0.0.1",
00:17:06.743 "trsvcid": "47728"
00:17:06.743 },
00:17:06.743 "auth": {
00:17:06.743 "state": "completed",
00:17:06.743 "digest": "sha384",
00:17:06.743 "dhgroup": "ffdhe8192"
00:17:06.743 }
00:17:06.743 }
00:17:06.743 ]'
00:17:06.743 22:33:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:17:06.743 22:33:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:17:06.743 22:33:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:17:06.743 22:33:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:17:06.743 22:33:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:17:07.002 22:33:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:17:07.002 22:33:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:17:07.002 22:33:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:17:07.002 22:33:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:Yzg0MGYwMjc1NmM3YmE4MmNmZWNkNGY5NDMxZGFlMzY4NjA1OWZkZTVkMGJkZjhlTjKeaQ==: --dhchap-ctrl-secret DHHC-1:03:OWRjYjY4OTViMWNhNjI2NTRmYTc5MjIwMjUyMjM1MTYyZmZkYzM2ZjIxZTE3NTBiYjBhOWQyYzUxNDE2NzI0ZOpAtHY=:
00:17:07.569 22:33:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:17:07.569 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:17:07.569 22:33:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
00:17:07.569 22:33:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:17:07.569 22:33:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:07.569 22:33:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
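The for digest / for dhgroup / for keyid echoes from target/auth.sh@91-@93 show the suite sweeping a full cross-product; the dhgroup loop has just advanced from ffdhe6144 to ffdhe8192 while the digest stays sha384. Schematically (array contents inferred from the combinations visible in this excerpt, so the real script may list other values):

    # Inferred shape of the matrix that target/auth.sh drives.
    digests=(sha256 sha384 sha512)        # sha384 and sha512 are visible here
    dhgroups=(null ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192)
    for digest in "${digests[@]}"; do
        for dhgroup in "${dhgroups[@]}"; do
            for keyid in "${!keys[@]}"; do
                hostrpc bdev_nvme_set_options \
                    --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
                connect_authenticate "$digest" "$dhgroup" "$keyid"
            done
        done
    done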
00:17:07.569 22:33:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:17:07.569 22:33:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:17:07.569 22:33:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:17:07.829 22:33:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 1
00:17:07.829 22:33:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:17:07.829 22:33:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384
00:17:07.829 22:33:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192
00:17:07.829 22:33:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1
00:17:07.829 22:33:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:17:07.829 22:33:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:17:07.829 22:33:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:17:07.829 22:33:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:07.829 22:33:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:17:07.829 22:33:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:17:07.829 22:33:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:17:08.396
00:17:08.396 22:33:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:17:08.396 22:33:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:17:08.396 22:33:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:17:08.396 22:33:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:17:08.396 22:33:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:17:08.396 22:33:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:17:08.396 22:33:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:08.396 22:33:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:17:08.396 22:33:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:17:08.396 {
00:17:08.396 "cntlid": 91,
00:17:08.396 "qid": 0,
00:17:08.396 "state": "enabled",
00:17:08.396 "thread": "nvmf_tgt_poll_group_000",
00:17:08.396 "listen_address": {
00:17:08.396 "trtype": "TCP",
00:17:08.396 "adrfam": "IPv4",
00:17:08.396 "traddr": "10.0.0.2",
00:17:08.396 "trsvcid": "4420"
00:17:08.396 },
00:17:08.396 "peer_address": {
00:17:08.396 "trtype": "TCP",
00:17:08.396 "adrfam": "IPv4",
00:17:08.396 "traddr": "10.0.0.1",
00:17:08.396 "trsvcid": "47758"
00:17:08.396 },
00:17:08.396 "auth": {
00:17:08.396 "state": "completed",
00:17:08.396 "digest": "sha384",
00:17:08.396 "dhgroup": "ffdhe8192"
00:17:08.396 }
00:17:08.396 }
00:17:08.396 ]'
00:17:08.656 22:33:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:17:08.656 22:33:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:17:08.656 22:33:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:17:08.656 22:33:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:17:08.656 22:33:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:17:08.656 22:33:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:17:08.656 22:33:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:17:08.656 22:33:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:17:08.656 22:33:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:YTg5ZDFiN2NhMjEyZTY0YzU5YTBiODg5ZWU5ZDgzM2HK8D1h: --dhchap-ctrl-secret DHHC-1:02:YTRlZjY2MmRhNjI1MzdjODFmNGI4MjJkNmRlMDgwOTgwZDE4ODVkMzQ1NWJjN2VlAeePrQ==:
00:17:09.225 22:33:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:17:09.225 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:17:09.225 22:33:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
00:17:09.225 22:33:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:17:09.225 22:33:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:09.225 22:33:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:17:09.225 22:33:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:17:09.225 22:33:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:17:09.225 22:33:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:17:09.484 22:33:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 2
00:17:09.484 22:33:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:17:09.484 22:33:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384
00:17:09.484 22:33:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192
00:17:09.484 22:33:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2
00:17:09.484 22:33:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:17:09.484 22:33:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:17:09.484 22:33:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:17:09.484 22:33:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:09.484 22:33:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:17:09.484 22:33:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:17:09.484 22:33:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:17:10.050
00:17:10.050 22:33:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:17:10.050 22:33:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:17:10.050 22:33:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:17:10.309 22:33:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:17:10.309 22:33:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:17:10.309 22:33:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:17:10.309 22:33:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:10.309 22:33:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:17:10.309 22:33:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:17:10.309 {
00:17:10.309 "cntlid": 93,
00:17:10.309 "qid": 0,
00:17:10.309 "state": "enabled",
00:17:10.309 "thread": "nvmf_tgt_poll_group_000",
00:17:10.309 "listen_address": {
00:17:10.309 "trtype": "TCP",
00:17:10.309 "adrfam": "IPv4",
00:17:10.309 "traddr": "10.0.0.2",
00:17:10.309 "trsvcid": "4420"
00:17:10.309 },
00:17:10.309 "peer_address": {
00:17:10.309 "trtype": "TCP",
00:17:10.309 "adrfam": "IPv4",
00:17:10.309 "traddr": "10.0.0.1",
00:17:10.309 "trsvcid": "47788"
00:17:10.309 },
00:17:10.309 "auth": {
00:17:10.309 "state": "completed",
00:17:10.309 "digest": "sha384",
00:17:10.309 "dhgroup": "ffdhe8192"
00:17:10.309 }
00:17:10.309 }
00:17:10.309 ]'
00:17:10.309 22:33:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:17:10.309 22:33:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:17:10.309 22:33:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:17:10.309 22:33:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:17:10.309 22:33:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:17:10.309 22:33:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:17:10.309 22:33:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:17:10.309 22:33:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:17:10.568 22:33:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:Njc3OWQ0Y2M3ZjNjMmQxYzVlMjNhNWVjYTA4NTQzMDkwYzAwZTExNmE4ZTE0ODg2gtNdCQ==: --dhchap-ctrl-secret DHHC-1:01:YzIwZjU0YTM4NWY2NmU3MDAwOGU0NTY5Yjg3ZDc4NGUMHgTg:
00:17:11.136 22:33:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:17:11.136 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:17:11.136 22:33:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
00:17:11.136 22:33:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:17:11.136 22:33:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:11.136 22:33:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:17:11.136 22:33:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:17:11.136 22:33:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:17:11.136 22:33:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:17:11.395 22:33:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 3
00:17:11.395 22:33:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:17:11.395 22:33:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384
00:17:11.395 22:33:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192
00:17:11.395 22:33:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3
00:17:11.395 22:33:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:17:11.395 22:33:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3
00:17:11.395 22:33:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:17:11.395 22:33:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:11.395 22:33:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:17:11.395 22:33:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:17:11.395 22:33:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:17:11.655
00:17:11.655 22:33:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:17:11.655 22:33:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:17:11.655 22:33:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:17:11.914 22:33:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:17:11.914 22:33:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:17:11.914 22:33:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:17:11.914 22:33:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:11.914 22:33:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:17:11.914 22:33:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:17:11.914 {
00:17:11.914 "cntlid": 95,
00:17:11.914 "qid": 0,
00:17:11.914 "state": "enabled",
00:17:11.914 "thread": "nvmf_tgt_poll_group_000",
00:17:11.914 "listen_address": {
00:17:11.914 "trtype": "TCP",
00:17:11.914 "adrfam": "IPv4",
00:17:11.914 "traddr": "10.0.0.2",
00:17:11.914 "trsvcid": "4420"
00:17:11.914 },
00:17:11.914 "peer_address": {
00:17:11.914 "trtype": "TCP",
00:17:11.914 "adrfam": "IPv4",
00:17:11.914 "traddr": "10.0.0.1",
00:17:11.914 "trsvcid": "47806"
00:17:11.914 },
00:17:11.914 "auth": {
00:17:11.914 "state": "completed",
00:17:11.914 "digest": "sha384",
00:17:11.914 "dhgroup": "ffdhe8192"
00:17:11.914 }
00:17:11.914 }
00:17:11.914 ]'
00:17:11.914 22:33:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:17:11.914 22:33:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:17:11.914 22:33:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:17:11.914 22:33:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
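Note the asymmetry in the key3 iteration being verified above: nvmf_subsystem_add_host and bdev_nvme_attach_controller carry only --dhchap-key key3, with no controller key. That is the ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) expansion at target/auth.sh@37 doing its job: ckeys[3] is unset, so the flag vanishes and key3 exercises unidirectional authentication (the host proves its identity; the controller is not challenged back), and the matching nvme connect passes only --dhchap-secret. Condensed, with keyid standing in for the script's positional $3:

    # The controller-key flag appears only when a ckey exists for this index.
    ckey=(${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"})
    rpc_cmd nvmf_subsystem_add_host "$SUBSYS" "$HOST" \
        --dhchap-key "key$keyid" "${ckey[@]}"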
00:17:11.914 22:33:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:17:12.174 22:33:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:17:12.174 22:33:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:17:12.174 22:33:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:17:12.174 22:33:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:NmQ3NGU4ODcyMTlhMDAxZDA4MTA1YTk2MDMzNTdlYjEzNTI4YzJlYjhlZjBiZmVkYjRmOGIwNTI3NzA3N2U4NOUOBPk=:
00:17:12.741 22:33:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:17:12.741 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:17:12.741 22:33:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
00:17:12.741 22:33:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:17:12.741 22:33:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:12.741 22:33:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:17:12.741 22:33:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}"
00:17:12.741 22:33:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}"
00:17:12.741 22:33:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:17:12.741 22:33:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null
00:17:12.741 22:33:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null
00:17:13.041 22:33:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 0
00:17:13.041 22:33:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:17:13.041 22:33:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512
00:17:13.041 22:33:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null
00:17:13.041 22:33:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0
00:17:13.041 22:33:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:17:13.041 22:33:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:17:13.041 22:33:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:17:13.041 22:33:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:13.041 22:33:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:17:13.041 22:33:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:17:13.041 22:33:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:17:13.299
00:17:13.299 22:33:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:17:13.299 22:33:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:17:13.299 22:33:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:17:13.558 22:33:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:17:13.558 22:33:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:17:13.558 22:33:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:17:13.558 22:33:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:13.558 22:33:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:17:13.558 22:33:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:17:13.558 {
00:17:13.558 "cntlid": 97,
00:17:13.558 "qid": 0,
00:17:13.558 "state": "enabled",
00:17:13.558 "thread": "nvmf_tgt_poll_group_000",
00:17:13.558 "listen_address": {
00:17:13.558 "trtype": "TCP",
00:17:13.558 "adrfam": "IPv4",
00:17:13.558 "traddr": "10.0.0.2",
00:17:13.558 "trsvcid": "4420"
00:17:13.558 },
00:17:13.558 "peer_address": {
00:17:13.558 "trtype": "TCP",
00:17:13.558 "adrfam": "IPv4",
00:17:13.558 "traddr": "10.0.0.1",
00:17:13.558 "trsvcid": "60006"
00:17:13.558 },
00:17:13.558 "auth": {
00:17:13.558 "state": "completed",
00:17:13.558 "digest": "sha512",
00:17:13.558 "dhgroup": "null"
00:17:13.558 }
00:17:13.558 }
00:17:13.558 ]'
00:17:13.558 22:33:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:17:13.558 22:33:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:17:13.558 22:33:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:17:13.558 22:33:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]]
00:17:13.558 22:33:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:17:13.558 22:33:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:17:13.558 22:33:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:17:13.558 22:33:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
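The digest loop has now moved to sha512 and the dhgroup loop restarted at null. A null dhgroup means the DH-HMAC-CHAP handshake runs as plain challenge-response on the shared secret, with no ephemeral Diffie-Hellman exchange layered on top; authentication still succeeds (the qpair above reports "dhgroup": "null" with "state": "completed"), but without the extra secrecy an FFDHE exchange contributes. On the host application this is selected exactly as in the trace:

    # Challenge-response only, no DH group (as in the records above).
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
        -s /var/tmp/host.sock bdev_nvme_set_options \
        --dhchap-digests sha512 --dhchap-dhgroups null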
00:17:13.817 22:33:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:Yzg0MGYwMjc1NmM3YmE4MmNmZWNkNGY5NDMxZGFlMzY4NjA1OWZkZTVkMGJkZjhlTjKeaQ==: --dhchap-ctrl-secret DHHC-1:03:OWRjYjY4OTViMWNhNjI2NTRmYTc5MjIwMjUyMjM1MTYyZmZkYzM2ZjIxZTE3NTBiYjBhOWQyYzUxNDE2NzI0ZOpAtHY=:
00:17:14.386 22:33:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:17:14.386 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:17:14.386 22:33:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
00:17:14.386 22:33:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:17:14.386 22:33:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:14.386 22:33:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:17:14.386 22:33:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:17:14.386 22:33:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null
00:17:14.386 22:33:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null
00:17:14.386 22:33:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 1
00:17:14.386 22:33:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:17:14.386 22:33:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512
00:17:14.386 22:33:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null
00:17:14.386 22:33:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1
00:17:14.386 22:33:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:17:14.386 22:33:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:17:14.386 22:33:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:17:14.386 22:33:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:14.386 22:33:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:17:14.386 22:33:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:17:14.386 22:33:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:17:14.646
00:17:14.646 22:33:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:17:14.646 22:33:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:17:14.646 22:33:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:17:14.905 22:33:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:17:14.905 22:33:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:17:14.905 22:33:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:17:14.905 22:33:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:14.905 22:33:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:17:14.905 22:33:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:17:14.905 {
00:17:14.905 "cntlid": 99,
00:17:14.905 "qid": 0,
00:17:14.905 "state": "enabled",
00:17:14.905 "thread": "nvmf_tgt_poll_group_000",
00:17:14.905 "listen_address": {
00:17:14.905 "trtype": "TCP",
00:17:14.905 "adrfam": "IPv4",
00:17:14.905 "traddr": "10.0.0.2",
00:17:14.905 "trsvcid": "4420"
00:17:14.905 },
00:17:14.905 "peer_address": {
00:17:14.905 "trtype": "TCP",
00:17:14.905 "adrfam": "IPv4",
00:17:14.905 "traddr": "10.0.0.1",
00:17:14.905 "trsvcid": "60040"
00:17:14.905 },
00:17:14.905 "auth": {
00:17:14.905 "state": "completed",
00:17:14.905 "digest": "sha512",
00:17:14.905 "dhgroup": "null"
00:17:14.905 }
00:17:14.905 }
00:17:14.905 ]'
00:17:14.905 22:33:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:17:14.905 22:33:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:17:14.905 22:33:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:17:14.905 22:33:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]]
00:17:14.905 22:33:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:17:15.164 22:33:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:17:15.164 22:33:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:17:15.164 22:33:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:17:15.164 22:33:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:YTg5ZDFiN2NhMjEyZTY0YzU5YTBiODg5ZWU5ZDgzM2HK8D1h: --dhchap-ctrl-secret DHHC-1:02:YTRlZjY2MmRhNjI1MzdjODFmNGI4MjJkNmRlMDgwOTgwZDE4ODVkMzQ1NWJjN2VlAeePrQ==:
00:17:15.731 22:33:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:17:15.731 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:17:15.731 22:33:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
00:17:15.731 22:33:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:17:15.731 22:33:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:15.731 22:33:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:17:15.731 22:33:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:17:15.731 22:33:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null
22:33:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null
00:17:15.990 22:33:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 2
00:17:15.990 22:33:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:17:15.990 22:33:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512
00:17:15.990 22:33:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null
00:17:15.990 22:33:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2
00:17:15.990 22:33:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:17:15.990 22:33:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:17:15.990 22:33:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:17:15.990 22:33:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:15.990 22:33:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:17:15.990 22:33:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:17:15.991 22:33:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:17:16.250
00:17:16.250 22:33:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:17:16.250 22:33:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:17:16.250 22:33:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:17:16.509 22:33:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:17:16.509 22:33:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:17:16.509 22:33:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:17:16.509 22:33:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:16.509 22:33:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:17:16.509 22:33:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:17:16.509 {
00:17:16.509 "cntlid": 101,
00:17:16.509 "qid": 0,
00:17:16.509 "state": "enabled",
00:17:16.509 "thread": "nvmf_tgt_poll_group_000",
00:17:16.509 "listen_address": {
00:17:16.509 "trtype": "TCP",
00:17:16.509 "adrfam": "IPv4",
00:17:16.509 "traddr": "10.0.0.2",
00:17:16.509 "trsvcid": "4420"
00:17:16.509 },
00:17:16.509 "peer_address": {
00:17:16.509 "trtype": "TCP",
00:17:16.509 "adrfam": "IPv4",
00:17:16.509 "traddr": "10.0.0.1",
00:17:16.509 "trsvcid": "60062"
00:17:16.509 },
00:17:16.509 "auth": {
00:17:16.509 "state": "completed",
00:17:16.509 "digest": "sha512",
00:17:16.509 "dhgroup": "null"
00:17:16.509 }
00:17:16.509 }
00:17:16.509 ]'
00:17:16.509 22:33:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:17:16.509 22:33:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:17:16.509 22:33:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:17:16.509 22:33:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]]
00:17:16.509 22:33:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:17:16.509 22:33:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:17:16.509 22:33:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:17:16.509 22:33:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:17:16.768 22:33:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:Njc3OWQ0Y2M3ZjNjMmQxYzVlMjNhNWVjYTA4NTQzMDkwYzAwZTExNmE4ZTE0ODg2gtNdCQ==: --dhchap-ctrl-secret DHHC-1:01:YzIwZjU0YTM4NWY2NmU3MDAwOGU0NTY5Yjg3ZDc4NGUMHgTg:
00:17:17.338 22:33:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:17:17.338 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:17:17.338 22:33:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
00:17:17.338 22:33:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:17:17.338 22:33:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:17.338 22:33:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:17:17.338 22:33:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:17:17.338 22:33:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null
00:17:17.338 22:33:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null
00:17:17.338 22:33:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 3
00:17:17.338 22:33:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:17:17.338 22:33:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512
00:17:17.338 22:33:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null
00:17:17.338 22:33:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3
00:17:17.338 22:33:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:17:17.338 22:33:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3
00:17:17.338 22:33:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:17:17.338 22:33:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:17.597 22:33:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:17:17.597 22:33:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:17:17.597 22:33:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:17:17.597
00:17:17.597 22:33:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:17:17.597 22:33:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:17:17.597 22:33:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:17:17.857 22:33:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:17:17.857 22:33:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:17:17.857 22:33:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:17:17.857 22:33:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:17.857 22:33:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:17:17.857 22:33:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:17:17.857 {
00:17:17.857 "cntlid": 103,
00:17:17.857 "qid": 0,
00:17:17.857 "state": "enabled",
00:17:17.857 "thread": "nvmf_tgt_poll_group_000",
00:17:17.857 "listen_address": {
00:17:17.857 "trtype": "TCP",
00:17:17.857 "adrfam": "IPv4",
00:17:17.857 "traddr": "10.0.0.2",
00:17:17.857 "trsvcid": "4420"
00:17:17.857 },
00:17:17.857 "peer_address": {
00:17:17.857 "trtype": "TCP",
00:17:17.857 "adrfam": "IPv4",
00:17:17.857 "traddr": "10.0.0.1",
00:17:17.857 "trsvcid": "60092"
00:17:17.857 },
00:17:17.857 "auth": {
00:17:17.857 "state": "completed",
00:17:17.857 "digest": "sha512",
00:17:17.857 "dhgroup": "null"
00:17:17.857 }
00:17:17.857 }
00:17:17.857 ]'
00:17:17.857 22:33:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:17:17.857 22:33:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:17:17.857 22:33:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:17:18.116 22:33:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]]
00:17:18.116 22:33:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:17:18.116 22:33:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:17:18.116 22:33:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:17:18.116 22:33:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
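Two RPC wrappers recur in every record: hostrpc, whose expansion at target/auth.sh@31 always targets rpc.py -s /var/tmp/host.sock (the host-side bdev application), and rpc_cmd, the harness helper that talks to the nvmf target's default socket. Their definitions live in target/auth.sh and the common test harness, outside this excerpt; they are presumably close to the following (a sketch; $rootdir is the usual SPDK test-root variable, an assumption here):

    # Host-side application RPC, as expanded throughout this log.
    hostrpc() { "$rootdir/scripts/rpc.py" -s /var/tmp/host.sock "$@"; }
    # Target-side RPC; the harness routes this to the target's default socket.
    rpc_cmd() { "$rootdir/scripts/rpc.py" "$@"; }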
-t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:NmQ3NGU4ODcyMTlhMDAxZDA4MTA1YTk2MDMzNTdlYjEzNTI4YzJlYjhlZjBiZmVkYjRmOGIwNTI3NzA3N2U4NOUOBPk=: 00:17:18.684 22:33:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:18.684 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:18.684 22:33:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:18.684 22:33:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:18.684 22:33:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:18.684 22:33:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:18.684 22:33:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:18.684 22:33:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:18.684 22:33:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:18.684 22:33:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:18.943 22:33:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 0 00:17:18.943 22:33:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:18.943 22:33:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:18.943 22:33:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:18.943 22:33:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:18.943 22:33:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:18.943 22:33:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:18.943 22:33:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:18.943 22:33:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:18.943 22:33:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:18.943 22:33:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:18.943 22:33:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:19.202 00:17:19.202 22:33:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:19.202 22:33:43 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:19.202 22:33:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:19.463 22:33:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:19.463 22:33:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:19.463 22:33:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:19.463 22:33:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:19.463 22:33:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:19.463 22:33:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:19.463 { 00:17:19.463 "cntlid": 105, 00:17:19.463 "qid": 0, 00:17:19.463 "state": "enabled", 00:17:19.463 "thread": "nvmf_tgt_poll_group_000", 00:17:19.463 "listen_address": { 00:17:19.463 "trtype": "TCP", 00:17:19.463 "adrfam": "IPv4", 00:17:19.463 "traddr": "10.0.0.2", 00:17:19.463 "trsvcid": "4420" 00:17:19.463 }, 00:17:19.463 "peer_address": { 00:17:19.463 "trtype": "TCP", 00:17:19.463 "adrfam": "IPv4", 00:17:19.463 "traddr": "10.0.0.1", 00:17:19.463 "trsvcid": "60120" 00:17:19.463 }, 00:17:19.463 "auth": { 00:17:19.463 "state": "completed", 00:17:19.463 "digest": "sha512", 00:17:19.463 "dhgroup": "ffdhe2048" 00:17:19.463 } 00:17:19.463 } 00:17:19.463 ]' 00:17:19.463 22:33:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:19.463 22:33:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:19.463 22:33:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:19.463 22:33:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:19.463 22:33:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:19.463 22:33:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:19.463 22:33:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:19.463 22:33:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:19.721 22:33:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:Yzg0MGYwMjc1NmM3YmE4MmNmZWNkNGY5NDMxZGFlMzY4NjA1OWZkZTVkMGJkZjhlTjKeaQ==: --dhchap-ctrl-secret DHHC-1:03:OWRjYjY4OTViMWNhNjI2NTRmYTc5MjIwMjUyMjM1MTYyZmZkYzM2ZjIxZTE3NTBiYjBhOWQyYzUxNDE2NzI0ZOpAtHY=: 00:17:20.289 22:33:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:20.289 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:20.289 22:33:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:20.289 22:33:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:20.289 22:33:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
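
[Annotation] The pass that just finished above is one full iteration of the auth matrix: digest sha512, DH group ffdhe2048, key pair 0. Every iteration in this excerpt repeats the same five steps, so below is a compact sketch of one pass assembled only from commands that appear verbatim in the trace. The rpc.py path, host socket, addresses, and subsystem/host UUID are this run's values; HOST_RPC/TGT_RPC are illustrative shorthand, and the target-side socket for rpc_cmd is assumed to be the SPDK default.

# Two RPC endpoints are in play: hostrpc drives the host-side bdev_nvme
# app on /var/tmp/host.sock, rpc_cmd drives the nvmf target (default
# socket assumed here).
HOST_RPC="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock"
TGT_RPC="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py"

# 1. Pin the host to the digest/dhgroup combination under test.
$HOST_RPC bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048

# 2. Authorize the host NQN on the subsystem with this pass's key pair.
$TGT_RPC nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
    nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0

# 3. Attach a controller from the host; DH-HMAC-CHAP runs during connect.
$HOST_RPC bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
    -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 \
    -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0

# 4. Assert the controller exists and the qpair negotiated what we asked for.
$HOST_RPC bdev_nvme_get_controllers | jq -r '.[].name'   # expect: nvme0
$TGT_RPC nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 \
    | jq -r '.[0].auth.digest, .[0].auth.dhgroup, .[0].auth.state'

# 5. Tear down before the next (digest, dhgroup, keyid) pass.
$HOST_RPC bdev_nvme_detach_controller nvme0

Within this excerpt the digest stays sha512 throughout while the DH group steps from null through ffdhe2048, ffdhe3072, and ffdhe4096, cycling key IDs 0-3 inside each group.
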
00:17:20.289 22:33:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:20.289 22:33:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:20.289 22:33:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:20.289 22:33:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:20.548 22:33:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 1 00:17:20.548 22:33:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:20.548 22:33:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:20.548 22:33:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:20.548 22:33:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:20.548 22:33:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:20.548 22:33:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:20.548 22:33:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:20.548 22:33:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:20.548 22:33:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:20.548 22:33:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:20.548 22:33:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:20.548 00:17:20.807 22:33:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:20.807 22:33:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:20.807 22:33:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:20.807 22:33:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:20.807 22:33:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:20.807 22:33:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:20.807 22:33:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:20.807 22:33:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:20.807 22:33:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:20.807 { 00:17:20.807 "cntlid": 107, 00:17:20.807 "qid": 0, 00:17:20.807 "state": "enabled", 00:17:20.807 "thread": 
"nvmf_tgt_poll_group_000", 00:17:20.807 "listen_address": { 00:17:20.807 "trtype": "TCP", 00:17:20.807 "adrfam": "IPv4", 00:17:20.807 "traddr": "10.0.0.2", 00:17:20.807 "trsvcid": "4420" 00:17:20.807 }, 00:17:20.807 "peer_address": { 00:17:20.807 "trtype": "TCP", 00:17:20.807 "adrfam": "IPv4", 00:17:20.807 "traddr": "10.0.0.1", 00:17:20.807 "trsvcid": "60142" 00:17:20.807 }, 00:17:20.807 "auth": { 00:17:20.807 "state": "completed", 00:17:20.807 "digest": "sha512", 00:17:20.807 "dhgroup": "ffdhe2048" 00:17:20.807 } 00:17:20.807 } 00:17:20.807 ]' 00:17:20.807 22:33:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:20.807 22:33:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:20.807 22:33:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:21.066 22:33:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:21.066 22:33:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:21.066 22:33:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:21.066 22:33:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:21.066 22:33:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:21.066 22:33:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:YTg5ZDFiN2NhMjEyZTY0YzU5YTBiODg5ZWU5ZDgzM2HK8D1h: --dhchap-ctrl-secret DHHC-1:02:YTRlZjY2MmRhNjI1MzdjODFmNGI4MjJkNmRlMDgwOTgwZDE4ODVkMzQ1NWJjN2VlAeePrQ==: 00:17:21.634 22:33:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:21.634 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:21.634 22:33:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:21.634 22:33:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:21.634 22:33:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:21.634 22:33:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:21.634 22:33:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:21.634 22:33:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:21.634 22:33:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:21.892 22:33:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 2 00:17:21.892 22:33:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:21.892 22:33:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:21.892 22:33:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:21.892 22:33:45 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:21.892 22:33:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:21.892 22:33:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:21.892 22:33:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:21.892 22:33:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:21.892 22:33:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:21.892 22:33:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:21.892 22:33:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:22.151 00:17:22.151 22:33:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:22.151 22:33:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:22.151 22:33:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:22.410 22:33:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:22.410 22:33:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:22.410 22:33:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:22.410 22:33:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:22.410 22:33:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:22.410 22:33:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:22.410 { 00:17:22.410 "cntlid": 109, 00:17:22.410 "qid": 0, 00:17:22.410 "state": "enabled", 00:17:22.410 "thread": "nvmf_tgt_poll_group_000", 00:17:22.410 "listen_address": { 00:17:22.410 "trtype": "TCP", 00:17:22.410 "adrfam": "IPv4", 00:17:22.410 "traddr": "10.0.0.2", 00:17:22.410 "trsvcid": "4420" 00:17:22.410 }, 00:17:22.410 "peer_address": { 00:17:22.410 "trtype": "TCP", 00:17:22.410 "adrfam": "IPv4", 00:17:22.410 "traddr": "10.0.0.1", 00:17:22.410 "trsvcid": "60164" 00:17:22.410 }, 00:17:22.410 "auth": { 00:17:22.410 "state": "completed", 00:17:22.410 "digest": "sha512", 00:17:22.410 "dhgroup": "ffdhe2048" 00:17:22.410 } 00:17:22.410 } 00:17:22.410 ]' 00:17:22.410 22:33:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:22.410 22:33:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:22.410 22:33:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:22.410 22:33:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:22.410 22:33:46 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:22.410 22:33:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:22.410 22:33:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:22.410 22:33:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:22.669 22:33:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:Njc3OWQ0Y2M3ZjNjMmQxYzVlMjNhNWVjYTA4NTQzMDkwYzAwZTExNmE4ZTE0ODg2gtNdCQ==: --dhchap-ctrl-secret DHHC-1:01:YzIwZjU0YTM4NWY2NmU3MDAwOGU0NTY5Yjg3ZDc4NGUMHgTg: 00:17:23.238 22:33:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:23.238 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:23.238 22:33:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:23.238 22:33:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:23.238 22:33:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:23.238 22:33:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:23.238 22:33:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:23.238 22:33:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:23.238 22:33:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:23.498 22:33:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 3 00:17:23.498 22:33:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:23.498 22:33:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:23.498 22:33:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:23.498 22:33:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:23.498 22:33:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:23.498 22:33:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:17:23.498 22:33:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:23.498 22:33:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:23.498 22:33:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:23.498 22:33:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:23.498 22:33:47 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:23.757 00:17:23.757 22:33:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:23.757 22:33:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:23.757 22:33:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:23.757 22:33:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:24.015 22:33:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:24.015 22:33:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:24.015 22:33:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:24.015 22:33:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:24.015 22:33:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:24.015 { 00:17:24.015 "cntlid": 111, 00:17:24.015 "qid": 0, 00:17:24.015 "state": "enabled", 00:17:24.015 "thread": "nvmf_tgt_poll_group_000", 00:17:24.015 "listen_address": { 00:17:24.015 "trtype": "TCP", 00:17:24.015 "adrfam": "IPv4", 00:17:24.015 "traddr": "10.0.0.2", 00:17:24.015 "trsvcid": "4420" 00:17:24.015 }, 00:17:24.015 "peer_address": { 00:17:24.015 "trtype": "TCP", 00:17:24.015 "adrfam": "IPv4", 00:17:24.015 "traddr": "10.0.0.1", 00:17:24.015 "trsvcid": "33498" 00:17:24.015 }, 00:17:24.015 "auth": { 00:17:24.015 "state": "completed", 00:17:24.015 "digest": "sha512", 00:17:24.015 "dhgroup": "ffdhe2048" 00:17:24.015 } 00:17:24.015 } 00:17:24.015 ]' 00:17:24.015 22:33:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:24.015 22:33:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:24.015 22:33:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:24.015 22:33:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:24.015 22:33:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:24.015 22:33:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:24.015 22:33:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:24.015 22:33:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:24.273 22:33:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:NmQ3NGU4ODcyMTlhMDAxZDA4MTA1YTk2MDMzNTdlYjEzNTI4YzJlYjhlZjBiZmVkYjRmOGIwNTI3NzA3N2U4NOUOBPk=: 00:17:24.840 22:33:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:24.840 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:24.840 22:33:48 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:24.840 22:33:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:24.840 22:33:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:24.840 22:33:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:24.840 22:33:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:24.840 22:33:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:24.840 22:33:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:24.840 22:33:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:24.840 22:33:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 0 00:17:24.840 22:33:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:24.840 22:33:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:24.840 22:33:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:24.840 22:33:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:24.840 22:33:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:24.840 22:33:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:24.840 22:33:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:24.840 22:33:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:24.840 22:33:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:24.840 22:33:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:24.841 22:33:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:25.098 00:17:25.098 22:33:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:25.098 22:33:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:25.098 22:33:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:25.357 22:33:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:25.357 22:33:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:25.357 22:33:49 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:17:25.357 22:33:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:25.357 22:33:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:25.357 22:33:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:25.357 { 00:17:25.357 "cntlid": 113, 00:17:25.357 "qid": 0, 00:17:25.357 "state": "enabled", 00:17:25.357 "thread": "nvmf_tgt_poll_group_000", 00:17:25.357 "listen_address": { 00:17:25.357 "trtype": "TCP", 00:17:25.357 "adrfam": "IPv4", 00:17:25.357 "traddr": "10.0.0.2", 00:17:25.357 "trsvcid": "4420" 00:17:25.357 }, 00:17:25.357 "peer_address": { 00:17:25.357 "trtype": "TCP", 00:17:25.357 "adrfam": "IPv4", 00:17:25.357 "traddr": "10.0.0.1", 00:17:25.357 "trsvcid": "33530" 00:17:25.357 }, 00:17:25.357 "auth": { 00:17:25.357 "state": "completed", 00:17:25.357 "digest": "sha512", 00:17:25.357 "dhgroup": "ffdhe3072" 00:17:25.357 } 00:17:25.357 } 00:17:25.357 ]' 00:17:25.357 22:33:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:25.357 22:33:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:25.357 22:33:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:25.357 22:33:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:25.357 22:33:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:25.615 22:33:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:25.615 22:33:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:25.615 22:33:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:25.615 22:33:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:Yzg0MGYwMjc1NmM3YmE4MmNmZWNkNGY5NDMxZGFlMzY4NjA1OWZkZTVkMGJkZjhlTjKeaQ==: --dhchap-ctrl-secret DHHC-1:03:OWRjYjY4OTViMWNhNjI2NTRmYTc5MjIwMjUyMjM1MTYyZmZkYzM2ZjIxZTE3NTBiYjBhOWQyYzUxNDE2NzI0ZOpAtHY=: 00:17:26.180 22:33:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:26.180 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:26.180 22:33:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:26.180 22:33:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:26.180 22:33:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:26.180 22:33:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:26.180 22:33:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:26.180 22:33:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:26.180 22:33:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:26.438 22:33:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 1 00:17:26.438 22:33:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:26.438 22:33:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:26.438 22:33:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:26.438 22:33:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:26.438 22:33:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:26.438 22:33:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:26.438 22:33:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:26.438 22:33:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:26.438 22:33:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:26.438 22:33:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:26.438 22:33:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:26.706 00:17:26.706 22:33:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:26.706 22:33:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:26.706 22:33:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:26.967 22:33:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:26.967 22:33:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:26.967 22:33:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:26.968 22:33:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:26.968 22:33:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:26.968 22:33:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:26.968 { 00:17:26.968 "cntlid": 115, 00:17:26.968 "qid": 0, 00:17:26.968 "state": "enabled", 00:17:26.968 "thread": "nvmf_tgt_poll_group_000", 00:17:26.968 "listen_address": { 00:17:26.968 "trtype": "TCP", 00:17:26.968 "adrfam": "IPv4", 00:17:26.968 "traddr": "10.0.0.2", 00:17:26.968 "trsvcid": "4420" 00:17:26.968 }, 00:17:26.968 "peer_address": { 00:17:26.968 "trtype": "TCP", 00:17:26.968 "adrfam": "IPv4", 00:17:26.968 "traddr": "10.0.0.1", 00:17:26.968 "trsvcid": "33550" 00:17:26.968 }, 00:17:26.968 "auth": { 00:17:26.968 "state": "completed", 00:17:26.968 "digest": "sha512", 00:17:26.968 "dhgroup": "ffdhe3072" 00:17:26.968 } 00:17:26.968 } 
00:17:26.968 ]' 00:17:26.968 22:33:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:26.968 22:33:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:26.968 22:33:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:26.968 22:33:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:26.968 22:33:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:26.968 22:33:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:26.968 22:33:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:26.968 22:33:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:27.255 22:33:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:YTg5ZDFiN2NhMjEyZTY0YzU5YTBiODg5ZWU5ZDgzM2HK8D1h: --dhchap-ctrl-secret DHHC-1:02:YTRlZjY2MmRhNjI1MzdjODFmNGI4MjJkNmRlMDgwOTgwZDE4ODVkMzQ1NWJjN2VlAeePrQ==: 00:17:27.823 22:33:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:27.823 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:27.823 22:33:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:27.823 22:33:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:27.823 22:33:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:27.823 22:33:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:27.823 22:33:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:27.823 22:33:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:27.823 22:33:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:27.823 22:33:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 2 00:17:27.823 22:33:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:27.823 22:33:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:27.823 22:33:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:27.823 22:33:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:27.823 22:33:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:27.823 22:33:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:27.823 22:33:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:27.823 22:33:51 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:27.823 22:33:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:27.823 22:33:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:27.823 22:33:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:28.081 00:17:28.081 22:33:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:28.081 22:33:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:28.082 22:33:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:28.341 22:33:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:28.341 22:33:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:28.341 22:33:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:28.341 22:33:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:28.341 22:33:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:28.341 22:33:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:28.341 { 00:17:28.341 "cntlid": 117, 00:17:28.341 "qid": 0, 00:17:28.341 "state": "enabled", 00:17:28.341 "thread": "nvmf_tgt_poll_group_000", 00:17:28.341 "listen_address": { 00:17:28.341 "trtype": "TCP", 00:17:28.341 "adrfam": "IPv4", 00:17:28.341 "traddr": "10.0.0.2", 00:17:28.341 "trsvcid": "4420" 00:17:28.341 }, 00:17:28.341 "peer_address": { 00:17:28.341 "trtype": "TCP", 00:17:28.341 "adrfam": "IPv4", 00:17:28.341 "traddr": "10.0.0.1", 00:17:28.341 "trsvcid": "33576" 00:17:28.341 }, 00:17:28.341 "auth": { 00:17:28.341 "state": "completed", 00:17:28.341 "digest": "sha512", 00:17:28.341 "dhgroup": "ffdhe3072" 00:17:28.341 } 00:17:28.341 } 00:17:28.341 ]' 00:17:28.341 22:33:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:28.341 22:33:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:28.341 22:33:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:28.600 22:33:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:28.600 22:33:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:28.600 22:33:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:28.600 22:33:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:28.600 22:33:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:28.600 22:33:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t 
tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:Njc3OWQ0Y2M3ZjNjMmQxYzVlMjNhNWVjYTA4NTQzMDkwYzAwZTExNmE4ZTE0ODg2gtNdCQ==: --dhchap-ctrl-secret DHHC-1:01:YzIwZjU0YTM4NWY2NmU3MDAwOGU0NTY5Yjg3ZDc4NGUMHgTg: 00:17:29.168 22:33:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:29.168 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:29.168 22:33:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:29.168 22:33:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:29.168 22:33:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:29.425 22:33:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:29.425 22:33:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:29.425 22:33:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:29.425 22:33:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:29.425 22:33:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 3 00:17:29.425 22:33:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:29.425 22:33:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:29.425 22:33:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:29.425 22:33:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:29.425 22:33:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:29.425 22:33:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:17:29.425 22:33:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:29.425 22:33:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:29.425 22:33:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:29.425 22:33:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:29.425 22:33:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:29.682 00:17:29.682 22:33:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:29.682 22:33:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:29.682 22:33:53 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:29.941 22:33:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:29.941 22:33:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:29.941 22:33:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:29.941 22:33:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:29.941 22:33:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:29.941 22:33:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:29.941 { 00:17:29.941 "cntlid": 119, 00:17:29.941 "qid": 0, 00:17:29.941 "state": "enabled", 00:17:29.941 "thread": "nvmf_tgt_poll_group_000", 00:17:29.941 "listen_address": { 00:17:29.941 "trtype": "TCP", 00:17:29.941 "adrfam": "IPv4", 00:17:29.941 "traddr": "10.0.0.2", 00:17:29.941 "trsvcid": "4420" 00:17:29.941 }, 00:17:29.941 "peer_address": { 00:17:29.941 "trtype": "TCP", 00:17:29.941 "adrfam": "IPv4", 00:17:29.941 "traddr": "10.0.0.1", 00:17:29.941 "trsvcid": "33600" 00:17:29.941 }, 00:17:29.941 "auth": { 00:17:29.941 "state": "completed", 00:17:29.941 "digest": "sha512", 00:17:29.941 "dhgroup": "ffdhe3072" 00:17:29.941 } 00:17:29.941 } 00:17:29.941 ]' 00:17:29.941 22:33:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:29.941 22:33:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:29.941 22:33:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:29.941 22:33:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:29.941 22:33:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:30.200 22:33:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:30.200 22:33:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:30.200 22:33:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:30.200 22:33:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:NmQ3NGU4ODcyMTlhMDAxZDA4MTA1YTk2MDMzNTdlYjEzNTI4YzJlYjhlZjBiZmVkYjRmOGIwNTI3NzA3N2U4NOUOBPk=: 00:17:30.766 22:33:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:30.766 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:30.766 22:33:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:30.766 22:33:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:30.766 22:33:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:30.766 22:33:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:30.766 22:33:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:30.766 22:33:54 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:30.766 22:33:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:30.766 22:33:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:31.024 22:33:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 0 00:17:31.024 22:33:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:31.024 22:33:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:31.024 22:33:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:31.024 22:33:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:31.024 22:33:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:31.024 22:33:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:31.024 22:33:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:31.024 22:33:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:31.024 22:33:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:31.024 22:33:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:31.024 22:33:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:31.283 00:17:31.283 22:33:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:31.283 22:33:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:31.283 22:33:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:31.542 22:33:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:31.542 22:33:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:31.542 22:33:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:31.542 22:33:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:31.542 22:33:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:31.542 22:33:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:31.542 { 00:17:31.542 "cntlid": 121, 00:17:31.542 "qid": 0, 00:17:31.542 "state": "enabled", 00:17:31.542 "thread": "nvmf_tgt_poll_group_000", 00:17:31.542 "listen_address": { 00:17:31.542 "trtype": "TCP", 00:17:31.542 "adrfam": "IPv4", 
00:17:31.542 "traddr": "10.0.0.2", 00:17:31.542 "trsvcid": "4420" 00:17:31.542 }, 00:17:31.542 "peer_address": { 00:17:31.542 "trtype": "TCP", 00:17:31.542 "adrfam": "IPv4", 00:17:31.542 "traddr": "10.0.0.1", 00:17:31.542 "trsvcid": "33622" 00:17:31.542 }, 00:17:31.542 "auth": { 00:17:31.542 "state": "completed", 00:17:31.542 "digest": "sha512", 00:17:31.542 "dhgroup": "ffdhe4096" 00:17:31.542 } 00:17:31.542 } 00:17:31.542 ]' 00:17:31.542 22:33:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:31.542 22:33:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:31.542 22:33:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:31.542 22:33:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:31.542 22:33:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:31.542 22:33:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:31.542 22:33:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:31.542 22:33:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:31.801 22:33:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:Yzg0MGYwMjc1NmM3YmE4MmNmZWNkNGY5NDMxZGFlMzY4NjA1OWZkZTVkMGJkZjhlTjKeaQ==: --dhchap-ctrl-secret DHHC-1:03:OWRjYjY4OTViMWNhNjI2NTRmYTc5MjIwMjUyMjM1MTYyZmZkYzM2ZjIxZTE3NTBiYjBhOWQyYzUxNDE2NzI0ZOpAtHY=: 00:17:32.366 22:33:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:32.366 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:32.367 22:33:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:32.367 22:33:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:32.367 22:33:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:32.367 22:33:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:32.367 22:33:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:32.367 22:33:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:32.367 22:33:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:32.626 22:33:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 1 00:17:32.626 22:33:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:32.626 22:33:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:32.626 22:33:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:32.626 22:33:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:32.626 22:33:56 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:32.626 22:33:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:32.626 22:33:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:32.626 22:33:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:32.626 22:33:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:32.626 22:33:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:32.626 22:33:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:32.884 00:17:32.884 22:33:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:32.884 22:33:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:32.884 22:33:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:32.884 22:33:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:32.884 22:33:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:32.884 22:33:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:32.884 22:33:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:32.884 22:33:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:32.885 22:33:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:32.885 { 00:17:32.885 "cntlid": 123, 00:17:32.885 "qid": 0, 00:17:32.885 "state": "enabled", 00:17:32.885 "thread": "nvmf_tgt_poll_group_000", 00:17:32.885 "listen_address": { 00:17:32.885 "trtype": "TCP", 00:17:32.885 "adrfam": "IPv4", 00:17:32.885 "traddr": "10.0.0.2", 00:17:32.885 "trsvcid": "4420" 00:17:32.885 }, 00:17:32.885 "peer_address": { 00:17:32.885 "trtype": "TCP", 00:17:32.885 "adrfam": "IPv4", 00:17:32.885 "traddr": "10.0.0.1", 00:17:32.885 "trsvcid": "33640" 00:17:32.885 }, 00:17:32.885 "auth": { 00:17:32.885 "state": "completed", 00:17:32.885 "digest": "sha512", 00:17:32.885 "dhgroup": "ffdhe4096" 00:17:32.885 } 00:17:32.885 } 00:17:32.885 ]' 00:17:32.885 22:33:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:33.143 22:33:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:33.143 22:33:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:33.143 22:33:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:33.143 22:33:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:33.143 22:33:56 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:33.143 22:33:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:33.143 22:33:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:33.403 22:33:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:YTg5ZDFiN2NhMjEyZTY0YzU5YTBiODg5ZWU5ZDgzM2HK8D1h: --dhchap-ctrl-secret DHHC-1:02:YTRlZjY2MmRhNjI1MzdjODFmNGI4MjJkNmRlMDgwOTgwZDE4ODVkMzQ1NWJjN2VlAeePrQ==: 00:17:33.971 22:33:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:33.971 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:33.971 22:33:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:33.971 22:33:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:33.971 22:33:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:33.971 22:33:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:33.971 22:33:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:33.971 22:33:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:33.971 22:33:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:33.971 22:33:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 2 00:17:33.971 22:33:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:33.971 22:33:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:33.971 22:33:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:33.971 22:33:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:33.971 22:33:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:33.971 22:33:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:33.971 22:33:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:33.971 22:33:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:33.971 22:33:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:33.971 22:33:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:33.971 22:33:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:34.230 00:17:34.230 22:33:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:34.230 22:33:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:34.230 22:33:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:34.489 22:33:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:34.489 22:33:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:34.489 22:33:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:34.489 22:33:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:34.489 22:33:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:34.489 22:33:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:34.489 { 00:17:34.489 "cntlid": 125, 00:17:34.489 "qid": 0, 00:17:34.489 "state": "enabled", 00:17:34.489 "thread": "nvmf_tgt_poll_group_000", 00:17:34.489 "listen_address": { 00:17:34.489 "trtype": "TCP", 00:17:34.489 "adrfam": "IPv4", 00:17:34.489 "traddr": "10.0.0.2", 00:17:34.489 "trsvcid": "4420" 00:17:34.489 }, 00:17:34.489 "peer_address": { 00:17:34.489 "trtype": "TCP", 00:17:34.489 "adrfam": "IPv4", 00:17:34.489 "traddr": "10.0.0.1", 00:17:34.489 "trsvcid": "49894" 00:17:34.489 }, 00:17:34.489 "auth": { 00:17:34.489 "state": "completed", 00:17:34.489 "digest": "sha512", 00:17:34.489 "dhgroup": "ffdhe4096" 00:17:34.489 } 00:17:34.489 } 00:17:34.489 ]' 00:17:34.489 22:33:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:34.489 22:33:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:34.489 22:33:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:34.489 22:33:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:34.489 22:33:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:34.489 22:33:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:34.489 22:33:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:34.489 22:33:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:34.748 22:33:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:Njc3OWQ0Y2M3ZjNjMmQxYzVlMjNhNWVjYTA4NTQzMDkwYzAwZTExNmE4ZTE0ODg2gtNdCQ==: --dhchap-ctrl-secret DHHC-1:01:YzIwZjU0YTM4NWY2NmU3MDAwOGU0NTY5Yjg3ZDc4NGUMHgTg: 00:17:35.315 22:33:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:35.315 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
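At this point the sha512/ffdhe4096 pass for key2 has completed its full round trip. Every pass through the key loop follows the same RPC sequence; condensed into a sketch below, where rpc_cmd addresses the nvmf target and hostrpc the host application on /var/tmp/host.sock (both wrapper functions from the trace), and NQN/HOSTNQN abbreviate the subsystem and host NQNs used throughout:

  # One connect_authenticate round, condensed from the trace above.
  hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
  rpc_cmd nvmf_subsystem_add_host "$NQN" "$HOSTNQN" --dhchap-key key2 --dhchap-ctrlr-key ckey2
  hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
      -q "$HOSTNQN" -n "$NQN" --dhchap-key key2 --dhchap-ctrlr-key ckey2
  rpc_cmd nvmf_subsystem_get_qpairs "$NQN"   # auth state must read "completed"
  hostrpc bdev_nvme_detach_controller nvme0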
00:17:35.315 22:33:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:35.315 22:33:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:35.315 22:33:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:35.315 22:33:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:35.315 22:33:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:35.315 22:33:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:35.315 22:33:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:35.574 22:33:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 3 00:17:35.574 22:33:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:35.574 22:33:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:35.574 22:33:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:35.574 22:33:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:35.574 22:33:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:35.574 22:33:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:17:35.574 22:33:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:35.574 22:33:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:35.574 22:33:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:35.574 22:33:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:35.574 22:33:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:35.833 00:17:35.833 22:33:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:35.833 22:33:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:35.833 22:33:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:35.833 22:33:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:36.101 22:33:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:36.101 22:33:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:36.101 22:33:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 
-- # set +x 00:17:36.101 22:33:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:36.101 22:33:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:36.101 { 00:17:36.101 "cntlid": 127, 00:17:36.101 "qid": 0, 00:17:36.101 "state": "enabled", 00:17:36.101 "thread": "nvmf_tgt_poll_group_000", 00:17:36.101 "listen_address": { 00:17:36.101 "trtype": "TCP", 00:17:36.101 "adrfam": "IPv4", 00:17:36.101 "traddr": "10.0.0.2", 00:17:36.101 "trsvcid": "4420" 00:17:36.101 }, 00:17:36.101 "peer_address": { 00:17:36.101 "trtype": "TCP", 00:17:36.101 "adrfam": "IPv4", 00:17:36.101 "traddr": "10.0.0.1", 00:17:36.101 "trsvcid": "49924" 00:17:36.101 }, 00:17:36.101 "auth": { 00:17:36.101 "state": "completed", 00:17:36.101 "digest": "sha512", 00:17:36.101 "dhgroup": "ffdhe4096" 00:17:36.101 } 00:17:36.101 } 00:17:36.101 ]' 00:17:36.101 22:33:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:36.101 22:33:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:36.101 22:33:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:36.101 22:33:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:36.101 22:33:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:36.101 22:33:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:36.101 22:33:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:36.101 22:33:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:36.364 22:34:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:NmQ3NGU4ODcyMTlhMDAxZDA4MTA1YTk2MDMzNTdlYjEzNTI4YzJlYjhlZjBiZmVkYjRmOGIwNTI3NzA3N2U4NOUOBPk=: 00:17:36.931 22:34:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:36.931 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:36.931 22:34:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:36.931 22:34:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:36.931 22:34:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:36.931 22:34:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:36.931 22:34:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:36.931 22:34:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:36.931 22:34:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:36.931 22:34:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:36.931 22:34:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # 
connect_authenticate sha512 ffdhe6144 0 00:17:36.931 22:34:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:36.931 22:34:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:36.931 22:34:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:17:36.931 22:34:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:36.931 22:34:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:36.931 22:34:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:36.931 22:34:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:36.931 22:34:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:36.931 22:34:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:36.931 22:34:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:36.931 22:34:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:37.498 00:17:37.498 22:34:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:37.498 22:34:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:37.498 22:34:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:37.498 22:34:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:37.498 22:34:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:37.498 22:34:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:37.498 22:34:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:37.498 22:34:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:37.499 22:34:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:37.499 { 00:17:37.499 "cntlid": 129, 00:17:37.499 "qid": 0, 00:17:37.499 "state": "enabled", 00:17:37.499 "thread": "nvmf_tgt_poll_group_000", 00:17:37.499 "listen_address": { 00:17:37.499 "trtype": "TCP", 00:17:37.499 "adrfam": "IPv4", 00:17:37.499 "traddr": "10.0.0.2", 00:17:37.499 "trsvcid": "4420" 00:17:37.499 }, 00:17:37.499 "peer_address": { 00:17:37.499 "trtype": "TCP", 00:17:37.499 "adrfam": "IPv4", 00:17:37.499 "traddr": "10.0.0.1", 00:17:37.499 "trsvcid": "49942" 00:17:37.499 }, 00:17:37.499 "auth": { 00:17:37.499 "state": "completed", 00:17:37.499 "digest": "sha512", 00:17:37.499 "dhgroup": "ffdhe6144" 00:17:37.499 } 00:17:37.499 } 00:17:37.499 ]' 00:17:37.499 22:34:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:37.499 22:34:01 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:37.499 22:34:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:37.757 22:34:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:37.757 22:34:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:37.757 22:34:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:37.757 22:34:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:37.757 22:34:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:37.757 22:34:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:Yzg0MGYwMjc1NmM3YmE4MmNmZWNkNGY5NDMxZGFlMzY4NjA1OWZkZTVkMGJkZjhlTjKeaQ==: --dhchap-ctrl-secret DHHC-1:03:OWRjYjY4OTViMWNhNjI2NTRmYTc5MjIwMjUyMjM1MTYyZmZkYzM2ZjIxZTE3NTBiYjBhOWQyYzUxNDE2NzI0ZOpAtHY=: 00:17:38.326 22:34:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:38.326 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:38.326 22:34:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:38.326 22:34:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:38.326 22:34:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:38.585 22:34:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:38.585 22:34:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:38.585 22:34:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:38.585 22:34:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:38.585 22:34:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 1 00:17:38.585 22:34:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:38.585 22:34:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:38.585 22:34:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:17:38.585 22:34:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:38.585 22:34:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:38.585 22:34:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:38.585 22:34:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:38.585 22:34:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:38.586 22:34:02 
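The three jq probes above (auth.digest, auth.dhgroup, auth.state) each feed a separate [[ ]] comparison. The same verification could be collapsed into a single exit-status check; a sketch, assuming the same single-qpair output shape seen in the dumps:

  # jq -e exits non-zero unless the filter yields a truthy value, so this
  # one-liner fails if any of the three negotiated fields is wrong.
  rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 | \
      jq -e '.[0].auth | .digest == "sha512" and .dhgroup == "ffdhe6144" and .state == "completed"'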
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:38.586 22:34:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:38.586 22:34:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:38.845 00:17:39.104 22:34:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:39.104 22:34:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:39.104 22:34:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:39.104 22:34:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:39.104 22:34:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:39.104 22:34:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:39.104 22:34:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:39.104 22:34:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:39.104 22:34:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:39.104 { 00:17:39.104 "cntlid": 131, 00:17:39.104 "qid": 0, 00:17:39.104 "state": "enabled", 00:17:39.104 "thread": "nvmf_tgt_poll_group_000", 00:17:39.104 "listen_address": { 00:17:39.104 "trtype": "TCP", 00:17:39.104 "adrfam": "IPv4", 00:17:39.104 "traddr": "10.0.0.2", 00:17:39.104 "trsvcid": "4420" 00:17:39.104 }, 00:17:39.104 "peer_address": { 00:17:39.104 "trtype": "TCP", 00:17:39.104 "adrfam": "IPv4", 00:17:39.104 "traddr": "10.0.0.1", 00:17:39.104 "trsvcid": "49982" 00:17:39.104 }, 00:17:39.104 "auth": { 00:17:39.104 "state": "completed", 00:17:39.104 "digest": "sha512", 00:17:39.104 "dhgroup": "ffdhe6144" 00:17:39.104 } 00:17:39.104 } 00:17:39.104 ]' 00:17:39.104 22:34:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:39.104 22:34:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:39.104 22:34:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:39.363 22:34:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:39.363 22:34:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:39.363 22:34:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:39.363 22:34:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:39.363 22:34:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:39.621 22:34:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:YTg5ZDFiN2NhMjEyZTY0YzU5YTBiODg5ZWU5ZDgzM2HK8D1h: --dhchap-ctrl-secret DHHC-1:02:YTRlZjY2MmRhNjI1MzdjODFmNGI4MjJkNmRlMDgwOTgwZDE4ODVkMzQ1NWJjN2VlAeePrQ==: 00:17:40.187 22:34:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:40.187 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:40.187 22:34:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:40.187 22:34:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:40.187 22:34:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:40.187 22:34:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:40.187 22:34:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:40.187 22:34:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:40.187 22:34:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:40.187 22:34:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 2 00:17:40.187 22:34:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:40.187 22:34:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:40.187 22:34:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:17:40.187 22:34:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:40.187 22:34:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:40.187 22:34:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:40.187 22:34:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:40.187 22:34:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:40.187 22:34:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:40.187 22:34:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:40.187 22:34:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:40.756 00:17:40.756 22:34:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:40.756 22:34:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:40.756 22:34:04 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:40.756 22:34:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:40.756 22:34:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:40.756 22:34:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:40.756 22:34:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:40.756 22:34:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:40.756 22:34:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:40.756 { 00:17:40.756 "cntlid": 133, 00:17:40.756 "qid": 0, 00:17:40.756 "state": "enabled", 00:17:40.756 "thread": "nvmf_tgt_poll_group_000", 00:17:40.756 "listen_address": { 00:17:40.756 "trtype": "TCP", 00:17:40.756 "adrfam": "IPv4", 00:17:40.756 "traddr": "10.0.0.2", 00:17:40.756 "trsvcid": "4420" 00:17:40.756 }, 00:17:40.756 "peer_address": { 00:17:40.756 "trtype": "TCP", 00:17:40.756 "adrfam": "IPv4", 00:17:40.756 "traddr": "10.0.0.1", 00:17:40.756 "trsvcid": "50022" 00:17:40.756 }, 00:17:40.756 "auth": { 00:17:40.756 "state": "completed", 00:17:40.756 "digest": "sha512", 00:17:40.756 "dhgroup": "ffdhe6144" 00:17:40.756 } 00:17:40.756 } 00:17:40.756 ]' 00:17:40.756 22:34:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:40.756 22:34:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:40.756 22:34:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:40.756 22:34:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:40.756 22:34:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:41.015 22:34:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:41.015 22:34:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:41.015 22:34:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:41.015 22:34:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:Njc3OWQ0Y2M3ZjNjMmQxYzVlMjNhNWVjYTA4NTQzMDkwYzAwZTExNmE4ZTE0ODg2gtNdCQ==: --dhchap-ctrl-secret DHHC-1:01:YzIwZjU0YTM4NWY2NmU3MDAwOGU0NTY5Yjg3ZDc4NGUMHgTg: 00:17:41.625 22:34:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:41.625 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:41.625 22:34:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:41.625 22:34:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:41.625 22:34:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:41.625 22:34:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:41.625 22:34:05 
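The secrets exchanged above follow the DH-HMAC-CHAP key representation from NVMe TP 8006, as far as can be read off the strings in this log: a DHHC-1:<t>: prefix, a base64 payload (key material with a CRC-32 appended), and a trailing colon. The <t> field records how the key was transformed:

  # DHHC-1:<t>:<base64(key || crc32)>:   (layout as understood, not verified here)
  #   t = 00  key stored as-is, no hash transformation
  #   t = 01  key transformed with SHA-256
  #   t = 02  key transformed with SHA-384
  #   t = 03  key transformed with SHA-512
  # So the DHHC-1:02: host secret above pairs a SHA-384-sized key with a
  # DHHC-1:01: (SHA-256-sized) controller key.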
nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:41.625 22:34:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:41.625 22:34:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:41.885 22:34:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 3 00:17:41.885 22:34:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:41.885 22:34:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:41.885 22:34:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:17:41.885 22:34:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:41.885 22:34:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:41.885 22:34:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:17:41.885 22:34:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:41.885 22:34:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:41.885 22:34:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:41.885 22:34:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:41.885 22:34:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:42.144 00:17:42.144 22:34:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:42.144 22:34:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:42.144 22:34:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:42.403 22:34:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:42.403 22:34:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:42.403 22:34:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:42.403 22:34:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:42.403 22:34:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:42.403 22:34:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:42.403 { 00:17:42.403 "cntlid": 135, 00:17:42.403 "qid": 0, 00:17:42.403 "state": "enabled", 00:17:42.403 "thread": "nvmf_tgt_poll_group_000", 00:17:42.403 "listen_address": { 00:17:42.403 "trtype": "TCP", 00:17:42.403 "adrfam": "IPv4", 00:17:42.403 "traddr": "10.0.0.2", 00:17:42.403 "trsvcid": "4420" 00:17:42.403 }, 
00:17:42.403 "peer_address": { 00:17:42.403 "trtype": "TCP", 00:17:42.403 "adrfam": "IPv4", 00:17:42.403 "traddr": "10.0.0.1", 00:17:42.403 "trsvcid": "50048" 00:17:42.403 }, 00:17:42.403 "auth": { 00:17:42.403 "state": "completed", 00:17:42.403 "digest": "sha512", 00:17:42.403 "dhgroup": "ffdhe6144" 00:17:42.403 } 00:17:42.403 } 00:17:42.403 ]' 00:17:42.403 22:34:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:42.403 22:34:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:42.403 22:34:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:42.403 22:34:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:42.403 22:34:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:42.403 22:34:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:42.403 22:34:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:42.403 22:34:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:42.660 22:34:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:NmQ3NGU4ODcyMTlhMDAxZDA4MTA1YTk2MDMzNTdlYjEzNTI4YzJlYjhlZjBiZmVkYjRmOGIwNTI3NzA3N2U4NOUOBPk=: 00:17:43.227 22:34:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:43.227 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:43.227 22:34:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:43.227 22:34:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:43.227 22:34:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:43.227 22:34:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:43.227 22:34:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:43.227 22:34:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:43.227 22:34:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:43.227 22:34:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:43.486 22:34:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 0 00:17:43.486 22:34:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:43.486 22:34:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:43.486 22:34:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:43.486 22:34:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:43.486 22:34:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key 
"ckey$3"}) 00:17:43.486 22:34:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:43.486 22:34:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:43.486 22:34:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:43.486 22:34:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:43.486 22:34:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:43.486 22:34:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:43.745 00:17:44.003 22:34:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:44.003 22:34:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:44.003 22:34:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:44.003 22:34:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:44.003 22:34:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:44.003 22:34:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:44.003 22:34:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:44.003 22:34:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:44.003 22:34:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:44.003 { 00:17:44.003 "cntlid": 137, 00:17:44.003 "qid": 0, 00:17:44.003 "state": "enabled", 00:17:44.003 "thread": "nvmf_tgt_poll_group_000", 00:17:44.003 "listen_address": { 00:17:44.003 "trtype": "TCP", 00:17:44.003 "adrfam": "IPv4", 00:17:44.003 "traddr": "10.0.0.2", 00:17:44.003 "trsvcid": "4420" 00:17:44.003 }, 00:17:44.003 "peer_address": { 00:17:44.003 "trtype": "TCP", 00:17:44.003 "adrfam": "IPv4", 00:17:44.003 "traddr": "10.0.0.1", 00:17:44.003 "trsvcid": "41928" 00:17:44.003 }, 00:17:44.003 "auth": { 00:17:44.003 "state": "completed", 00:17:44.003 "digest": "sha512", 00:17:44.003 "dhgroup": "ffdhe8192" 00:17:44.003 } 00:17:44.003 } 00:17:44.003 ]' 00:17:44.003 22:34:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:44.003 22:34:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:44.003 22:34:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:44.261 22:34:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:44.262 22:34:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:44.262 22:34:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:44.262 22:34:08 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:44.262 22:34:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:44.262 22:34:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:Yzg0MGYwMjc1NmM3YmE4MmNmZWNkNGY5NDMxZGFlMzY4NjA1OWZkZTVkMGJkZjhlTjKeaQ==: --dhchap-ctrl-secret DHHC-1:03:OWRjYjY4OTViMWNhNjI2NTRmYTc5MjIwMjUyMjM1MTYyZmZkYzM2ZjIxZTE3NTBiYjBhOWQyYzUxNDE2NzI0ZOpAtHY=: 00:17:44.828 22:34:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:44.828 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:44.828 22:34:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:44.828 22:34:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:44.828 22:34:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:44.828 22:34:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:44.828 22:34:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:44.828 22:34:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:44.828 22:34:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:45.087 22:34:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 1 00:17:45.087 22:34:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:45.087 22:34:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:45.087 22:34:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:45.087 22:34:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:45.087 22:34:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:45.087 22:34:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:45.087 22:34:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:45.087 22:34:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:45.087 22:34:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:45.087 22:34:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:45.087 22:34:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:45.655 00:17:45.655 22:34:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:45.655 22:34:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:45.655 22:34:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:45.914 22:34:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:45.914 22:34:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:45.914 22:34:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:45.914 22:34:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:45.914 22:34:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:45.914 22:34:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:45.914 { 00:17:45.914 "cntlid": 139, 00:17:45.914 "qid": 0, 00:17:45.914 "state": "enabled", 00:17:45.914 "thread": "nvmf_tgt_poll_group_000", 00:17:45.914 "listen_address": { 00:17:45.914 "trtype": "TCP", 00:17:45.914 "adrfam": "IPv4", 00:17:45.914 "traddr": "10.0.0.2", 00:17:45.914 "trsvcid": "4420" 00:17:45.914 }, 00:17:45.914 "peer_address": { 00:17:45.914 "trtype": "TCP", 00:17:45.914 "adrfam": "IPv4", 00:17:45.914 "traddr": "10.0.0.1", 00:17:45.914 "trsvcid": "41952" 00:17:45.914 }, 00:17:45.914 "auth": { 00:17:45.914 "state": "completed", 00:17:45.914 "digest": "sha512", 00:17:45.914 "dhgroup": "ffdhe8192" 00:17:45.914 } 00:17:45.914 } 00:17:45.914 ]' 00:17:45.914 22:34:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:45.914 22:34:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:45.914 22:34:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:45.914 22:34:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:45.914 22:34:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:45.914 22:34:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:45.914 22:34:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:45.914 22:34:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:46.173 22:34:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:YTg5ZDFiN2NhMjEyZTY0YzU5YTBiODg5ZWU5ZDgzM2HK8D1h: --dhchap-ctrl-secret DHHC-1:02:YTRlZjY2MmRhNjI1MzdjODFmNGI4MjJkNmRlMDgwOTgwZDE4ODVkMzQ1NWJjN2VlAeePrQ==: 00:17:46.741 22:34:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:46.741 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:46.741 22:34:10 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:46.741 22:34:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:46.741 22:34:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:46.741 22:34:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:46.741 22:34:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:46.741 22:34:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:46.741 22:34:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:47.000 22:34:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 2 00:17:47.000 22:34:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:47.000 22:34:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:47.000 22:34:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:47.000 22:34:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:47.000 22:34:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:47.000 22:34:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:47.000 22:34:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:47.000 22:34:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.000 22:34:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:47.000 22:34:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:47.000 22:34:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:47.259 00:17:47.260 22:34:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:47.260 22:34:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:47.260 22:34:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:47.518 22:34:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:47.518 22:34:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:47.518 22:34:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:47.518 22:34:11 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:17:47.518 22:34:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:47.518 22:34:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:47.518 { 00:17:47.518 "cntlid": 141, 00:17:47.518 "qid": 0, 00:17:47.518 "state": "enabled", 00:17:47.519 "thread": "nvmf_tgt_poll_group_000", 00:17:47.519 "listen_address": { 00:17:47.519 "trtype": "TCP", 00:17:47.519 "adrfam": "IPv4", 00:17:47.519 "traddr": "10.0.0.2", 00:17:47.519 "trsvcid": "4420" 00:17:47.519 }, 00:17:47.519 "peer_address": { 00:17:47.519 "trtype": "TCP", 00:17:47.519 "adrfam": "IPv4", 00:17:47.519 "traddr": "10.0.0.1", 00:17:47.519 "trsvcid": "41980" 00:17:47.519 }, 00:17:47.519 "auth": { 00:17:47.519 "state": "completed", 00:17:47.519 "digest": "sha512", 00:17:47.519 "dhgroup": "ffdhe8192" 00:17:47.519 } 00:17:47.519 } 00:17:47.519 ]' 00:17:47.519 22:34:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:47.519 22:34:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:47.519 22:34:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:47.519 22:34:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:47.519 22:34:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:47.777 22:34:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:47.778 22:34:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:47.778 22:34:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:47.778 22:34:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:Njc3OWQ0Y2M3ZjNjMmQxYzVlMjNhNWVjYTA4NTQzMDkwYzAwZTExNmE4ZTE0ODg2gtNdCQ==: --dhchap-ctrl-secret DHHC-1:01:YzIwZjU0YTM4NWY2NmU3MDAwOGU0NTY5Yjg3ZDc4NGUMHgTg: 00:17:48.345 22:34:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:48.345 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:48.345 22:34:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:48.345 22:34:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:48.345 22:34:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:48.345 22:34:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:48.345 22:34:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:48.345 22:34:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:48.345 22:34:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:48.604 22:34:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate 
sha512 ffdhe8192 3 00:17:48.604 22:34:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:48.604 22:34:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:48.604 22:34:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:48.604 22:34:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:48.604 22:34:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:48.604 22:34:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:17:48.604 22:34:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:48.604 22:34:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:48.604 22:34:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:48.605 22:34:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:48.605 22:34:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:49.173 00:17:49.173 22:34:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:49.173 22:34:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:49.173 22:34:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:49.173 22:34:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:49.173 22:34:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:49.173 22:34:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:49.173 22:34:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:49.173 22:34:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:49.173 22:34:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:49.173 { 00:17:49.173 "cntlid": 143, 00:17:49.173 "qid": 0, 00:17:49.173 "state": "enabled", 00:17:49.173 "thread": "nvmf_tgt_poll_group_000", 00:17:49.173 "listen_address": { 00:17:49.173 "trtype": "TCP", 00:17:49.173 "adrfam": "IPv4", 00:17:49.173 "traddr": "10.0.0.2", 00:17:49.173 "trsvcid": "4420" 00:17:49.173 }, 00:17:49.173 "peer_address": { 00:17:49.173 "trtype": "TCP", 00:17:49.173 "adrfam": "IPv4", 00:17:49.173 "traddr": "10.0.0.1", 00:17:49.173 "trsvcid": "42020" 00:17:49.173 }, 00:17:49.173 "auth": { 00:17:49.173 "state": "completed", 00:17:49.174 "digest": "sha512", 00:17:49.174 "dhgroup": "ffdhe8192" 00:17:49.174 } 00:17:49.174 } 00:17:49.174 ]' 00:17:49.174 22:34:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:49.433 22:34:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:49.433 
22:34:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:49.433 22:34:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:49.433 22:34:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:49.433 22:34:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:49.433 22:34:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:49.433 22:34:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:49.691 22:34:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:NmQ3NGU4ODcyMTlhMDAxZDA4MTA1YTk2MDMzNTdlYjEzNTI4YzJlYjhlZjBiZmVkYjRmOGIwNTI3NzA3N2U4NOUOBPk=: 00:17:50.258 22:34:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:50.258 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:50.258 22:34:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:50.258 22:34:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:50.258 22:34:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:50.258 22:34:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:50.258 22:34:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:17:50.258 22:34:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s sha256,sha384,sha512 00:17:50.258 22:34:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:17:50.258 22:34:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:50.258 22:34:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:50.258 22:34:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:50.258 22:34:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@114 -- # connect_authenticate sha512 ffdhe8192 0 00:17:50.258 22:34:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:50.258 22:34:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:50.258 22:34:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:50.258 22:34:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:50.258 22:34:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:50.258 22:34:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 
--dhchap-ctrlr-key ckey0 00:17:50.258 22:34:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:50.258 22:34:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:50.258 22:34:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:50.258 22:34:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:50.258 22:34:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:50.826 00:17:50.826 22:34:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:50.826 22:34:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:50.826 22:34:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:51.085 22:34:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:51.085 22:34:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:51.085 22:34:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:51.085 22:34:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:51.085 22:34:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:51.085 22:34:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:51.085 { 00:17:51.085 "cntlid": 145, 00:17:51.085 "qid": 0, 00:17:51.085 "state": "enabled", 00:17:51.085 "thread": "nvmf_tgt_poll_group_000", 00:17:51.085 "listen_address": { 00:17:51.085 "trtype": "TCP", 00:17:51.085 "adrfam": "IPv4", 00:17:51.085 "traddr": "10.0.0.2", 00:17:51.085 "trsvcid": "4420" 00:17:51.085 }, 00:17:51.085 "peer_address": { 00:17:51.085 "trtype": "TCP", 00:17:51.085 "adrfam": "IPv4", 00:17:51.085 "traddr": "10.0.0.1", 00:17:51.085 "trsvcid": "42046" 00:17:51.085 }, 00:17:51.085 "auth": { 00:17:51.085 "state": "completed", 00:17:51.085 "digest": "sha512", 00:17:51.085 "dhgroup": "ffdhe8192" 00:17:51.085 } 00:17:51.085 } 00:17:51.085 ]' 00:17:51.085 22:34:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:51.085 22:34:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:51.085 22:34:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:51.085 22:34:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:51.085 22:34:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:51.085 22:34:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:51.085 22:34:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:51.085 22:34:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:51.344 22:34:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:Yzg0MGYwMjc1NmM3YmE4MmNmZWNkNGY5NDMxZGFlMzY4NjA1OWZkZTVkMGJkZjhlTjKeaQ==: --dhchap-ctrl-secret DHHC-1:03:OWRjYjY4OTViMWNhNjI2NTRmYTc5MjIwMjUyMjM1MTYyZmZkYzM2ZjIxZTE3NTBiYjBhOWQyYzUxNDE2NzI0ZOpAtHY=: 00:17:51.911 22:34:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:51.911 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:51.911 22:34:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:51.911 22:34:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:51.911 22:34:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:51.911 22:34:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:51.911 22:34:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@117 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 00:17:51.911 22:34:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:51.911 22:34:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:51.911 22:34:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:51.911 22:34:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@118 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:17:51.911 22:34:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:17:51.911 22:34:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:17:51.911 22:34:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:17:51.911 22:34:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:51.911 22:34:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:17:51.911 22:34:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:51.911 22:34:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:17:51.911 22:34:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key 
key2 00:17:52.477 request: 00:17:52.477 { 00:17:52.477 "name": "nvme0", 00:17:52.477 "trtype": "tcp", 00:17:52.477 "traddr": "10.0.0.2", 00:17:52.477 "adrfam": "ipv4", 00:17:52.477 "trsvcid": "4420", 00:17:52.477 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:52.477 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:52.477 "prchk_reftag": false, 00:17:52.477 "prchk_guard": false, 00:17:52.477 "hdgst": false, 00:17:52.477 "ddgst": false, 00:17:52.477 "dhchap_key": "key2", 00:17:52.477 "method": "bdev_nvme_attach_controller", 00:17:52.477 "req_id": 1 00:17:52.477 } 00:17:52.477 Got JSON-RPC error response 00:17:52.477 response: 00:17:52.477 { 00:17:52.477 "code": -5, 00:17:52.477 "message": "Input/output error" 00:17:52.477 } 00:17:52.477 22:34:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:17:52.477 22:34:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:52.477 22:34:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:52.477 22:34:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:52.477 22:34:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@121 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:52.477 22:34:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:52.477 22:34:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:52.477 22:34:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:52.477 22:34:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@124 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:52.477 22:34:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:52.477 22:34:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:52.477 22:34:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:52.477 22:34:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@125 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:52.477 22:34:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:17:52.477 22:34:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:52.477 22:34:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:17:52.477 22:34:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:52.477 22:34:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:17:52.477 22:34:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:52.477 22:34:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:52.477 22:34:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:52.743 request: 00:17:52.743 { 00:17:52.743 "name": "nvme0", 00:17:52.743 "trtype": "tcp", 00:17:52.743 "traddr": "10.0.0.2", 00:17:52.743 "adrfam": "ipv4", 00:17:52.743 "trsvcid": "4420", 00:17:52.743 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:52.743 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:52.743 "prchk_reftag": false, 00:17:52.743 "prchk_guard": false, 00:17:52.743 "hdgst": false, 00:17:52.743 "ddgst": false, 00:17:52.743 "dhchap_key": "key1", 00:17:52.743 "dhchap_ctrlr_key": "ckey2", 00:17:52.743 "method": "bdev_nvme_attach_controller", 00:17:52.743 "req_id": 1 00:17:52.743 } 00:17:52.743 Got JSON-RPC error response 00:17:52.743 response: 00:17:52.743 { 00:17:52.743 "code": -5, 00:17:52.743 "message": "Input/output error" 00:17:52.743 } 00:17:52.743 22:34:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:17:52.743 22:34:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:52.743 22:34:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:52.743 22:34:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:52.743 22:34:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@128 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:52.743 22:34:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:52.743 22:34:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:52.743 22:34:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:52.743 22:34:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@131 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 00:17:52.743 22:34:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:52.743 22:34:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:52.743 22:34:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:52.743 22:34:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@132 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:52.743 22:34:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:17:52.743 22:34:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:52.743 22:34:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local 
arg=hostrpc 00:17:52.743 22:34:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:52.743 22:34:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:17:52.743 22:34:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:52.743 22:34:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:52.743 22:34:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:53.311 request: 00:17:53.311 { 00:17:53.311 "name": "nvme0", 00:17:53.311 "trtype": "tcp", 00:17:53.311 "traddr": "10.0.0.2", 00:17:53.311 "adrfam": "ipv4", 00:17:53.311 "trsvcid": "4420", 00:17:53.311 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:53.311 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:53.311 "prchk_reftag": false, 00:17:53.311 "prchk_guard": false, 00:17:53.311 "hdgst": false, 00:17:53.311 "ddgst": false, 00:17:53.311 "dhchap_key": "key1", 00:17:53.311 "dhchap_ctrlr_key": "ckey1", 00:17:53.311 "method": "bdev_nvme_attach_controller", 00:17:53.311 "req_id": 1 00:17:53.311 } 00:17:53.311 Got JSON-RPC error response 00:17:53.311 response: 00:17:53.311 { 00:17:53.311 "code": -5, 00:17:53.311 "message": "Input/output error" 00:17:53.311 } 00:17:53.311 22:34:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:17:53.311 22:34:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:53.311 22:34:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:53.311 22:34:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:53.311 22:34:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@135 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:53.311 22:34:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:53.311 22:34:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:53.311 22:34:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:53.311 22:34:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@138 -- # killprocess 4191612 00:17:53.311 22:34:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 4191612 ']' 00:17:53.311 22:34:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 4191612 00:17:53.311 22:34:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:17:53.311 22:34:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:53.311 22:34:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4191612 00:17:53.311 22:34:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:17:53.311 22:34:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' 
reactor_0 = sudo ']' 00:17:53.311 22:34:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4191612' 00:17:53.311 killing process with pid 4191612 00:17:53.311 22:34:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 4191612 00:17:53.311 22:34:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 4191612 00:17:53.570 22:34:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@139 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:17:53.570 22:34:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:53.570 22:34:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:53.570 22:34:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:53.570 22:34:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=18905 00:17:53.570 22:34:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 18905 00:17:53.570 22:34:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:17:53.570 22:34:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 18905 ']' 00:17:53.570 22:34:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:53.571 22:34:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:53.571 22:34:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:53.571 22:34:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:53.571 22:34:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:54.509 22:34:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:54.509 22:34:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:17:54.509 22:34:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:54.509 22:34:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:54.509 22:34:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:54.509 22:34:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:54.509 22:34:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@140 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:17:54.509 22:34:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@142 -- # waitforlisten 18905 00:17:54.509 22:34:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 18905 ']' 00:17:54.509 22:34:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:54.509 22:34:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:54.509 22:34:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:54.509 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
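[annotation] At this point the first target process (pid 4191612) has been killed, and auth.sh@139 restarts nvmf_tgt inside the cvl_0_0_ns_spdk namespace with --wait-for-rpc -L nvmf_auth; waitforlisten then blocks until the new instance (pid 18905) answers on /var/tmp/spdk.sock. A minimal sketch of that readiness poll, assuming the in-tree rpc.py client; the real waitforlisten in autotest_common.sh adds pid-liveness checks and a timeout:

  # poll the JSON-RPC socket until the restarted target responds
  sock=/var/tmp/spdk.sock
  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  for _ in $(seq 1 100); do
      "$rpc" -s "$sock" rpc_get_methods >/dev/null 2>&1 && break
      sleep 0.1
  done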
00:17:54.509 22:34:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:54.509 22:34:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:54.509 22:34:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:54.509 22:34:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:17:54.509 22:34:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@143 -- # rpc_cmd 00:17:54.509 22:34:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:54.509 22:34:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:54.769 22:34:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:54.769 22:34:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@153 -- # connect_authenticate sha512 ffdhe8192 3 00:17:54.769 22:34:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:54.769 22:34:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:54.769 22:34:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:54.769 22:34:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:54.769 22:34:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:54.769 22:34:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:17:54.769 22:34:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:54.769 22:34:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:54.770 22:34:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:54.770 22:34:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:54.770 22:34:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:55.029 00:17:55.029 22:34:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:55.029 22:34:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:55.029 22:34:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:55.289 22:34:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:55.289 22:34:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:55.289 22:34:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:55.289 22:34:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:55.289 22:34:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:55.289 22:34:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:55.289 { 00:17:55.289 
"cntlid": 1, 00:17:55.289 "qid": 0, 00:17:55.289 "state": "enabled", 00:17:55.289 "thread": "nvmf_tgt_poll_group_000", 00:17:55.289 "listen_address": { 00:17:55.289 "trtype": "TCP", 00:17:55.289 "adrfam": "IPv4", 00:17:55.289 "traddr": "10.0.0.2", 00:17:55.289 "trsvcid": "4420" 00:17:55.289 }, 00:17:55.289 "peer_address": { 00:17:55.289 "trtype": "TCP", 00:17:55.289 "adrfam": "IPv4", 00:17:55.289 "traddr": "10.0.0.1", 00:17:55.289 "trsvcid": "51540" 00:17:55.289 }, 00:17:55.289 "auth": { 00:17:55.289 "state": "completed", 00:17:55.289 "digest": "sha512", 00:17:55.289 "dhgroup": "ffdhe8192" 00:17:55.289 } 00:17:55.289 } 00:17:55.289 ]' 00:17:55.289 22:34:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:55.289 22:34:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:55.289 22:34:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:55.548 22:34:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:55.548 22:34:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:55.548 22:34:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:55.548 22:34:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:55.548 22:34:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:55.548 22:34:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:NmQ3NGU4ODcyMTlhMDAxZDA4MTA1YTk2MDMzNTdlYjEzNTI4YzJlYjhlZjBiZmVkYjRmOGIwNTI3NzA3N2U4NOUOBPk=: 00:17:56.126 22:34:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:56.126 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:56.126 22:34:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:56.126 22:34:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:56.126 22:34:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:56.126 22:34:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:56.126 22:34:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:17:56.126 22:34:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:56.126 22:34:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:56.126 22:34:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:56.126 22:34:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@157 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:17:56.126 22:34:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:17:56.484 22:34:20 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@158 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:56.484 22:34:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:17:56.484 22:34:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:56.484 22:34:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:17:56.484 22:34:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:56.484 22:34:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:17:56.484 22:34:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:56.484 22:34:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:56.484 22:34:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:56.484 request: 00:17:56.484 { 00:17:56.484 "name": "nvme0", 00:17:56.484 "trtype": "tcp", 00:17:56.484 "traddr": "10.0.0.2", 00:17:56.484 "adrfam": "ipv4", 00:17:56.484 "trsvcid": "4420", 00:17:56.484 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:56.484 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:56.484 "prchk_reftag": false, 00:17:56.484 "prchk_guard": false, 00:17:56.484 "hdgst": false, 00:17:56.484 "ddgst": false, 00:17:56.484 "dhchap_key": "key3", 00:17:56.484 "method": "bdev_nvme_attach_controller", 00:17:56.484 "req_id": 1 00:17:56.484 } 00:17:56.484 Got JSON-RPC error response 00:17:56.484 response: 00:17:56.484 { 00:17:56.484 "code": -5, 00:17:56.484 "message": "Input/output error" 00:17:56.484 } 00:17:56.484 22:34:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:17:56.484 22:34:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:56.484 22:34:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:56.484 22:34:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:56.484 22:34:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # IFS=, 00:17:56.484 22:34:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@164 -- # printf %s sha256,sha384,sha512 00:17:56.484 22:34:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:17:56.484 22:34:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:17:56.743 22:34:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@169 -- # NOT hostrpc 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:56.743 22:34:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:17:56.743 22:34:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:56.743 22:34:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:17:56.743 22:34:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:56.743 22:34:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:17:56.743 22:34:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:56.743 22:34:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:56.744 22:34:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:57.003 request: 00:17:57.003 { 00:17:57.003 "name": "nvme0", 00:17:57.003 "trtype": "tcp", 00:17:57.003 "traddr": "10.0.0.2", 00:17:57.003 "adrfam": "ipv4", 00:17:57.003 "trsvcid": "4420", 00:17:57.003 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:57.003 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:57.003 "prchk_reftag": false, 00:17:57.003 "prchk_guard": false, 00:17:57.003 "hdgst": false, 00:17:57.003 "ddgst": false, 00:17:57.003 "dhchap_key": "key3", 00:17:57.003 "method": "bdev_nvme_attach_controller", 00:17:57.003 "req_id": 1 00:17:57.003 } 00:17:57.003 Got JSON-RPC error response 00:17:57.003 response: 00:17:57.003 { 00:17:57.003 "code": -5, 00:17:57.003 "message": "Input/output error" 00:17:57.003 } 00:17:57.003 22:34:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:17:57.003 22:34:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:57.003 22:34:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:57.003 22:34:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:57.003 22:34:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:17:57.003 22:34:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf %s sha256,sha384,sha512 00:17:57.003 22:34:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:17:57.003 22:34:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:57.003 22:34:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:57.003 22:34:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:57.003 22:34:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@186 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:57.003 22:34:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:57.003 22:34:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:57.003 22:34:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:57.003 22:34:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@187 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:57.003 22:34:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:57.003 22:34:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:57.003 22:34:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:57.003 22:34:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@188 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:57.003 22:34:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:17:57.003 22:34:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:57.003 22:34:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:17:57.003 22:34:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:57.003 22:34:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:17:57.003 22:34:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:57.004 22:34:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:57.004 22:34:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:57.262 request: 00:17:57.262 { 00:17:57.262 "name": "nvme0", 00:17:57.262 "trtype": "tcp", 00:17:57.262 "traddr": "10.0.0.2", 00:17:57.262 "adrfam": "ipv4", 00:17:57.262 "trsvcid": "4420", 00:17:57.262 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:57.262 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:57.262 "prchk_reftag": false, 00:17:57.262 "prchk_guard": false, 00:17:57.262 "hdgst": false, 00:17:57.262 "ddgst": false, 00:17:57.262 
"dhchap_key": "key0", 00:17:57.262 "dhchap_ctrlr_key": "key1", 00:17:57.262 "method": "bdev_nvme_attach_controller", 00:17:57.262 "req_id": 1 00:17:57.262 } 00:17:57.262 Got JSON-RPC error response 00:17:57.262 response: 00:17:57.262 { 00:17:57.262 "code": -5, 00:17:57.262 "message": "Input/output error" 00:17:57.262 } 00:17:57.262 22:34:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:17:57.262 22:34:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:57.262 22:34:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:57.262 22:34:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:57.262 22:34:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@192 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:17:57.262 22:34:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:17:57.522 00:17:57.522 22:34:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # hostrpc bdev_nvme_get_controllers 00:17:57.522 22:34:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # jq -r '.[].name' 00:17:57.522 22:34:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:57.781 22:34:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:57.781 22:34:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@196 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:57.781 22:34:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:57.781 22:34:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@198 -- # trap - SIGINT SIGTERM EXIT 00:17:57.781 22:34:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@199 -- # cleanup 00:17:57.781 22:34:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 4191857 00:17:57.781 22:34:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 4191857 ']' 00:17:57.781 22:34:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 4191857 00:17:57.781 22:34:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:17:57.781 22:34:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:57.781 22:34:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 4191857 00:17:58.040 22:34:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:17:58.040 22:34:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:17:58.040 22:34:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 4191857' 00:17:58.040 killing process with pid 4191857 00:17:58.040 22:34:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 4191857 00:17:58.040 22:34:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 4191857 
00:17:58.299 22:34:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:17:58.300 22:34:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:58.300 22:34:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@117 -- # sync 00:17:58.300 22:34:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:58.300 22:34:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@120 -- # set +e 00:17:58.300 22:34:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:58.300 22:34:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:58.300 rmmod nvme_tcp 00:17:58.300 rmmod nvme_fabrics 00:17:58.300 rmmod nvme_keyring 00:17:58.300 22:34:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:58.300 22:34:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@124 -- # set -e 00:17:58.300 22:34:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@125 -- # return 0 00:17:58.300 22:34:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@489 -- # '[' -n 18905 ']' 00:17:58.300 22:34:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@490 -- # killprocess 18905 00:17:58.300 22:34:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 18905 ']' 00:17:58.300 22:34:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 18905 00:17:58.300 22:34:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:17:58.300 22:34:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:58.300 22:34:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 18905 00:17:58.300 22:34:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:17:58.300 22:34:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:17:58.300 22:34:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 18905' 00:17:58.300 killing process with pid 18905 00:17:58.300 22:34:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 18905 00:17:58.300 22:34:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 18905 00:17:58.559 22:34:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:58.559 22:34:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:58.559 22:34:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:58.559 22:34:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:58.559 22:34:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:58.559 22:34:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:58.559 22:34:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:58.559 22:34:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:01.092 22:34:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:01.092 22:34:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.yCZ /tmp/spdk.key-sha256.7mT /tmp/spdk.key-sha384.lHB /tmp/spdk.key-sha512.PrT /tmp/spdk.key-sha512.MBm /tmp/spdk.key-sha384.8WY /tmp/spdk.key-sha256.pBx '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:18:01.092 00:18:01.092 real 2m10.396s 00:18:01.093 user 4m59.428s 00:18:01.093 sys 0m20.316s 00:18:01.093 22:34:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:01.093 22:34:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:01.093 ************************************ 00:18:01.093 END TEST nvmf_auth_target 00:18:01.093 ************************************ 00:18:01.093 22:34:24 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:18:01.093 22:34:24 nvmf_tcp -- nvmf/nvmf.sh@59 -- # '[' tcp = tcp ']' 00:18:01.093 22:34:24 nvmf_tcp -- nvmf/nvmf.sh@60 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:18:01.093 22:34:24 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:18:01.093 22:34:24 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:01.093 22:34:24 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:01.093 ************************************ 00:18:01.093 START TEST nvmf_bdevio_no_huge 00:18:01.093 ************************************ 00:18:01.093 22:34:24 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:18:01.093 * Looking for test storage... 00:18:01.093 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:01.093 22:34:24 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:01.093 22:34:24 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:18:01.093 22:34:24 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:01.093 22:34:24 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:01.093 22:34:24 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:01.093 22:34:24 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:01.093 22:34:24 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:01.093 22:34:24 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:01.093 22:34:24 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:01.093 22:34:24 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:01.093 22:34:24 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:01.093 22:34:24 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:01.093 22:34:24 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:01.093 22:34:24 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:18:01.093 22:34:24 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:01.093 22:34:24 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:01.093 22:34:24 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:01.093 22:34:24 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:01.093 22:34:24 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:01.093 22:34:24 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:01.093 22:34:24 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:01.093 22:34:24 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:01.093 22:34:24 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:01.093 22:34:24 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:01.093 22:34:24 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:01.093 22:34:24 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:18:01.093 22:34:24 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:01.093 22:34:24 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@47 -- # : 0 00:18:01.093 22:34:24 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:01.093 22:34:24 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:01.093 22:34:24 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:01.093 22:34:24 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
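[annotation] The no-huge variant is not a separate binary: nvmf/common.sh assembles the target command line incrementally in the NVMF_APP array, as the appends around this point show (-i "$NVMF_APP_SHM_ID" -e 0xFFFF here, then "${NO_HUGE[@]}" just below). A simplified sketch of that assembly; the exact contents of NO_HUGE are an assumption, since they are defined elsewhere in common.sh, while the binary path is the one used earlier in this log:

  # condensed sketch of the NVMF_APP assembly in nvmf/common.sh
  NVMF_APP=(/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt)
  NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)   # shared-memory id and full trace mask, as logged
  NVMF_APP+=("${NO_HUGE[@]}")                   # hugepage-free flags when --no-hugepages is requested
                                                # (assumed to be something like: --no-huge -s 1024)
  "${NVMF_APP[@]}" &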
00:18:01.093 22:34:24 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:01.093 22:34:24 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:01.093 22:34:24 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:01.093 22:34:24 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:01.093 22:34:24 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:01.093 22:34:24 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:01.093 22:34:24 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:18:01.093 22:34:24 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:01.093 22:34:24 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:01.093 22:34:24 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:01.093 22:34:24 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:01.093 22:34:24 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:01.093 22:34:24 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:01.093 22:34:24 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:01.093 22:34:24 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:01.093 22:34:24 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:01.093 22:34:24 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:01.093 22:34:24 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@285 -- # xtrace_disable 00:18:01.093 22:34:24 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:06.367 22:34:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:06.367 22:34:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # pci_devs=() 00:18:06.367 22:34:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:06.367 22:34:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:06.367 22:34:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:06.367 22:34:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:06.367 22:34:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:06.367 22:34:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # net_devs=() 00:18:06.367 22:34:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:06.367 22:34:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # e810=() 00:18:06.367 22:34:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # local -ga e810 00:18:06.367 22:34:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # x722=() 00:18:06.367 22:34:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # local -ga x722 00:18:06.367 22:34:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # mlx=() 00:18:06.367 22:34:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # local -ga mlx 00:18:06.367 22:34:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:06.367 22:34:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:06.367 22:34:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:06.367 22:34:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:06.367 22:34:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:06.367 22:34:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:06.367 22:34:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:06.367 22:34:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:06.367 22:34:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:06.367 22:34:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:06.367 22:34:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:06.367 22:34:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:06.367 22:34:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:06.367 22:34:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:06.367 22:34:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:06.367 22:34:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:06.367 22:34:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:06.367 22:34:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:06.367 22:34:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:18:06.367 Found 0000:86:00.0 (0x8086 - 0x159b) 00:18:06.367 22:34:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:06.367 22:34:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:06.367 22:34:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:06.367 22:34:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:06.367 22:34:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:06.367 22:34:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:06.367 22:34:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:18:06.367 Found 0000:86:00.1 (0x8086 - 0x159b) 00:18:06.367 22:34:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:06.367 22:34:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:06.367 22:34:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:06.367 22:34:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:06.367 22:34:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:06.367 22:34:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:06.367 22:34:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:06.367 22:34:29 nvmf_tcp.nvmf_bdevio_no_huge -- 
nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:06.367 22:34:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:06.367 22:34:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:06.367 22:34:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:06.367 22:34:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:06.367 22:34:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:06.367 22:34:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:06.367 22:34:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:06.367 22:34:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:18:06.367 Found net devices under 0000:86:00.0: cvl_0_0 00:18:06.367 22:34:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:06.367 22:34:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:06.367 22:34:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:06.367 22:34:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:06.367 22:34:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:06.367 22:34:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:06.367 22:34:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:06.367 22:34:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:06.367 22:34:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:18:06.367 Found net devices under 0000:86:00.1: cvl_0_1 00:18:06.367 22:34:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:06.367 22:34:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:06.367 22:34:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # is_hw=yes 00:18:06.367 22:34:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:06.367 22:34:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:06.367 22:34:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:06.367 22:34:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:06.367 22:34:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:06.367 22:34:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:06.367 22:34:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:06.367 22:34:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:06.367 22:34:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:06.367 22:34:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:06.367 22:34:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:06.367 22:34:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip 
netns exec "$NVMF_TARGET_NAMESPACE") 00:18:06.367 22:34:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:06.367 22:34:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:06.367 22:34:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:06.367 22:34:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:06.367 22:34:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:06.367 22:34:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:06.367 22:34:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:06.367 22:34:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:06.367 22:34:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:06.367 22:34:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:06.367 22:34:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:06.367 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:06.367 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.168 ms 00:18:06.367 00:18:06.367 --- 10.0.0.2 ping statistics --- 00:18:06.367 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:06.367 rtt min/avg/max/mdev = 0.168/0.168/0.168/0.000 ms 00:18:06.367 22:34:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:06.367 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:06.367 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.205 ms 00:18:06.367 00:18:06.367 --- 10.0.0.1 ping statistics --- 00:18:06.367 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:06.367 rtt min/avg/max/mdev = 0.205/0.205/0.205/0.000 ms 00:18:06.367 22:34:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:06.367 22:34:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # return 0 00:18:06.367 22:34:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:06.367 22:34:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:06.367 22:34:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:06.367 22:34:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:06.367 22:34:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:06.367 22:34:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:06.367 22:34:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:06.367 22:34:30 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:18:06.367 22:34:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:06.367 22:34:30 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:06.367 22:34:30 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:06.367 22:34:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@481 -- # nvmfpid=23171 00:18:06.367 22:34:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:18:06.367 22:34:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # waitforlisten 23171 00:18:06.367 22:34:30 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@829 -- # '[' -z 23171 ']' 00:18:06.367 22:34:30 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:06.367 22:34:30 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:06.368 22:34:30 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:06.368 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:06.368 22:34:30 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:06.368 22:34:30 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:06.368 [2024-07-15 22:34:30.079288] Starting SPDK v24.09-pre git sha1 f8598a71f / DPDK 24.03.0 initialization... 00:18:06.368 [2024-07-15 22:34:30.079332] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:18:06.368 [2024-07-15 22:34:30.141633] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:06.368 [2024-07-15 22:34:30.228664] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
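The trace above is nvmf_tcp_init wiring the two ice ports into a point-to-point test link: the first port (cvl_0_0) is moved into a dedicated network namespace for the target, its peer (cvl_0_1) stays in the root namespace as the initiator side, and a ping in each direction verifies the 10.0.0.0/24 path before nvmf_tgt is launched inside the namespace. A condensed sketch of that sequence, keeping the namespace and interface names from this run (they are specific to this board and will differ elsewhere):

ip netns add cvl_0_0_ns_spdk                                        # target-side namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # move the target port into it
ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator address, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # admit NVMe/TCP on the initiator side
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78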
00:18:06.368 [2024-07-15 22:34:30.228704] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:06.368 [2024-07-15 22:34:30.228711] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:06.368 [2024-07-15 22:34:30.228717] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:06.368 [2024-07-15 22:34:30.228722] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:06.368 [2024-07-15 22:34:30.228778] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:18:06.368 [2024-07-15 22:34:30.228884] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:18:06.368 [2024-07-15 22:34:30.228968] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:18:06.368 [2024-07-15 22:34:30.228969] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:18:06.936 22:34:30 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:06.936 22:34:30 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@862 -- # return 0 00:18:06.936 22:34:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:06.936 22:34:30 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:06.936 22:34:30 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:07.195 22:34:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:07.195 22:34:30 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:07.195 22:34:30 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:07.195 22:34:30 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:07.195 [2024-07-15 22:34:30.919652] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:07.195 22:34:30 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:07.195 22:34:30 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:18:07.195 22:34:30 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:07.195 22:34:30 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:07.195 Malloc0 00:18:07.195 22:34:30 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:07.195 22:34:30 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:07.195 22:34:30 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:07.195 22:34:30 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:07.195 22:34:30 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:07.195 22:34:30 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:07.195 22:34:30 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:07.195 22:34:30 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:07.195 22:34:30 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:07.195 22:34:30 
nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:07.195 22:34:30 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:07.195 22:34:30 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:07.195 [2024-07-15 22:34:30.963938] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:07.195 22:34:30 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:07.195 22:34:30 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:18:07.195 22:34:30 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:18:07.195 22:34:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # config=() 00:18:07.195 22:34:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # local subsystem config 00:18:07.195 22:34:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:18:07.195 22:34:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:18:07.195 { 00:18:07.195 "params": { 00:18:07.195 "name": "Nvme$subsystem", 00:18:07.195 "trtype": "$TEST_TRANSPORT", 00:18:07.195 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:07.195 "adrfam": "ipv4", 00:18:07.195 "trsvcid": "$NVMF_PORT", 00:18:07.195 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:07.195 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:07.195 "hdgst": ${hdgst:-false}, 00:18:07.195 "ddgst": ${ddgst:-false} 00:18:07.195 }, 00:18:07.195 "method": "bdev_nvme_attach_controller" 00:18:07.195 } 00:18:07.195 EOF 00:18:07.195 )") 00:18:07.195 22:34:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # cat 00:18:07.195 22:34:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@556 -- # jq . 00:18:07.195 22:34:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@557 -- # IFS=, 00:18:07.195 22:34:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:18:07.195 "params": { 00:18:07.195 "name": "Nvme1", 00:18:07.195 "trtype": "tcp", 00:18:07.195 "traddr": "10.0.0.2", 00:18:07.195 "adrfam": "ipv4", 00:18:07.195 "trsvcid": "4420", 00:18:07.195 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:07.195 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:07.195 "hdgst": false, 00:18:07.195 "ddgst": false 00:18:07.195 }, 00:18:07.195 "method": "bdev_nvme_attach_controller" 00:18:07.195 }' 00:18:07.195 [2024-07-15 22:34:31.012894] Starting SPDK v24.09-pre git sha1 f8598a71f / DPDK 24.03.0 initialization... 
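With the target up, bdevio.sh provisions it over JSON-RPC (a TCP transport, a 64 MiB malloc bdev, subsystem nqn.2016-06.io.spdk:cnode1 with that bdev as a namespace, and a listener on 10.0.0.2:4420), then starts bdevio against a generated JSON config whose bdev_nvme_attach_controller parameters are printed above. A hedged standalone equivalent of those rpc_cmd calls, assuming the default RPC socket at /var/tmp/spdk.sock:

scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192                # transport options as in the trace above
scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0                   # 64 MiB bdev, 512-byte blocks
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420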
00:18:07.195 [2024-07-15 22:34:31.012944] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid23424 ] 00:18:07.195 [2024-07-15 22:34:31.069907] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:18:07.195 [2024-07-15 22:34:31.156067] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:07.195 [2024-07-15 22:34:31.156162] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:07.195 [2024-07-15 22:34:31.156164] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:07.796 I/O targets: 00:18:07.796 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:18:07.796 00:18:07.796 00:18:07.796 CUnit - A unit testing framework for C - Version 2.1-3 00:18:07.796 http://cunit.sourceforge.net/ 00:18:07.796 00:18:07.796 00:18:07.796 Suite: bdevio tests on: Nvme1n1 00:18:07.796 Test: blockdev write read block ...passed 00:18:07.796 Test: blockdev write zeroes read block ...passed 00:18:07.796 Test: blockdev write zeroes read no split ...passed 00:18:07.796 Test: blockdev write zeroes read split ...passed 00:18:07.796 Test: blockdev write zeroes read split partial ...passed 00:18:07.796 Test: blockdev reset ...[2024-07-15 22:34:31.673626] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:07.796 [2024-07-15 22:34:31.673687] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b0300 (9): Bad file descriptor 00:18:07.796 [2024-07-15 22:34:31.692533] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:18:07.796 passed 00:18:07.796 Test: blockdev write read 8 blocks ...passed 00:18:07.796 Test: blockdev write read size > 128k ...passed 00:18:07.796 Test: blockdev write read invalid size ...passed 00:18:08.056 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:18:08.056 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:18:08.056 Test: blockdev write read max offset ...passed 00:18:08.056 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:18:08.056 Test: blockdev writev readv 8 blocks ...passed 00:18:08.056 Test: blockdev writev readv 30 x 1block ...passed 00:18:08.056 Test: blockdev writev readv block ...passed 00:18:08.056 Test: blockdev writev readv size > 128k ...passed 00:18:08.056 Test: blockdev writev readv size > 128k in two iovs ...passed 00:18:08.056 Test: blockdev comparev and writev ...[2024-07-15 22:34:31.945886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:08.056 [2024-07-15 22:34:31.945915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:08.056 [2024-07-15 22:34:31.945929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:08.056 [2024-07-15 22:34:31.945937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:08.056 [2024-07-15 22:34:31.946229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:08.056 [2024-07-15 22:34:31.946240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:08.056 [2024-07-15 22:34:31.946251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:08.056 [2024-07-15 22:34:31.946259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:18:08.056 [2024-07-15 22:34:31.946542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:08.056 [2024-07-15 22:34:31.946551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:18:08.056 [2024-07-15 22:34:31.946562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:08.056 [2024-07-15 22:34:31.946570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:18:08.056 [2024-07-15 22:34:31.946842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:08.056 [2024-07-15 22:34:31.946855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:08.056 [2024-07-15 22:34:31.946867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:08.056 [2024-07-15 22:34:31.946875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:18:08.056 passed 00:18:08.316 Test: blockdev nvme passthru rw ...passed 00:18:08.316 Test: blockdev nvme passthru vendor specific ...[2024-07-15 22:34:32.028602] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:08.316 [2024-07-15 22:34:32.028620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:18:08.316 [2024-07-15 22:34:32.028781] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:08.316 [2024-07-15 22:34:32.028790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:18:08.316 [2024-07-15 22:34:32.028949] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:08.316 [2024-07-15 22:34:32.028958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:18:08.316 [2024-07-15 22:34:32.029104] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:08.316 [2024-07-15 22:34:32.029113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:18:08.316 passed 00:18:08.316 Test: blockdev nvme admin passthru ...passed 00:18:08.316 Test: blockdev copy ...passed 00:18:08.316 00:18:08.316 Run Summary: Type Total Ran Passed Failed Inactive 00:18:08.316 suites 1 1 n/a 0 0 00:18:08.316 tests 23 23 23 0 0 00:18:08.316 asserts 152 152 152 0 n/a 00:18:08.316 00:18:08.316 Elapsed time = 1.244 seconds 00:18:08.576 22:34:32 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:08.576 22:34:32 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:08.576 22:34:32 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:08.576 22:34:32 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:08.576 22:34:32 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:18:08.576 22:34:32 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:18:08.576 22:34:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:08.576 22:34:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@117 -- # sync 00:18:08.576 22:34:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:08.576 22:34:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@120 -- # set +e 00:18:08.576 22:34:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:08.576 22:34:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:08.576 rmmod nvme_tcp 00:18:08.576 rmmod nvme_fabrics 00:18:08.576 rmmod nvme_keyring 00:18:08.576 22:34:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:08.576 22:34:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set -e 00:18:08.576 22:34:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # return 0 00:18:08.576 22:34:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@489 -- # '[' -n 23171 ']' 00:18:08.576 22:34:32 nvmf_tcp.nvmf_bdevio_no_huge 
-- nvmf/common.sh@490 -- # killprocess 23171 00:18:08.576 22:34:32 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@948 -- # '[' -z 23171 ']' 00:18:08.576 22:34:32 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@952 -- # kill -0 23171 00:18:08.576 22:34:32 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # uname 00:18:08.576 22:34:32 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:08.576 22:34:32 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 23171 00:18:08.576 22:34:32 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # process_name=reactor_3 00:18:08.576 22:34:32 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # '[' reactor_3 = sudo ']' 00:18:08.576 22:34:32 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@966 -- # echo 'killing process with pid 23171' 00:18:08.576 killing process with pid 23171 00:18:08.576 22:34:32 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@967 -- # kill 23171 00:18:08.576 22:34:32 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # wait 23171 00:18:08.837 22:34:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:08.837 22:34:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:08.837 22:34:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:08.837 22:34:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:08.837 22:34:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:08.837 22:34:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:08.837 22:34:32 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:08.837 22:34:32 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:11.386 22:34:34 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:11.386 00:18:11.386 real 0m10.325s 00:18:11.386 user 0m14.120s 00:18:11.386 sys 0m4.895s 00:18:11.386 22:34:34 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:11.386 22:34:34 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:11.386 ************************************ 00:18:11.386 END TEST nvmf_bdevio_no_huge 00:18:11.386 ************************************ 00:18:11.386 22:34:34 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:18:11.386 22:34:34 nvmf_tcp -- nvmf/nvmf.sh@61 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:18:11.386 22:34:34 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:18:11.386 22:34:34 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:11.386 22:34:34 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:11.386 ************************************ 00:18:11.386 START TEST nvmf_tls 00:18:11.386 ************************************ 00:18:11.386 22:34:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:18:11.386 * Looking for test storage... 
00:18:11.386 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:11.386 22:34:35 nvmf_tcp.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:11.386 22:34:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:18:11.386 22:34:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:11.386 22:34:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:11.386 22:34:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:11.386 22:34:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:11.386 22:34:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:11.386 22:34:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:11.386 22:34:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:11.386 22:34:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:11.386 22:34:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:11.386 22:34:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:11.386 22:34:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:11.386 22:34:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:18:11.386 22:34:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:11.386 22:34:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:11.386 22:34:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:11.386 22:34:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:11.386 22:34:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:11.386 22:34:35 nvmf_tcp.nvmf_tls -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:11.386 22:34:35 nvmf_tcp.nvmf_tls -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:11.386 22:34:35 nvmf_tcp.nvmf_tls -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:11.386 22:34:35 nvmf_tcp.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:11.386 22:34:35 nvmf_tcp.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:11.386 22:34:35 nvmf_tcp.nvmf_tls -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:11.386 22:34:35 nvmf_tcp.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:18:11.386 22:34:35 nvmf_tcp.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:11.386 22:34:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@47 -- # : 0 00:18:11.386 22:34:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:11.386 22:34:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:11.386 22:34:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:11.386 22:34:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:11.386 22:34:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:11.386 22:34:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:11.386 22:34:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:11.386 22:34:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:11.386 22:34:35 nvmf_tcp.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:11.386 22:34:35 nvmf_tcp.nvmf_tls -- target/tls.sh@62 -- # nvmftestinit 00:18:11.386 22:34:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:11.386 22:34:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:11.386 22:34:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:11.386 22:34:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:11.386 22:34:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:11.386 22:34:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:11.386 22:34:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:11.386 22:34:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:11.386 22:34:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:11.386 22:34:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:11.386 22:34:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@285 -- # xtrace_disable 00:18:11.386 22:34:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:16.663 22:34:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:16.663 22:34:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@291 -- # pci_devs=() 00:18:16.663 
22:34:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:16.663 22:34:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:16.663 22:34:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:16.663 22:34:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:16.663 22:34:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:16.663 22:34:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@295 -- # net_devs=() 00:18:16.663 22:34:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:16.663 22:34:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@296 -- # e810=() 00:18:16.663 22:34:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@296 -- # local -ga e810 00:18:16.663 22:34:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@297 -- # x722=() 00:18:16.664 22:34:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@297 -- # local -ga x722 00:18:16.664 22:34:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@298 -- # mlx=() 00:18:16.664 22:34:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@298 -- # local -ga mlx 00:18:16.664 22:34:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:16.664 22:34:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:16.664 22:34:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:16.664 22:34:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:16.664 22:34:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:16.664 22:34:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:16.664 22:34:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:16.664 22:34:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:16.664 22:34:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:16.664 22:34:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:16.664 22:34:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:16.664 22:34:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:16.664 22:34:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:16.664 22:34:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:16.664 22:34:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:16.664 22:34:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:16.664 22:34:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:16.664 22:34:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:16.664 22:34:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:18:16.664 Found 0000:86:00.0 (0x8086 - 0x159b) 00:18:16.664 22:34:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:16.664 22:34:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:16.664 22:34:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:16.664 22:34:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:16.664 22:34:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:16.664 22:34:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@340 
-- # for pci in "${pci_devs[@]}" 00:18:16.664 22:34:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:18:16.664 Found 0000:86:00.1 (0x8086 - 0x159b) 00:18:16.664 22:34:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:16.664 22:34:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:16.664 22:34:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:16.664 22:34:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:16.664 22:34:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:16.664 22:34:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:16.664 22:34:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:16.664 22:34:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:16.664 22:34:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:16.664 22:34:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:16.664 22:34:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:16.664 22:34:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:16.664 22:34:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:16.664 22:34:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:16.664 22:34:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:16.664 22:34:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:18:16.664 Found net devices under 0000:86:00.0: cvl_0_0 00:18:16.664 22:34:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:16.664 22:34:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:16.664 22:34:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:16.664 22:34:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:16.664 22:34:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:16.664 22:34:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:16.664 22:34:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:16.664 22:34:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:16.664 22:34:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:18:16.664 Found net devices under 0000:86:00.1: cvl_0_1 00:18:16.664 22:34:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:16.664 22:34:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:16.664 22:34:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # is_hw=yes 00:18:16.664 22:34:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:16.664 22:34:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:16.664 22:34:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:16.664 22:34:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:16.664 22:34:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:16.664 22:34:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:16.664 22:34:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@234 
-- # (( 2 > 1 )) 00:18:16.664 22:34:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:16.664 22:34:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:16.664 22:34:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:16.664 22:34:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:16.664 22:34:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:16.664 22:34:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:16.664 22:34:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:16.664 22:34:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:16.664 22:34:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:16.664 22:34:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:16.664 22:34:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:16.664 22:34:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:16.664 22:34:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:16.664 22:34:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:16.664 22:34:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:16.664 22:34:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:16.664 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:16.664 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.176 ms 00:18:16.664 00:18:16.664 --- 10.0.0.2 ping statistics --- 00:18:16.664 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:16.664 rtt min/avg/max/mdev = 0.176/0.176/0.176/0.000 ms 00:18:16.664 22:34:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:16.664 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:16.664 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.173 ms 00:18:16.664 00:18:16.664 --- 10.0.0.1 ping statistics --- 00:18:16.664 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:16.664 rtt min/avg/max/mdev = 0.173/0.173/0.173/0.000 ms 00:18:16.664 22:34:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:16.664 22:34:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@422 -- # return 0 00:18:16.664 22:34:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:16.664 22:34:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:16.664 22:34:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:16.664 22:34:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:16.664 22:34:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:16.664 22:34:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:16.664 22:34:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:16.664 22:34:40 nvmf_tcp.nvmf_tls -- target/tls.sh@63 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:18:16.664 22:34:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:16.664 22:34:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:16.664 22:34:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:16.664 22:34:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=27169 00:18:16.664 22:34:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 27169 00:18:16.664 22:34:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:18:16.664 22:34:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 27169 ']' 00:18:16.664 22:34:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:16.664 22:34:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:16.664 22:34:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:16.664 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:16.664 22:34:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:16.664 22:34:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:16.924 [2024-07-15 22:34:40.677786] Starting SPDK v24.09-pre git sha1 f8598a71f / DPDK 24.03.0 initialization... 00:18:16.924 [2024-07-15 22:34:40.677835] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:16.924 EAL: No free 2048 kB hugepages reported on node 1 00:18:16.924 [2024-07-15 22:34:40.736011] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:16.924 [2024-07-15 22:34:40.809045] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:16.924 [2024-07-15 22:34:40.809085] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
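The TLS target launched above was started with --wait-for-rpc, which parks the application before subsystem initialization so the socket layer can still be reconfigured over RPC; the trace that follows selects the ssl socket implementation, pins TLS 1.3, checks the ktls flag both ways, and only then runs framework_start_init. A condensed view of that call sequence, matching the rpc.py invocations exercised below:

scripts/rpc.py sock_set_default_impl -i ssl
scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13
scripts/rpc.py sock_impl_get_options -i ssl | jq -r .tls_version      # expected to read back 13
scripts/rpc.py framework_start_init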
00:18:16.924 [2024-07-15 22:34:40.809091] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:16.924 [2024-07-15 22:34:40.809097] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:16.924 [2024-07-15 22:34:40.809101] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:16.924 [2024-07-15 22:34:40.809140] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:17.861 22:34:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:17.861 22:34:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:18:17.861 22:34:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:17.861 22:34:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:17.861 22:34:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:17.861 22:34:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:17.861 22:34:41 nvmf_tcp.nvmf_tls -- target/tls.sh@65 -- # '[' tcp '!=' tcp ']' 00:18:17.861 22:34:41 nvmf_tcp.nvmf_tls -- target/tls.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:18:17.861 true 00:18:17.861 22:34:41 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:17.861 22:34:41 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # jq -r .tls_version 00:18:18.120 22:34:41 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # version=0 00:18:18.120 22:34:41 nvmf_tcp.nvmf_tls -- target/tls.sh@74 -- # [[ 0 != \0 ]] 00:18:18.120 22:34:41 nvmf_tcp.nvmf_tls -- target/tls.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:18:18.120 22:34:42 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:18.120 22:34:42 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # jq -r .tls_version 00:18:18.448 22:34:42 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # version=13 00:18:18.448 22:34:42 nvmf_tcp.nvmf_tls -- target/tls.sh@82 -- # [[ 13 != \1\3 ]] 00:18:18.448 22:34:42 nvmf_tcp.nvmf_tls -- target/tls.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:18:18.448 22:34:42 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:18.448 22:34:42 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # jq -r .tls_version 00:18:18.707 22:34:42 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # version=7 00:18:18.708 22:34:42 nvmf_tcp.nvmf_tls -- target/tls.sh@90 -- # [[ 7 != \7 ]] 00:18:18.708 22:34:42 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:18.708 22:34:42 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # jq -r .enable_ktls 00:18:18.967 22:34:42 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # ktls=false 00:18:18.967 22:34:42 nvmf_tcp.nvmf_tls -- target/tls.sh@97 -- # [[ false != \f\a\l\s\e ]] 00:18:18.967 22:34:42 nvmf_tcp.nvmf_tls -- target/tls.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:18:18.967 22:34:42 nvmf_tcp.nvmf_tls -- 
target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:18.967 22:34:42 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # jq -r .enable_ktls 00:18:19.226 22:34:43 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # ktls=true 00:18:19.226 22:34:43 nvmf_tcp.nvmf_tls -- target/tls.sh@105 -- # [[ true != \t\r\u\e ]] 00:18:19.226 22:34:43 nvmf_tcp.nvmf_tls -- target/tls.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:18:19.487 22:34:43 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:19.487 22:34:43 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # jq -r .enable_ktls 00:18:19.487 22:34:43 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # ktls=false 00:18:19.487 22:34:43 nvmf_tcp.nvmf_tls -- target/tls.sh@113 -- # [[ false != \f\a\l\s\e ]] 00:18:19.487 22:34:43 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:18:19.487 22:34:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:18:19.487 22:34:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:18:19.487 22:34:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:18:19.487 22:34:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:18:19.487 22:34:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:18:19.487 22:34:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:18:19.487 22:34:43 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:18:19.487 22:34:43 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:18:19.487 22:34:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:18:19.487 22:34:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:18:19.487 22:34:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:18:19.487 22:34:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=ffeeddccbbaa99887766554433221100 00:18:19.487 22:34:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:18:19.487 22:34:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:18:19.746 22:34:43 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:18:19.746 22:34:43 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # mktemp 00:18:19.746 22:34:43 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # key_path=/tmp/tmp.kdJW0JtUCI 00:18:19.746 22:34:43 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:18:19.746 22:34:43 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # key_2_path=/tmp/tmp.spn1eIeNSl 00:18:19.746 22:34:43 nvmf_tcp.nvmf_tls -- target/tls.sh@124 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:18:19.746 22:34:43 nvmf_tcp.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:18:19.746 22:34:43 nvmf_tcp.nvmf_tls -- target/tls.sh@127 -- # chmod 0600 /tmp/tmp.kdJW0JtUCI 00:18:19.746 22:34:43 nvmf_tcp.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.spn1eIeNSl 00:18:19.746 22:34:43 nvmf_tcp.nvmf_tls -- target/tls.sh@130 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
sock_impl_set_options -i ssl --tls-version 13 00:18:19.746 22:34:43 nvmf_tcp.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:18:20.004 22:34:43 nvmf_tcp.nvmf_tls -- target/tls.sh@133 -- # setup_nvmf_tgt /tmp/tmp.kdJW0JtUCI 00:18:20.004 22:34:43 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.kdJW0JtUCI 00:18:20.004 22:34:43 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:20.263 [2024-07-15 22:34:44.065889] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:20.263 22:34:44 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:20.522 22:34:44 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:20.522 [2024-07-15 22:34:44.378683] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:20.522 [2024-07-15 22:34:44.378899] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:20.522 22:34:44 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:20.782 malloc0 00:18:20.782 22:34:44 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:20.782 22:34:44 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.kdJW0JtUCI 00:18:21.040 [2024-07-15 22:34:44.896423] tcp.c:3693:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:18:21.041 22:34:44 nvmf_tcp.nvmf_tls -- target/tls.sh@137 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.kdJW0JtUCI 00:18:21.041 EAL: No free 2048 kB hugepages reported on node 1 00:18:33.254 Initializing NVMe Controllers 00:18:33.254 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:18:33.254 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:18:33.254 Initialization complete. Launching workers. 
00:18:33.254 ======================================================== 00:18:33.254 Latency(us) 00:18:33.254 Device Information : IOPS MiB/s Average min max 00:18:33.254 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 16388.89 64.02 3905.53 730.51 5974.94 00:18:33.254 ======================================================== 00:18:33.254 Total : 16388.89 64.02 3905.53 730.51 5974.94 00:18:33.254 00:18:33.254 22:34:55 nvmf_tcp.nvmf_tls -- target/tls.sh@143 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.kdJW0JtUCI 00:18:33.254 22:34:55 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:33.254 22:34:55 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:33.254 22:34:55 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:33.254 22:34:55 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.kdJW0JtUCI' 00:18:33.254 22:34:55 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:33.254 22:34:55 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=29520 00:18:33.254 22:34:55 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:33.254 22:34:55 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:33.254 22:34:55 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 29520 /var/tmp/bdevperf.sock 00:18:33.254 22:34:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 29520 ']' 00:18:33.254 22:34:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:33.254 22:34:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:33.254 22:34:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:33.254 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:33.254 22:34:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:33.254 22:34:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:33.254 [2024-07-15 22:34:55.061960] Starting SPDK v24.09-pre git sha1 f8598a71f / DPDK 24.03.0 initialization... 
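The setup_nvmf_tgt flow traced above (target/tls.sh@49-58, after the sock_impl probes pinned --tls-version 13) reduces to the RPC sequence below. A minimal sketch with the Jenkins workspace paths shortened; $key_path is assumed to point at a 0600-mode interchange-key file such as /tmp/tmp.kdJW0JtUCI.

  # Sketch: TLS-enabled NVMe/TCP target bring-up, as exercised above.
  rpc.py sock_set_default_impl -i ssl
  rpc.py sock_impl_set_options -i ssl --tls-version 13
  rpc.py framework_start_init
  rpc.py nvmf_create_transport -t tcp -o
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
  rpc.py bdev_malloc_create 32 4096 -b malloc0
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk "$key_path"

The -k flag marks the listener as requiring a secure channel, which is why the later attach attempts without a valid PSK are expected to fail.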
00:18:33.254 [2024-07-15 22:34:55.062009] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid29520 ] 00:18:33.254 EAL: No free 2048 kB hugepages reported on node 1 00:18:33.254 [2024-07-15 22:34:55.111169] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:33.254 [2024-07-15 22:34:55.183892] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:33.254 22:34:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:33.254 22:34:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:18:33.255 22:34:55 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.kdJW0JtUCI 00:18:33.255 [2024-07-15 22:34:56.030959] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:33.255 [2024-07-15 22:34:56.031028] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:18:33.255 TLSTESTn1 00:18:33.255 22:34:56 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:18:33.255 Running I/O for 10 seconds... 00:18:43.234 00:18:43.234 Latency(us) 00:18:43.234 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:43.234 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:43.234 Verification LBA range: start 0x0 length 0x2000 00:18:43.234 TLSTESTn1 : 10.02 5653.13 22.08 0.00 0.00 22604.10 6240.17 40575.33 00:18:43.234 =================================================================================================================== 00:18:43.234 Total : 5653.13 22.08 0.00 0.00 22604.10 6240.17 40575.33 00:18:43.234 0 00:18:43.234 22:35:06 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:43.234 22:35:06 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 29520 00:18:43.234 22:35:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 29520 ']' 00:18:43.234 22:35:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 29520 00:18:43.234 22:35:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:18:43.234 22:35:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:43.234 22:35:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 29520 00:18:43.234 22:35:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:18:43.234 22:35:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:18:43.234 22:35:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 29520' 00:18:43.234 killing process with pid 29520 00:18:43.234 22:35:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 29520 00:18:43.234 Received shutdown signal, test time was about 10.000000 seconds 00:18:43.234 00:18:43.234 Latency(us) 00:18:43.234 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:18:43.234 =================================================================================================================== 00:18:43.234 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:43.234 [2024-07-15 22:35:06.312689] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:18:43.234 22:35:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 29520 00:18:43.234 22:35:06 nvmf_tcp.nvmf_tls -- target/tls.sh@146 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.spn1eIeNSl 00:18:43.234 22:35:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:18:43.234 22:35:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.spn1eIeNSl 00:18:43.234 22:35:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:18:43.234 22:35:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:43.234 22:35:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:18:43.234 22:35:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:43.234 22:35:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.spn1eIeNSl 00:18:43.234 22:35:06 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:43.234 22:35:06 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:43.234 22:35:06 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:43.234 22:35:06 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.spn1eIeNSl' 00:18:43.234 22:35:06 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:43.234 22:35:06 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=31362 00:18:43.234 22:35:06 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:43.234 22:35:06 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:43.234 22:35:06 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 31362 /var/tmp/bdevperf.sock 00:18:43.234 22:35:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 31362 ']' 00:18:43.234 22:35:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:43.234 22:35:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:43.234 22:35:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:43.234 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:43.234 22:35:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:43.234 22:35:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:43.234 [2024-07-15 22:35:06.542530] Starting SPDK v24.09-pre git sha1 f8598a71f / DPDK 24.03.0 initialization... 
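Each run_bdevperf case above and below repeats the same three-step initiator flow: start bdevperf suspended, attach the TLS-wrapped controller through its private RPC socket, then drive the verify workload. A sketch with paths shortened:

  # bdevperf in client mode: -z waits for configuration over the RPC socket.
  bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &
  # Attach over TLS; --psk names the interchange-key file created earlier.
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
      --psk /tmp/tmp.kdJW0JtUCI
  # Run the queued workload for up to 20 s and report the latency table.
  bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests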
00:18:43.234 [2024-07-15 22:35:06.542578] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid31362 ] 00:18:43.234 EAL: No free 2048 kB hugepages reported on node 1 00:18:43.234 [2024-07-15 22:35:06.591627] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:43.234 [2024-07-15 22:35:06.659367] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:43.493 22:35:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:43.493 22:35:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:18:43.493 22:35:07 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.spn1eIeNSl 00:18:43.752 [2024-07-15 22:35:07.509782] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:43.752 [2024-07-15 22:35:07.509854] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:18:43.752 [2024-07-15 22:35:07.516840] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:18:43.752 [2024-07-15 22:35:07.517158] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15cf570 (107): Transport endpoint is not connected 00:18:43.752 [2024-07-15 22:35:07.518152] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15cf570 (9): Bad file descriptor 00:18:43.752 [2024-07-15 22:35:07.519153] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:43.752 [2024-07-15 22:35:07.519162] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:18:43.752 [2024-07-15 22:35:07.519171] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:18:43.752 request: 00:18:43.752 { 00:18:43.752 "name": "TLSTEST", 00:18:43.752 "trtype": "tcp", 00:18:43.752 "traddr": "10.0.0.2", 00:18:43.752 "adrfam": "ipv4", 00:18:43.752 "trsvcid": "4420", 00:18:43.752 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:43.752 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:43.752 "prchk_reftag": false, 00:18:43.752 "prchk_guard": false, 00:18:43.752 "hdgst": false, 00:18:43.752 "ddgst": false, 00:18:43.752 "psk": "/tmp/tmp.spn1eIeNSl", 00:18:43.752 "method": "bdev_nvme_attach_controller", 00:18:43.752 "req_id": 1 00:18:43.752 } 00:18:43.752 Got JSON-RPC error response 00:18:43.752 response: 00:18:43.752 { 00:18:43.752 "code": -5, 00:18:43.752 "message": "Input/output error" 00:18:43.752 } 00:18:43.752 22:35:07 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 31362 00:18:43.752 22:35:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 31362 ']' 00:18:43.752 22:35:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 31362 00:18:43.752 22:35:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:18:43.752 22:35:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:43.752 22:35:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 31362 00:18:43.752 22:35:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:18:43.752 22:35:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:18:43.752 22:35:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 31362' 00:18:43.752 killing process with pid 31362 00:18:43.752 22:35:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 31362 00:18:43.752 Received shutdown signal, test time was about 10.000000 seconds 00:18:43.752 00:18:43.752 Latency(us) 00:18:43.752 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:43.752 =================================================================================================================== 00:18:43.752 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:43.752 [2024-07-15 22:35:07.585653] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:18:43.752 22:35:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 31362 00:18:44.012 22:35:07 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:18:44.012 22:35:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:18:44.012 22:35:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:44.012 22:35:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:44.012 22:35:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:44.012 22:35:07 nvmf_tcp.nvmf_tls -- target/tls.sh@149 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.kdJW0JtUCI 00:18:44.012 22:35:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:18:44.012 22:35:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.kdJW0JtUCI 00:18:44.012 22:35:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:18:44.012 22:35:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:44.012 22:35:07 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@640 -- # type -t run_bdevperf 00:18:44.012 22:35:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:44.012 22:35:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.kdJW0JtUCI 00:18:44.012 22:35:07 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:44.012 22:35:07 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:44.012 22:35:07 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:18:44.012 22:35:07 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.kdJW0JtUCI' 00:18:44.012 22:35:07 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:44.012 22:35:07 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=31600 00:18:44.012 22:35:07 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:44.012 22:35:07 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:44.012 22:35:07 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 31600 /var/tmp/bdevperf.sock 00:18:44.012 22:35:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 31600 ']' 00:18:44.012 22:35:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:44.012 22:35:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:44.012 22:35:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:44.012 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:44.012 22:35:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:44.012 22:35:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:44.012 [2024-07-15 22:35:07.806185] Starting SPDK v24.09-pre git sha1 f8598a71f / DPDK 24.03.0 initialization... 
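The NOT wrapper driving tls.sh@146/@149/@152/@155 inverts the wrapped command's exit status, so these cases pass only when the attach fails. The real helper lives in autotest_common.sh; a minimal equivalent, assumed rather than copied:

  # Succeeds iff the wrapped command fails.
  NOT() { ! "$@"; }
  # Wrong key for host1: the target cannot match the PSK identity, the read
  # fails with errno 107, and bdev_nvme_attach_controller returns the
  # Input/output error seen in the dump above.
  NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.spn1eIeNSl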
00:18:44.012 [2024-07-15 22:35:07.806241] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid31600 ] 00:18:44.012 EAL: No free 2048 kB hugepages reported on node 1 00:18:44.012 [2024-07-15 22:35:07.855459] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:44.012 [2024-07-15 22:35:07.922806] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:44.948 22:35:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:44.948 22:35:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:18:44.948 22:35:08 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.kdJW0JtUCI 00:18:44.948 [2024-07-15 22:35:08.765299] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:44.948 [2024-07-15 22:35:08.765376] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:18:44.948 [2024-07-15 22:35:08.769961] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:18:44.948 [2024-07-15 22:35:08.769984] posix.c: 589:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:18:44.948 [2024-07-15 22:35:08.770008] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:18:44.948 [2024-07-15 22:35:08.770629] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x79e570 (107): Transport endpoint is not connected 00:18:44.948 [2024-07-15 22:35:08.771620] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x79e570 (9): Bad file descriptor 00:18:44.948 [2024-07-15 22:35:08.772621] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:44.948 [2024-07-15 22:35:08.772630] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:18:44.948 [2024-07-15 22:35:08.772639] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:18:44.948 request: 00:18:44.948 { 00:18:44.948 "name": "TLSTEST", 00:18:44.948 "trtype": "tcp", 00:18:44.948 "traddr": "10.0.0.2", 00:18:44.948 "adrfam": "ipv4", 00:18:44.948 "trsvcid": "4420", 00:18:44.948 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:44.948 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:18:44.948 "prchk_reftag": false, 00:18:44.948 "prchk_guard": false, 00:18:44.948 "hdgst": false, 00:18:44.948 "ddgst": false, 00:18:44.948 "psk": "/tmp/tmp.kdJW0JtUCI", 00:18:44.948 "method": "bdev_nvme_attach_controller", 00:18:44.948 "req_id": 1 00:18:44.948 } 00:18:44.948 Got JSON-RPC error response 00:18:44.948 response: 00:18:44.948 { 00:18:44.948 "code": -5, 00:18:44.948 "message": "Input/output error" 00:18:44.948 } 00:18:44.948 22:35:08 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 31600 00:18:44.948 22:35:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 31600 ']' 00:18:44.948 22:35:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 31600 00:18:44.948 22:35:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:18:44.948 22:35:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:44.948 22:35:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 31600 00:18:44.948 22:35:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:18:44.948 22:35:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:18:44.948 22:35:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 31600' 00:18:44.948 killing process with pid 31600 00:18:44.948 22:35:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 31600 00:18:44.948 Received shutdown signal, test time was about 10.000000 seconds 00:18:44.948 00:18:44.948 Latency(us) 00:18:44.948 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:44.948 =================================================================================================================== 00:18:44.948 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:44.948 [2024-07-15 22:35:08.833820] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:18:44.948 22:35:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 31600 00:18:45.207 22:35:09 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:18:45.207 22:35:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:18:45.207 22:35:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:45.207 22:35:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:45.207 22:35:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:45.207 22:35:09 nvmf_tcp.nvmf_tls -- target/tls.sh@152 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.kdJW0JtUCI 00:18:45.207 22:35:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:18:45.207 22:35:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.kdJW0JtUCI 00:18:45.207 22:35:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:18:45.207 22:35:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:45.207 22:35:09 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@640 -- # type -t run_bdevperf 00:18:45.207 22:35:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:45.207 22:35:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.kdJW0JtUCI 00:18:45.207 22:35:09 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:45.207 22:35:09 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:18:45.207 22:35:09 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:45.207 22:35:09 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.kdJW0JtUCI' 00:18:45.207 22:35:09 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:45.207 22:35:09 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=31835 00:18:45.207 22:35:09 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:45.207 22:35:09 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:45.207 22:35:09 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 31835 /var/tmp/bdevperf.sock 00:18:45.207 22:35:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 31835 ']' 00:18:45.207 22:35:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:45.207 22:35:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:45.207 22:35:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:45.207 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:45.207 22:35:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:45.207 22:35:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:45.207 [2024-07-15 22:35:09.055828] Starting SPDK v24.09-pre git sha1 f8598a71f / DPDK 24.03.0 initialization... 
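The request/response pairs printed in these failing cases are the raw JSON-RPC frames behind rpc.py. The same call can be issued directly against the bdevperf UNIX socket; a sketch (socat usage is illustrative, the params are copied from the dump above):

  printf '%s' '{"jsonrpc":"2.0","id":1,"method":"bdev_nvme_attach_controller",
    "params":{"name":"TLSTEST","trtype":"tcp","traddr":"10.0.0.2",
    "adrfam":"ipv4","trsvcid":"4420","subnqn":"nqn.2016-06.io.spdk:cnode1",
    "hostnqn":"nqn.2016-06.io.spdk:host2","psk":"/tmp/tmp.kdJW0JtUCI"}}' \
    | socat - UNIX-CONNECT:/var/tmp/bdevperf.sock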
00:18:45.207 [2024-07-15 22:35:09.055877] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid31835 ] 00:18:45.207 EAL: No free 2048 kB hugepages reported on node 1 00:18:45.207 [2024-07-15 22:35:09.105533] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:45.207 [2024-07-15 22:35:09.172475] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:46.143 22:35:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:46.143 22:35:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:18:46.143 22:35:09 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.kdJW0JtUCI 00:18:46.143 [2024-07-15 22:35:10.014133] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:46.143 [2024-07-15 22:35:10.014210] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:18:46.143 [2024-07-15 22:35:10.023140] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:18:46.143 [2024-07-15 22:35:10.023164] posix.c: 589:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:18:46.143 [2024-07-15 22:35:10.023191] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:18:46.143 [2024-07-15 22:35:10.023603] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f3570 (107): Transport endpoint is not connected 00:18:46.143 [2024-07-15 22:35:10.024595] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f3570 (9): Bad file descriptor 00:18:46.143 [2024-07-15 22:35:10.025597] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:18:46.143 [2024-07-15 22:35:10.025609] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:18:46.143 [2024-07-15 22:35:10.025621] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
00:18:46.143 request: 00:18:46.143 { 00:18:46.143 "name": "TLSTEST", 00:18:46.143 "trtype": "tcp", 00:18:46.143 "traddr": "10.0.0.2", 00:18:46.143 "adrfam": "ipv4", 00:18:46.143 "trsvcid": "4420", 00:18:46.143 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:18:46.143 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:46.143 "prchk_reftag": false, 00:18:46.143 "prchk_guard": false, 00:18:46.143 "hdgst": false, 00:18:46.143 "ddgst": false, 00:18:46.143 "psk": "/tmp/tmp.kdJW0JtUCI", 00:18:46.143 "method": "bdev_nvme_attach_controller", 00:18:46.143 "req_id": 1 00:18:46.143 } 00:18:46.143 Got JSON-RPC error response 00:18:46.143 response: 00:18:46.143 { 00:18:46.143 "code": -5, 00:18:46.143 "message": "Input/output error" 00:18:46.143 } 00:18:46.143 22:35:10 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 31835 00:18:46.143 22:35:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 31835 ']' 00:18:46.143 22:35:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 31835 00:18:46.143 22:35:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:18:46.143 22:35:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:46.143 22:35:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 31835 00:18:46.143 22:35:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:18:46.143 22:35:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:18:46.143 22:35:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 31835' 00:18:46.143 killing process with pid 31835 00:18:46.143 22:35:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 31835 00:18:46.143 Received shutdown signal, test time was about 10.000000 seconds 00:18:46.143 00:18:46.143 Latency(us) 00:18:46.143 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:46.143 =================================================================================================================== 00:18:46.143 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:46.143 [2024-07-15 22:35:10.086195] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:18:46.143 22:35:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 31835 00:18:46.401 22:35:10 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:18:46.401 22:35:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:18:46.401 22:35:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:46.401 22:35:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:46.401 22:35:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:46.401 22:35:10 nvmf_tcp.nvmf_tls -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:18:46.401 22:35:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:18:46.401 22:35:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:18:46.401 22:35:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:18:46.401 22:35:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:46.401 22:35:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 
00:18:46.401 22:35:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:46.401 22:35:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:18:46.401 22:35:10 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:46.401 22:35:10 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:46.401 22:35:10 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:46.401 22:35:10 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk= 00:18:46.401 22:35:10 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:46.401 22:35:10 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=32048 00:18:46.401 22:35:10 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:46.401 22:35:10 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:46.401 22:35:10 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 32048 /var/tmp/bdevperf.sock 00:18:46.401 22:35:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 32048 ']' 00:18:46.401 22:35:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:46.401 22:35:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:46.401 22:35:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:46.401 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:46.401 22:35:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:46.401 22:35:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:46.401 [2024-07-15 22:35:10.313909] Starting SPDK v24.09-pre git sha1 f8598a71f / DPDK 24.03.0 initialization... 
00:18:46.401 [2024-07-15 22:35:10.313961] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid32048 ] 00:18:46.401 EAL: No free 2048 kB hugepages reported on node 1 00:18:46.401 [2024-07-15 22:35:10.364379] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:46.659 [2024-07-15 22:35:10.434400] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:47.227 22:35:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:47.227 22:35:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:18:47.227 22:35:11 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:18:47.486 [2024-07-15 22:35:11.256294] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:18:47.486 [2024-07-15 22:35:11.258259] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b6eaf0 (9): Bad file descriptor 00:18:47.486 [2024-07-15 22:35:11.259257] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:47.486 [2024-07-15 22:35:11.259267] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:18:47.486 [2024-07-15 22:35:11.259276] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:18:47.486 request: 00:18:47.486 { 00:18:47.486 "name": "TLSTEST", 00:18:47.486 "trtype": "tcp", 00:18:47.486 "traddr": "10.0.0.2", 00:18:47.486 "adrfam": "ipv4", 00:18:47.486 "trsvcid": "4420", 00:18:47.486 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:47.486 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:47.486 "prchk_reftag": false, 00:18:47.486 "prchk_guard": false, 00:18:47.486 "hdgst": false, 00:18:47.486 "ddgst": false, 00:18:47.486 "method": "bdev_nvme_attach_controller", 00:18:47.486 "req_id": 1 00:18:47.486 } 00:18:47.486 Got JSON-RPC error response 00:18:47.486 response: 00:18:47.486 { 00:18:47.486 "code": -5, 00:18:47.486 "message": "Input/output error" 00:18:47.486 } 00:18:47.486 22:35:11 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 32048 00:18:47.486 22:35:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 32048 ']' 00:18:47.486 22:35:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 32048 00:18:47.486 22:35:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:18:47.486 22:35:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:47.486 22:35:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 32048 00:18:47.486 22:35:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:18:47.486 22:35:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:18:47.486 22:35:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 32048' 00:18:47.486 killing process with pid 32048 00:18:47.486 22:35:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 32048 00:18:47.486 Received shutdown signal, test time was about 10.000000 seconds 00:18:47.486 00:18:47.486 Latency(us) 00:18:47.486 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:47.486 =================================================================================================================== 00:18:47.486 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:47.486 22:35:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 32048 00:18:47.745 22:35:11 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:18:47.745 22:35:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:18:47.745 22:35:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:47.745 22:35:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:47.745 22:35:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:47.745 22:35:11 nvmf_tcp.nvmf_tls -- target/tls.sh@158 -- # killprocess 27169 00:18:47.745 22:35:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 27169 ']' 00:18:47.745 22:35:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 27169 00:18:47.745 22:35:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:18:47.745 22:35:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:47.745 22:35:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 27169 00:18:47.745 22:35:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:18:47.745 22:35:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:18:47.745 22:35:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 27169' 00:18:47.745 killing process with pid 
27169 00:18:47.745 22:35:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 27169 00:18:47.745 [2024-07-15 22:35:11.537141] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:18:47.745 22:35:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 27169 00:18:48.005 22:35:11 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:18:48.005 22:35:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:18:48.005 22:35:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:18:48.005 22:35:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:18:48.005 22:35:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:18:48.005 22:35:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=2 00:18:48.005 22:35:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:18:48.005 22:35:11 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:18:48.005 22:35:11 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # mktemp 00:18:48.005 22:35:11 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # key_long_path=/tmp/tmp.rYhEwGqGHy 00:18:48.005 22:35:11 nvmf_tcp.nvmf_tls -- target/tls.sh@161 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:18:48.005 22:35:11 nvmf_tcp.nvmf_tls -- target/tls.sh@162 -- # chmod 0600 /tmp/tmp.rYhEwGqGHy 00:18:48.005 22:35:11 nvmf_tcp.nvmf_tls -- target/tls.sh@163 -- # nvmfappstart -m 0x2 00:18:48.005 22:35:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:48.005 22:35:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:48.005 22:35:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:48.005 22:35:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=32324 00:18:48.005 22:35:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:48.005 22:35:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 32324 00:18:48.005 22:35:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 32324 ']' 00:18:48.005 22:35:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:48.005 22:35:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:48.005 22:35:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:48.005 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:48.005 22:35:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:48.005 22:35:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:48.005 [2024-07-15 22:35:11.836739] Starting SPDK v24.09-pre git sha1 f8598a71f / DPDK 24.03.0 initialization... 
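format_interchange_psk (nvmf/common.sh@715), traced earlier for the 32-byte :01: keys and here for the 48-byte :02: key, wraps the configured secret in the TLS PSK interchange form: base64 over the key bytes plus a little-endian CRC32, behind an NVMeTLSkey-1:<hash>: prefix (01 selecting SHA-256, 02 SHA-384). The helper itself shells out to python; a bash equivalent, offered as a sketch rather than the helper itself:

  key=00112233445566778899aabbccddeeff0011223344556677   # configured secret
  digest=02                                              # hash selector
  # gzip's 8-byte trailer is CRC32 (little-endian) followed by ISIZE, so
  # tail -c8 | head -c4 extracts the checksum; keeping the raw bytes in a
  # pipeline avoids storing NULs in a shell variable.
  b64=$({ printf %s "$key"
          printf %s "$key" | gzip -1 -c | tail -c8 | head -c4; } | base64 -w0)
  echo "NVMeTLSkey-1:${digest}:${b64}:"
  # Should reproduce the NVMeTLSkey-1:02:MDAx... value shown in the trace.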
00:18:48.005 [2024-07-15 22:35:11.836788] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:48.005 EAL: No free 2048 kB hugepages reported on node 1 00:18:48.005 [2024-07-15 22:35:11.893319] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:48.005 [2024-07-15 22:35:11.960405] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:48.005 [2024-07-15 22:35:11.960445] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:48.005 [2024-07-15 22:35:11.960452] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:48.005 [2024-07-15 22:35:11.960457] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:48.005 [2024-07-15 22:35:11.960462] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:48.005 [2024-07-15 22:35:11.960481] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:48.975 22:35:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:48.975 22:35:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:18:48.975 22:35:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:48.975 22:35:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:48.975 22:35:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:48.975 22:35:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:48.975 22:35:12 nvmf_tcp.nvmf_tls -- target/tls.sh@165 -- # setup_nvmf_tgt /tmp/tmp.rYhEwGqGHy 00:18:48.975 22:35:12 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.rYhEwGqGHy 00:18:48.975 22:35:12 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:48.975 [2024-07-15 22:35:12.824990] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:48.975 22:35:12 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:49.235 22:35:12 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:49.235 [2024-07-15 22:35:13.145806] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:49.235 [2024-07-15 22:35:13.146001] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:49.235 22:35:13 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:49.502 malloc0 00:18:49.502 22:35:13 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:49.759 22:35:13 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 
--psk /tmp/tmp.rYhEwGqGHy 00:18:49.759 [2024-07-15 22:35:13.643411] tcp.c:3693:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:18:49.759 22:35:13 nvmf_tcp.nvmf_tls -- target/tls.sh@167 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.rYhEwGqGHy 00:18:49.759 22:35:13 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:49.759 22:35:13 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:49.759 22:35:13 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:49.759 22:35:13 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.rYhEwGqGHy' 00:18:49.759 22:35:13 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:49.759 22:35:13 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:49.759 22:35:13 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=32587 00:18:49.759 22:35:13 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:49.759 22:35:13 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 32587 /var/tmp/bdevperf.sock 00:18:49.759 22:35:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 32587 ']' 00:18:49.759 22:35:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:49.760 22:35:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:49.760 22:35:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:49.760 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:49.760 22:35:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:49.760 22:35:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:49.760 [2024-07-15 22:35:13.686044] Starting SPDK v24.09-pre git sha1 f8598a71f / DPDK 24.03.0 initialization... 
00:18:49.760 [2024-07-15 22:35:13.686088] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid32587 ] 00:18:49.760 EAL: No free 2048 kB hugepages reported on node 1 00:18:50.018 [2024-07-15 22:35:13.734684] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:50.018 [2024-07-15 22:35:13.806233] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:50.018 22:35:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:50.018 22:35:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:18:50.018 22:35:13 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.rYhEwGqGHy 00:18:50.278 [2024-07-15 22:35:14.051253] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:50.278 [2024-07-15 22:35:14.051320] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:18:50.278 TLSTESTn1 00:18:50.278 22:35:14 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:18:50.278 Running I/O for 10 seconds... 00:19:02.491 00:19:02.491 Latency(us) 00:19:02.491 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:02.491 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:02.491 Verification LBA range: start 0x0 length 0x2000 00:19:02.491 TLSTESTn1 : 10.02 5347.47 20.89 0.00 0.00 23897.44 7208.96 67929.49 00:19:02.491 =================================================================================================================== 00:19:02.491 Total : 5347.47 20.89 0.00 0.00 23897.44 7208.96 67929.49 00:19:02.491 0 00:19:02.491 22:35:24 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:02.491 22:35:24 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 32587 00:19:02.491 22:35:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 32587 ']' 00:19:02.491 22:35:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 32587 00:19:02.491 22:35:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:19:02.491 22:35:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:02.491 22:35:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 32587 00:19:02.491 22:35:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:19:02.491 22:35:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:19:02.491 22:35:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 32587' 00:19:02.491 killing process with pid 32587 00:19:02.491 22:35:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 32587 00:19:02.491 Received shutdown signal, test time was about 10.000000 seconds 00:19:02.491 00:19:02.491 Latency(us) 00:19:02.491 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:19:02.491 =================================================================================================================== 00:19:02.491 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:02.491 [2024-07-15 22:35:24.341990] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:19:02.491 22:35:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 32587 00:19:02.491 22:35:24 nvmf_tcp.nvmf_tls -- target/tls.sh@170 -- # chmod 0666 /tmp/tmp.rYhEwGqGHy 00:19:02.491 22:35:24 nvmf_tcp.nvmf_tls -- target/tls.sh@171 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.rYhEwGqGHy 00:19:02.491 22:35:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:19:02.491 22:35:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.rYhEwGqGHy 00:19:02.491 22:35:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:19:02.491 22:35:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:02.491 22:35:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:19:02.491 22:35:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:02.491 22:35:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.rYhEwGqGHy 00:19:02.491 22:35:24 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:02.491 22:35:24 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:02.491 22:35:24 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:02.491 22:35:24 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.rYhEwGqGHy' 00:19:02.491 22:35:24 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:02.491 22:35:24 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=34414 00:19:02.491 22:35:24 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:02.491 22:35:24 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:02.491 22:35:24 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 34414 /var/tmp/bdevperf.sock 00:19:02.491 22:35:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 34414 ']' 00:19:02.491 22:35:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:02.491 22:35:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:02.491 22:35:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:02.491 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:02.491 22:35:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:02.491 22:35:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:02.491 [2024-07-15 22:35:24.578362] Starting SPDK v24.09-pre git sha1 f8598a71f / DPDK 24.03.0 initialization... 
00:19:02.491 [2024-07-15 22:35:24.578414] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid34414 ] 00:19:02.491 EAL: No free 2048 kB hugepages reported on node 1 00:19:02.491 [2024-07-15 22:35:24.628472] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:02.491 [2024-07-15 22:35:24.706964] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:02.491 22:35:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:02.491 22:35:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:19:02.491 22:35:25 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.rYhEwGqGHy 00:19:02.491 [2024-07-15 22:35:25.530212] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:02.491 [2024-07-15 22:35:25.530262] bdev_nvme.c:6125:bdev_nvme_load_psk: *ERROR*: Incorrect permissions for PSK file 00:19:02.491 [2024-07-15 22:35:25.530270] bdev_nvme.c:6230:bdev_nvme_create: *ERROR*: Could not load PSK from /tmp/tmp.rYhEwGqGHy 00:19:02.491 request: 00:19:02.491 { 00:19:02.491 "name": "TLSTEST", 00:19:02.491 "trtype": "tcp", 00:19:02.491 "traddr": "10.0.0.2", 00:19:02.491 "adrfam": "ipv4", 00:19:02.491 "trsvcid": "4420", 00:19:02.491 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:02.491 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:02.491 "prchk_reftag": false, 00:19:02.491 "prchk_guard": false, 00:19:02.491 "hdgst": false, 00:19:02.491 "ddgst": false, 00:19:02.491 "psk": "/tmp/tmp.rYhEwGqGHy", 00:19:02.491 "method": "bdev_nvme_attach_controller", 00:19:02.491 "req_id": 1 00:19:02.491 } 00:19:02.491 Got JSON-RPC error response 00:19:02.491 response: 00:19:02.491 { 00:19:02.491 "code": -1, 00:19:02.491 "message": "Operation not permitted" 00:19:02.491 } 00:19:02.491 22:35:25 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 34414 00:19:02.492 22:35:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 34414 ']' 00:19:02.492 22:35:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 34414 00:19:02.492 22:35:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:19:02.492 22:35:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:02.492 22:35:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 34414 00:19:02.492 22:35:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:19:02.492 22:35:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:19:02.492 22:35:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 34414' 00:19:02.492 killing process with pid 34414 00:19:02.492 22:35:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 34414 00:19:02.492 Received shutdown signal, test time was about 10.000000 seconds 00:19:02.492 00:19:02.492 Latency(us) 00:19:02.492 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:02.492 =================================================================================================================== 
00:19:02.492 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:02.492 22:35:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 34414 00:19:02.492 22:35:25 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:19:02.492 22:35:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:19:02.492 22:35:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:02.492 22:35:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:02.492 22:35:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:02.492 22:35:25 nvmf_tcp.nvmf_tls -- target/tls.sh@174 -- # killprocess 32324 00:19:02.492 22:35:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 32324 ']' 00:19:02.492 22:35:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 32324 00:19:02.492 22:35:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:19:02.492 22:35:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:02.492 22:35:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 32324 00:19:02.492 22:35:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:19:02.492 22:35:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:19:02.492 22:35:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 32324' 00:19:02.492 killing process with pid 32324 00:19:02.492 22:35:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 32324 00:19:02.492 [2024-07-15 22:35:25.805192] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:19:02.492 22:35:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 32324 00:19:02.492 22:35:25 nvmf_tcp.nvmf_tls -- target/tls.sh@175 -- # nvmfappstart -m 0x2 00:19:02.492 22:35:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:02.492 22:35:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:02.492 22:35:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:02.492 22:35:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=34663 00:19:02.492 22:35:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 34663 00:19:02.492 22:35:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:02.492 22:35:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 34663 ']' 00:19:02.492 22:35:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:02.492 22:35:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:02.492 22:35:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:02.492 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:02.492 22:35:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:02.492 22:35:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:02.492 [2024-07-15 22:35:26.048073] Starting SPDK v24.09-pre git sha1 f8598a71f / DPDK 24.03.0 initialization... 
00:19:02.492 [2024-07-15 22:35:26.048119] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:02.492 EAL: No free 2048 kB hugepages reported on node 1 00:19:02.492 [2024-07-15 22:35:26.103006] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:02.492 [2024-07-15 22:35:26.181068] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:02.492 [2024-07-15 22:35:26.181101] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:02.492 [2024-07-15 22:35:26.181108] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:02.492 [2024-07-15 22:35:26.181114] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:02.492 [2024-07-15 22:35:26.181119] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:02.492 [2024-07-15 22:35:26.181159] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:03.061 22:35:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:03.061 22:35:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:19:03.061 22:35:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:03.061 22:35:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:03.061 22:35:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:03.061 22:35:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:03.061 22:35:26 nvmf_tcp.nvmf_tls -- target/tls.sh@177 -- # NOT setup_nvmf_tgt /tmp/tmp.rYhEwGqGHy 00:19:03.061 22:35:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:19:03.061 22:35:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.rYhEwGqGHy 00:19:03.061 22:35:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=setup_nvmf_tgt 00:19:03.061 22:35:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:03.061 22:35:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t setup_nvmf_tgt 00:19:03.061 22:35:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:03.061 22:35:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # setup_nvmf_tgt /tmp/tmp.rYhEwGqGHy 00:19:03.061 22:35:26 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.rYhEwGqGHy 00:19:03.061 22:35:26 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:03.321 [2024-07-15 22:35:27.040434] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:03.321 22:35:27 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:03.321 22:35:27 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:03.580 [2024-07-15 22:35:27.393344] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is 
considered experimental 00:19:03.580 [2024-07-15 22:35:27.393538] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:03.580 22:35:27 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:03.839 malloc0 00:19:03.839 22:35:27 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:03.839 22:35:27 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.rYhEwGqGHy 00:19:04.097 [2024-07-15 22:35:27.894618] tcp.c:3603:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:19:04.097 [2024-07-15 22:35:27.894642] tcp.c:3689:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:19:04.097 [2024-07-15 22:35:27.894663] subsystem.c:1052:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:19:04.097 request: 00:19:04.097 { 00:19:04.097 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:04.097 "host": "nqn.2016-06.io.spdk:host1", 00:19:04.097 "psk": "/tmp/tmp.rYhEwGqGHy", 00:19:04.097 "method": "nvmf_subsystem_add_host", 00:19:04.097 "req_id": 1 00:19:04.097 } 00:19:04.097 Got JSON-RPC error response 00:19:04.097 response: 00:19:04.097 { 00:19:04.097 "code": -32603, 00:19:04.097 "message": "Internal error" 00:19:04.097 } 00:19:04.097 22:35:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:19:04.097 22:35:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:04.097 22:35:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:04.097 22:35:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:04.097 22:35:27 nvmf_tcp.nvmf_tls -- target/tls.sh@180 -- # killprocess 34663 00:19:04.097 22:35:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 34663 ']' 00:19:04.097 22:35:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 34663 00:19:04.097 22:35:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:19:04.097 22:35:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:04.097 22:35:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 34663 00:19:04.097 22:35:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:19:04.097 22:35:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:19:04.097 22:35:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 34663' 00:19:04.097 killing process with pid 34663 00:19:04.097 22:35:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 34663 00:19:04.097 22:35:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 34663 00:19:04.356 22:35:28 nvmf_tcp.nvmf_tls -- target/tls.sh@181 -- # chmod 0600 /tmp/tmp.rYhEwGqGHy 00:19:04.356 22:35:28 nvmf_tcp.nvmf_tls -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:19:04.356 22:35:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:04.356 22:35:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:04.356 22:35:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:04.356 22:35:28 nvmf_tcp.nvmf_tls -- 
nvmf/common.sh@481 -- # nvmfpid=34935 00:19:04.356 22:35:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 34935 00:19:04.356 22:35:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:04.356 22:35:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 34935 ']' 00:19:04.356 22:35:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:04.356 22:35:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:04.356 22:35:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:04.356 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:04.356 22:35:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:04.356 22:35:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:04.356 [2024-07-15 22:35:28.207823] Starting SPDK v24.09-pre git sha1 f8598a71f / DPDK 24.03.0 initialization... 00:19:04.356 [2024-07-15 22:35:28.207869] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:04.356 EAL: No free 2048 kB hugepages reported on node 1 00:19:04.356 [2024-07-15 22:35:28.264463] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:04.614 [2024-07-15 22:35:28.334315] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:04.614 [2024-07-15 22:35:28.334345] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:04.614 [2024-07-15 22:35:28.334352] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:04.614 [2024-07-15 22:35:28.334362] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:04.614 [2024-07-15 22:35:28.334367] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:04.614 [2024-07-15 22:35:28.334390] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:05.181 22:35:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:05.181 22:35:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:19:05.181 22:35:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:05.181 22:35:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:05.181 22:35:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:05.181 22:35:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:05.181 22:35:29 nvmf_tcp.nvmf_tls -- target/tls.sh@185 -- # setup_nvmf_tgt /tmp/tmp.rYhEwGqGHy 00:19:05.181 22:35:29 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.rYhEwGqGHy 00:19:05.181 22:35:29 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:05.439 [2024-07-15 22:35:29.186648] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:05.439 22:35:29 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:05.439 22:35:29 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:05.697 [2024-07-15 22:35:29.523524] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:05.697 [2024-07-15 22:35:29.523699] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:05.697 22:35:29 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:05.956 malloc0 00:19:05.956 22:35:29 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:05.956 22:35:29 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.rYhEwGqGHy 00:19:06.213 [2024-07-15 22:35:30.041094] tcp.c:3693:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:19:06.213 22:35:30 nvmf_tcp.nvmf_tls -- target/tls.sh@187 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:06.213 22:35:30 nvmf_tcp.nvmf_tls -- target/tls.sh@188 -- # bdevperf_pid=35384 00:19:06.213 22:35:30 nvmf_tcp.nvmf_tls -- target/tls.sh@190 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:06.213 22:35:30 nvmf_tcp.nvmf_tls -- target/tls.sh@191 -- # waitforlisten 35384 /var/tmp/bdevperf.sock 00:19:06.213 22:35:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 35384 ']' 00:19:06.213 22:35:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:06.213 22:35:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:06.213 22:35:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:06.213 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:06.213 22:35:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:06.213 22:35:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:06.213 [2024-07-15 22:35:30.089162] Starting SPDK v24.09-pre git sha1 f8598a71f / DPDK 24.03.0 initialization... 00:19:06.213 [2024-07-15 22:35:30.089213] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid35384 ] 00:19:06.213 EAL: No free 2048 kB hugepages reported on node 1 00:19:06.213 [2024-07-15 22:35:30.139181] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:06.470 [2024-07-15 22:35:30.211261] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:06.470 22:35:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:06.470 22:35:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:19:06.471 22:35:30 nvmf_tcp.nvmf_tls -- target/tls.sh@192 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.rYhEwGqGHy 00:19:06.729 [2024-07-15 22:35:30.448332] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:06.729 [2024-07-15 22:35:30.448409] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:19:06.729 TLSTESTn1 00:19:06.729 22:35:30 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:19:06.988 22:35:30 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # tgtconf='{ 00:19:06.988 "subsystems": [ 00:19:06.988 { 00:19:06.988 "subsystem": "keyring", 00:19:06.988 "config": [] 00:19:06.988 }, 00:19:06.988 { 00:19:06.988 "subsystem": "iobuf", 00:19:06.988 "config": [ 00:19:06.988 { 00:19:06.988 "method": "iobuf_set_options", 00:19:06.988 "params": { 00:19:06.988 "small_pool_count": 8192, 00:19:06.988 "large_pool_count": 1024, 00:19:06.988 "small_bufsize": 8192, 00:19:06.988 "large_bufsize": 135168 00:19:06.988 } 00:19:06.988 } 00:19:06.988 ] 00:19:06.988 }, 00:19:06.988 { 00:19:06.988 "subsystem": "sock", 00:19:06.988 "config": [ 00:19:06.988 { 00:19:06.988 "method": "sock_set_default_impl", 00:19:06.988 "params": { 00:19:06.988 "impl_name": "posix" 00:19:06.988 } 00:19:06.988 }, 00:19:06.988 { 00:19:06.988 "method": "sock_impl_set_options", 00:19:06.988 "params": { 00:19:06.988 "impl_name": "ssl", 00:19:06.988 "recv_buf_size": 4096, 00:19:06.988 "send_buf_size": 4096, 00:19:06.988 "enable_recv_pipe": true, 00:19:06.988 "enable_quickack": false, 00:19:06.988 "enable_placement_id": 0, 00:19:06.988 "enable_zerocopy_send_server": true, 00:19:06.988 "enable_zerocopy_send_client": false, 00:19:06.988 "zerocopy_threshold": 0, 00:19:06.988 "tls_version": 0, 00:19:06.988 "enable_ktls": false 00:19:06.988 } 00:19:06.988 }, 00:19:06.988 { 00:19:06.988 "method": "sock_impl_set_options", 00:19:06.988 "params": { 00:19:06.988 "impl_name": "posix", 00:19:06.988 "recv_buf_size": 2097152, 00:19:06.988 
"send_buf_size": 2097152, 00:19:06.988 "enable_recv_pipe": true, 00:19:06.988 "enable_quickack": false, 00:19:06.988 "enable_placement_id": 0, 00:19:06.988 "enable_zerocopy_send_server": true, 00:19:06.988 "enable_zerocopy_send_client": false, 00:19:06.988 "zerocopy_threshold": 0, 00:19:06.988 "tls_version": 0, 00:19:06.988 "enable_ktls": false 00:19:06.988 } 00:19:06.988 } 00:19:06.988 ] 00:19:06.988 }, 00:19:06.988 { 00:19:06.988 "subsystem": "vmd", 00:19:06.988 "config": [] 00:19:06.988 }, 00:19:06.988 { 00:19:06.988 "subsystem": "accel", 00:19:06.988 "config": [ 00:19:06.988 { 00:19:06.988 "method": "accel_set_options", 00:19:06.988 "params": { 00:19:06.988 "small_cache_size": 128, 00:19:06.988 "large_cache_size": 16, 00:19:06.988 "task_count": 2048, 00:19:06.988 "sequence_count": 2048, 00:19:06.988 "buf_count": 2048 00:19:06.988 } 00:19:06.988 } 00:19:06.988 ] 00:19:06.988 }, 00:19:06.988 { 00:19:06.988 "subsystem": "bdev", 00:19:06.988 "config": [ 00:19:06.988 { 00:19:06.988 "method": "bdev_set_options", 00:19:06.988 "params": { 00:19:06.988 "bdev_io_pool_size": 65535, 00:19:06.988 "bdev_io_cache_size": 256, 00:19:06.988 "bdev_auto_examine": true, 00:19:06.988 "iobuf_small_cache_size": 128, 00:19:06.988 "iobuf_large_cache_size": 16 00:19:06.988 } 00:19:06.988 }, 00:19:06.988 { 00:19:06.988 "method": "bdev_raid_set_options", 00:19:06.988 "params": { 00:19:06.988 "process_window_size_kb": 1024 00:19:06.988 } 00:19:06.988 }, 00:19:06.988 { 00:19:06.988 "method": "bdev_iscsi_set_options", 00:19:06.988 "params": { 00:19:06.988 "timeout_sec": 30 00:19:06.988 } 00:19:06.988 }, 00:19:06.989 { 00:19:06.989 "method": "bdev_nvme_set_options", 00:19:06.989 "params": { 00:19:06.989 "action_on_timeout": "none", 00:19:06.989 "timeout_us": 0, 00:19:06.989 "timeout_admin_us": 0, 00:19:06.989 "keep_alive_timeout_ms": 10000, 00:19:06.989 "arbitration_burst": 0, 00:19:06.989 "low_priority_weight": 0, 00:19:06.989 "medium_priority_weight": 0, 00:19:06.989 "high_priority_weight": 0, 00:19:06.989 "nvme_adminq_poll_period_us": 10000, 00:19:06.989 "nvme_ioq_poll_period_us": 0, 00:19:06.989 "io_queue_requests": 0, 00:19:06.989 "delay_cmd_submit": true, 00:19:06.989 "transport_retry_count": 4, 00:19:06.989 "bdev_retry_count": 3, 00:19:06.989 "transport_ack_timeout": 0, 00:19:06.989 "ctrlr_loss_timeout_sec": 0, 00:19:06.989 "reconnect_delay_sec": 0, 00:19:06.989 "fast_io_fail_timeout_sec": 0, 00:19:06.989 "disable_auto_failback": false, 00:19:06.989 "generate_uuids": false, 00:19:06.989 "transport_tos": 0, 00:19:06.989 "nvme_error_stat": false, 00:19:06.989 "rdma_srq_size": 0, 00:19:06.989 "io_path_stat": false, 00:19:06.989 "allow_accel_sequence": false, 00:19:06.989 "rdma_max_cq_size": 0, 00:19:06.989 "rdma_cm_event_timeout_ms": 0, 00:19:06.989 "dhchap_digests": [ 00:19:06.989 "sha256", 00:19:06.989 "sha384", 00:19:06.989 "sha512" 00:19:06.989 ], 00:19:06.989 "dhchap_dhgroups": [ 00:19:06.989 "null", 00:19:06.989 "ffdhe2048", 00:19:06.989 "ffdhe3072", 00:19:06.989 "ffdhe4096", 00:19:06.989 "ffdhe6144", 00:19:06.989 "ffdhe8192" 00:19:06.989 ] 00:19:06.989 } 00:19:06.989 }, 00:19:06.989 { 00:19:06.989 "method": "bdev_nvme_set_hotplug", 00:19:06.989 "params": { 00:19:06.989 "period_us": 100000, 00:19:06.989 "enable": false 00:19:06.989 } 00:19:06.989 }, 00:19:06.989 { 00:19:06.989 "method": "bdev_malloc_create", 00:19:06.989 "params": { 00:19:06.989 "name": "malloc0", 00:19:06.989 "num_blocks": 8192, 00:19:06.989 "block_size": 4096, 00:19:06.989 "physical_block_size": 4096, 00:19:06.989 "uuid": 
"18aa12fa-6f19-4e72-b43d-c4087417329f", 00:19:06.989 "optimal_io_boundary": 0 00:19:06.989 } 00:19:06.989 }, 00:19:06.989 { 00:19:06.989 "method": "bdev_wait_for_examine" 00:19:06.989 } 00:19:06.989 ] 00:19:06.989 }, 00:19:06.989 { 00:19:06.989 "subsystem": "nbd", 00:19:06.989 "config": [] 00:19:06.989 }, 00:19:06.989 { 00:19:06.989 "subsystem": "scheduler", 00:19:06.989 "config": [ 00:19:06.989 { 00:19:06.989 "method": "framework_set_scheduler", 00:19:06.989 "params": { 00:19:06.989 "name": "static" 00:19:06.989 } 00:19:06.989 } 00:19:06.989 ] 00:19:06.989 }, 00:19:06.989 { 00:19:06.989 "subsystem": "nvmf", 00:19:06.989 "config": [ 00:19:06.989 { 00:19:06.989 "method": "nvmf_set_config", 00:19:06.989 "params": { 00:19:06.989 "discovery_filter": "match_any", 00:19:06.989 "admin_cmd_passthru": { 00:19:06.989 "identify_ctrlr": false 00:19:06.989 } 00:19:06.989 } 00:19:06.989 }, 00:19:06.989 { 00:19:06.989 "method": "nvmf_set_max_subsystems", 00:19:06.989 "params": { 00:19:06.989 "max_subsystems": 1024 00:19:06.989 } 00:19:06.989 }, 00:19:06.989 { 00:19:06.989 "method": "nvmf_set_crdt", 00:19:06.989 "params": { 00:19:06.989 "crdt1": 0, 00:19:06.989 "crdt2": 0, 00:19:06.989 "crdt3": 0 00:19:06.989 } 00:19:06.989 }, 00:19:06.989 { 00:19:06.989 "method": "nvmf_create_transport", 00:19:06.989 "params": { 00:19:06.989 "trtype": "TCP", 00:19:06.989 "max_queue_depth": 128, 00:19:06.989 "max_io_qpairs_per_ctrlr": 127, 00:19:06.989 "in_capsule_data_size": 4096, 00:19:06.989 "max_io_size": 131072, 00:19:06.989 "io_unit_size": 131072, 00:19:06.989 "max_aq_depth": 128, 00:19:06.989 "num_shared_buffers": 511, 00:19:06.989 "buf_cache_size": 4294967295, 00:19:06.989 "dif_insert_or_strip": false, 00:19:06.989 "zcopy": false, 00:19:06.989 "c2h_success": false, 00:19:06.989 "sock_priority": 0, 00:19:06.989 "abort_timeout_sec": 1, 00:19:06.989 "ack_timeout": 0, 00:19:06.989 "data_wr_pool_size": 0 00:19:06.989 } 00:19:06.989 }, 00:19:06.989 { 00:19:06.989 "method": "nvmf_create_subsystem", 00:19:06.989 "params": { 00:19:06.989 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:06.989 "allow_any_host": false, 00:19:06.989 "serial_number": "SPDK00000000000001", 00:19:06.989 "model_number": "SPDK bdev Controller", 00:19:06.989 "max_namespaces": 10, 00:19:06.989 "min_cntlid": 1, 00:19:06.989 "max_cntlid": 65519, 00:19:06.989 "ana_reporting": false 00:19:06.989 } 00:19:06.989 }, 00:19:06.989 { 00:19:06.989 "method": "nvmf_subsystem_add_host", 00:19:06.989 "params": { 00:19:06.989 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:06.989 "host": "nqn.2016-06.io.spdk:host1", 00:19:06.989 "psk": "/tmp/tmp.rYhEwGqGHy" 00:19:06.989 } 00:19:06.989 }, 00:19:06.989 { 00:19:06.989 "method": "nvmf_subsystem_add_ns", 00:19:06.989 "params": { 00:19:06.989 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:06.989 "namespace": { 00:19:06.989 "nsid": 1, 00:19:06.989 "bdev_name": "malloc0", 00:19:06.989 "nguid": "18AA12FA6F194E72B43DC4087417329F", 00:19:06.989 "uuid": "18aa12fa-6f19-4e72-b43d-c4087417329f", 00:19:06.989 "no_auto_visible": false 00:19:06.989 } 00:19:06.989 } 00:19:06.989 }, 00:19:06.989 { 00:19:06.989 "method": "nvmf_subsystem_add_listener", 00:19:06.989 "params": { 00:19:06.989 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:06.989 "listen_address": { 00:19:06.989 "trtype": "TCP", 00:19:06.989 "adrfam": "IPv4", 00:19:06.989 "traddr": "10.0.0.2", 00:19:06.989 "trsvcid": "4420" 00:19:06.989 }, 00:19:06.989 "secure_channel": true 00:19:06.989 } 00:19:06.989 } 00:19:06.989 ] 00:19:06.989 } 00:19:06.989 ] 00:19:06.989 }' 00:19:06.989 22:35:30 
nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:19:07.248 22:35:31 nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # bdevperfconf='{ 00:19:07.248 "subsystems": [ 00:19:07.248 { 00:19:07.248 "subsystem": "keyring", 00:19:07.248 "config": [] 00:19:07.248 }, 00:19:07.248 { 00:19:07.248 "subsystem": "iobuf", 00:19:07.248 "config": [ 00:19:07.248 { 00:19:07.248 "method": "iobuf_set_options", 00:19:07.248 "params": { 00:19:07.248 "small_pool_count": 8192, 00:19:07.248 "large_pool_count": 1024, 00:19:07.248 "small_bufsize": 8192, 00:19:07.248 "large_bufsize": 135168 00:19:07.248 } 00:19:07.248 } 00:19:07.248 ] 00:19:07.248 }, 00:19:07.248 { 00:19:07.248 "subsystem": "sock", 00:19:07.248 "config": [ 00:19:07.248 { 00:19:07.248 "method": "sock_set_default_impl", 00:19:07.248 "params": { 00:19:07.248 "impl_name": "posix" 00:19:07.248 } 00:19:07.248 }, 00:19:07.248 { 00:19:07.248 "method": "sock_impl_set_options", 00:19:07.248 "params": { 00:19:07.248 "impl_name": "ssl", 00:19:07.248 "recv_buf_size": 4096, 00:19:07.248 "send_buf_size": 4096, 00:19:07.248 "enable_recv_pipe": true, 00:19:07.248 "enable_quickack": false, 00:19:07.248 "enable_placement_id": 0, 00:19:07.248 "enable_zerocopy_send_server": true, 00:19:07.248 "enable_zerocopy_send_client": false, 00:19:07.248 "zerocopy_threshold": 0, 00:19:07.248 "tls_version": 0, 00:19:07.248 "enable_ktls": false 00:19:07.248 } 00:19:07.248 }, 00:19:07.248 { 00:19:07.248 "method": "sock_impl_set_options", 00:19:07.248 "params": { 00:19:07.248 "impl_name": "posix", 00:19:07.248 "recv_buf_size": 2097152, 00:19:07.248 "send_buf_size": 2097152, 00:19:07.248 "enable_recv_pipe": true, 00:19:07.248 "enable_quickack": false, 00:19:07.248 "enable_placement_id": 0, 00:19:07.248 "enable_zerocopy_send_server": true, 00:19:07.248 "enable_zerocopy_send_client": false, 00:19:07.248 "zerocopy_threshold": 0, 00:19:07.248 "tls_version": 0, 00:19:07.248 "enable_ktls": false 00:19:07.248 } 00:19:07.248 } 00:19:07.248 ] 00:19:07.248 }, 00:19:07.248 { 00:19:07.248 "subsystem": "vmd", 00:19:07.248 "config": [] 00:19:07.248 }, 00:19:07.248 { 00:19:07.248 "subsystem": "accel", 00:19:07.248 "config": [ 00:19:07.248 { 00:19:07.248 "method": "accel_set_options", 00:19:07.248 "params": { 00:19:07.248 "small_cache_size": 128, 00:19:07.248 "large_cache_size": 16, 00:19:07.248 "task_count": 2048, 00:19:07.248 "sequence_count": 2048, 00:19:07.248 "buf_count": 2048 00:19:07.248 } 00:19:07.248 } 00:19:07.248 ] 00:19:07.248 }, 00:19:07.248 { 00:19:07.248 "subsystem": "bdev", 00:19:07.248 "config": [ 00:19:07.248 { 00:19:07.248 "method": "bdev_set_options", 00:19:07.248 "params": { 00:19:07.248 "bdev_io_pool_size": 65535, 00:19:07.248 "bdev_io_cache_size": 256, 00:19:07.248 "bdev_auto_examine": true, 00:19:07.248 "iobuf_small_cache_size": 128, 00:19:07.248 "iobuf_large_cache_size": 16 00:19:07.248 } 00:19:07.248 }, 00:19:07.248 { 00:19:07.248 "method": "bdev_raid_set_options", 00:19:07.248 "params": { 00:19:07.248 "process_window_size_kb": 1024 00:19:07.248 } 00:19:07.248 }, 00:19:07.248 { 00:19:07.248 "method": "bdev_iscsi_set_options", 00:19:07.248 "params": { 00:19:07.248 "timeout_sec": 30 00:19:07.248 } 00:19:07.248 }, 00:19:07.248 { 00:19:07.248 "method": "bdev_nvme_set_options", 00:19:07.248 "params": { 00:19:07.248 "action_on_timeout": "none", 00:19:07.248 "timeout_us": 0, 00:19:07.248 "timeout_admin_us": 0, 00:19:07.248 "keep_alive_timeout_ms": 10000, 00:19:07.248 "arbitration_burst": 0, 
00:19:07.248 "low_priority_weight": 0, 00:19:07.248 "medium_priority_weight": 0, 00:19:07.248 "high_priority_weight": 0, 00:19:07.248 "nvme_adminq_poll_period_us": 10000, 00:19:07.248 "nvme_ioq_poll_period_us": 0, 00:19:07.248 "io_queue_requests": 512, 00:19:07.248 "delay_cmd_submit": true, 00:19:07.248 "transport_retry_count": 4, 00:19:07.248 "bdev_retry_count": 3, 00:19:07.248 "transport_ack_timeout": 0, 00:19:07.248 "ctrlr_loss_timeout_sec": 0, 00:19:07.248 "reconnect_delay_sec": 0, 00:19:07.248 "fast_io_fail_timeout_sec": 0, 00:19:07.248 "disable_auto_failback": false, 00:19:07.248 "generate_uuids": false, 00:19:07.248 "transport_tos": 0, 00:19:07.248 "nvme_error_stat": false, 00:19:07.248 "rdma_srq_size": 0, 00:19:07.248 "io_path_stat": false, 00:19:07.248 "allow_accel_sequence": false, 00:19:07.248 "rdma_max_cq_size": 0, 00:19:07.248 "rdma_cm_event_timeout_ms": 0, 00:19:07.248 "dhchap_digests": [ 00:19:07.248 "sha256", 00:19:07.248 "sha384", 00:19:07.248 "sha512" 00:19:07.248 ], 00:19:07.248 "dhchap_dhgroups": [ 00:19:07.248 "null", 00:19:07.248 "ffdhe2048", 00:19:07.248 "ffdhe3072", 00:19:07.248 "ffdhe4096", 00:19:07.248 "ffdhe6144", 00:19:07.248 "ffdhe8192" 00:19:07.248 ] 00:19:07.248 } 00:19:07.248 }, 00:19:07.248 { 00:19:07.248 "method": "bdev_nvme_attach_controller", 00:19:07.248 "params": { 00:19:07.248 "name": "TLSTEST", 00:19:07.248 "trtype": "TCP", 00:19:07.248 "adrfam": "IPv4", 00:19:07.248 "traddr": "10.0.0.2", 00:19:07.248 "trsvcid": "4420", 00:19:07.248 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:07.248 "prchk_reftag": false, 00:19:07.248 "prchk_guard": false, 00:19:07.248 "ctrlr_loss_timeout_sec": 0, 00:19:07.248 "reconnect_delay_sec": 0, 00:19:07.248 "fast_io_fail_timeout_sec": 0, 00:19:07.248 "psk": "/tmp/tmp.rYhEwGqGHy", 00:19:07.248 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:07.248 "hdgst": false, 00:19:07.248 "ddgst": false 00:19:07.248 } 00:19:07.248 }, 00:19:07.248 { 00:19:07.248 "method": "bdev_nvme_set_hotplug", 00:19:07.248 "params": { 00:19:07.248 "period_us": 100000, 00:19:07.248 "enable": false 00:19:07.248 } 00:19:07.248 }, 00:19:07.248 { 00:19:07.248 "method": "bdev_wait_for_examine" 00:19:07.248 } 00:19:07.248 ] 00:19:07.248 }, 00:19:07.248 { 00:19:07.248 "subsystem": "nbd", 00:19:07.248 "config": [] 00:19:07.248 } 00:19:07.248 ] 00:19:07.248 }' 00:19:07.248 22:35:31 nvmf_tcp.nvmf_tls -- target/tls.sh@199 -- # killprocess 35384 00:19:07.248 22:35:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 35384 ']' 00:19:07.248 22:35:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 35384 00:19:07.248 22:35:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:19:07.248 22:35:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:07.248 22:35:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 35384 00:19:07.248 22:35:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:19:07.248 22:35:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:19:07.248 22:35:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 35384' 00:19:07.248 killing process with pid 35384 00:19:07.248 22:35:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 35384 00:19:07.248 Received shutdown signal, test time was about 10.000000 seconds 00:19:07.248 00:19:07.248 Latency(us) 00:19:07.248 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:19:07.248 =================================================================================================================== 00:19:07.248 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:07.248 [2024-07-15 22:35:31.061629] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:19:07.248 22:35:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 35384 00:19:07.508 22:35:31 nvmf_tcp.nvmf_tls -- target/tls.sh@200 -- # killprocess 34935 00:19:07.508 22:35:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 34935 ']' 00:19:07.508 22:35:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 34935 00:19:07.508 22:35:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:19:07.508 22:35:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:07.508 22:35:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 34935 00:19:07.508 22:35:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:19:07.508 22:35:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:19:07.508 22:35:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 34935' 00:19:07.508 killing process with pid 34935 00:19:07.508 22:35:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 34935 00:19:07.508 [2024-07-15 22:35:31.285482] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:19:07.508 22:35:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 34935 00:19:07.508 22:35:31 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:19:07.508 22:35:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:07.508 22:35:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:07.508 22:35:31 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # echo '{ 00:19:07.508 "subsystems": [ 00:19:07.508 { 00:19:07.508 "subsystem": "keyring", 00:19:07.508 "config": [] 00:19:07.508 }, 00:19:07.508 { 00:19:07.508 "subsystem": "iobuf", 00:19:07.508 "config": [ 00:19:07.508 { 00:19:07.508 "method": "iobuf_set_options", 00:19:07.508 "params": { 00:19:07.508 "small_pool_count": 8192, 00:19:07.508 "large_pool_count": 1024, 00:19:07.508 "small_bufsize": 8192, 00:19:07.508 "large_bufsize": 135168 00:19:07.508 } 00:19:07.508 } 00:19:07.508 ] 00:19:07.508 }, 00:19:07.508 { 00:19:07.508 "subsystem": "sock", 00:19:07.508 "config": [ 00:19:07.508 { 00:19:07.508 "method": "sock_set_default_impl", 00:19:07.508 "params": { 00:19:07.508 "impl_name": "posix" 00:19:07.508 } 00:19:07.508 }, 00:19:07.508 { 00:19:07.508 "method": "sock_impl_set_options", 00:19:07.508 "params": { 00:19:07.508 "impl_name": "ssl", 00:19:07.508 "recv_buf_size": 4096, 00:19:07.508 "send_buf_size": 4096, 00:19:07.508 "enable_recv_pipe": true, 00:19:07.508 "enable_quickack": false, 00:19:07.508 "enable_placement_id": 0, 00:19:07.508 "enable_zerocopy_send_server": true, 00:19:07.508 "enable_zerocopy_send_client": false, 00:19:07.508 "zerocopy_threshold": 0, 00:19:07.508 "tls_version": 0, 00:19:07.508 "enable_ktls": false 00:19:07.508 } 00:19:07.508 }, 00:19:07.508 { 00:19:07.508 "method": "sock_impl_set_options", 00:19:07.508 "params": { 00:19:07.508 "impl_name": "posix", 00:19:07.508 "recv_buf_size": 2097152, 
00:19:07.508 "send_buf_size": 2097152, 00:19:07.508 "enable_recv_pipe": true, 00:19:07.508 "enable_quickack": false, 00:19:07.508 "enable_placement_id": 0, 00:19:07.508 "enable_zerocopy_send_server": true, 00:19:07.508 "enable_zerocopy_send_client": false, 00:19:07.508 "zerocopy_threshold": 0, 00:19:07.508 "tls_version": 0, 00:19:07.508 "enable_ktls": false 00:19:07.508 } 00:19:07.508 } 00:19:07.508 ] 00:19:07.508 }, 00:19:07.508 { 00:19:07.508 "subsystem": "vmd", 00:19:07.508 "config": [] 00:19:07.508 }, 00:19:07.508 { 00:19:07.508 "subsystem": "accel", 00:19:07.508 "config": [ 00:19:07.508 { 00:19:07.508 "method": "accel_set_options", 00:19:07.508 "params": { 00:19:07.508 "small_cache_size": 128, 00:19:07.508 "large_cache_size": 16, 00:19:07.508 "task_count": 2048, 00:19:07.508 "sequence_count": 2048, 00:19:07.508 "buf_count": 2048 00:19:07.508 } 00:19:07.508 } 00:19:07.508 ] 00:19:07.508 }, 00:19:07.508 { 00:19:07.508 "subsystem": "bdev", 00:19:07.508 "config": [ 00:19:07.508 { 00:19:07.508 "method": "bdev_set_options", 00:19:07.508 "params": { 00:19:07.508 "bdev_io_pool_size": 65535, 00:19:07.508 "bdev_io_cache_size": 256, 00:19:07.508 "bdev_auto_examine": true, 00:19:07.508 "iobuf_small_cache_size": 128, 00:19:07.508 "iobuf_large_cache_size": 16 00:19:07.508 } 00:19:07.508 }, 00:19:07.508 { 00:19:07.508 "method": "bdev_raid_set_options", 00:19:07.508 "params": { 00:19:07.508 "process_window_size_kb": 1024 00:19:07.508 } 00:19:07.508 }, 00:19:07.508 { 00:19:07.508 "method": "bdev_iscsi_set_options", 00:19:07.508 "params": { 00:19:07.508 "timeout_sec": 30 00:19:07.508 } 00:19:07.508 }, 00:19:07.508 { 00:19:07.508 "method": "bdev_nvme_set_options", 00:19:07.508 "params": { 00:19:07.508 "action_on_timeout": "none", 00:19:07.508 "timeout_us": 0, 00:19:07.508 "timeout_admin_us": 0, 00:19:07.508 "keep_alive_timeout_ms": 10000, 00:19:07.508 "arbitration_burst": 0, 00:19:07.508 "low_priority_weight": 0, 00:19:07.508 "medium_priority_weight": 0, 00:19:07.508 "high_priority_weight": 0, 00:19:07.508 "nvme_adminq_poll_period_us": 10000, 00:19:07.508 "nvme_ioq_poll_period_us": 0, 00:19:07.508 "io_queue_requests": 0, 00:19:07.508 "delay_cmd_submit": true, 00:19:07.508 "transport_retry_count": 4, 00:19:07.508 "bdev_retry_count": 3, 00:19:07.508 "transport_ack_timeout": 0, 00:19:07.508 "ctrlr_loss_timeout_sec": 0, 00:19:07.508 "reconnect_delay_sec": 0, 00:19:07.508 "fast_io_fail_timeout_sec": 0, 00:19:07.508 "disable_auto_failback": false, 00:19:07.508 "generate_uuids": false, 00:19:07.508 "transport_tos": 0, 00:19:07.508 "nvme_error_stat": false, 00:19:07.508 "rdma_srq_size": 0, 00:19:07.508 "io_path_stat": false, 00:19:07.508 "allow_accel_sequence": false, 00:19:07.508 "rdma_max_cq_size": 0, 00:19:07.508 "rdma_cm_event_timeout_ms": 0, 00:19:07.508 "dhchap_digests": [ 00:19:07.508 "sha256", 00:19:07.508 "sha384", 00:19:07.508 "sha512" 00:19:07.508 ], 00:19:07.508 "dhchap_dhgroups": [ 00:19:07.508 "null", 00:19:07.508 "ffdhe2048", 00:19:07.508 "ffdhe3072", 00:19:07.508 "ffdhe4096", 00:19:07.508 "ffdhe6144", 00:19:07.508 "ffdhe8192" 00:19:07.508 ] 00:19:07.508 } 00:19:07.508 }, 00:19:07.508 { 00:19:07.508 "method": "bdev_nvme_set_hotplug", 00:19:07.508 "params": { 00:19:07.508 "period_us": 100000, 00:19:07.509 "enable": false 00:19:07.509 } 00:19:07.509 }, 00:19:07.509 { 00:19:07.509 "method": "bdev_malloc_create", 00:19:07.509 "params": { 00:19:07.509 "name": "malloc0", 00:19:07.509 "num_blocks": 8192, 00:19:07.509 "block_size": 4096, 00:19:07.509 "physical_block_size": 4096, 00:19:07.509 "uuid": 
"18aa12fa-6f19-4e72-b43d-c4087417329f", 00:19:07.509 "optimal_io_boundary": 0 00:19:07.509 } 00:19:07.509 }, 00:19:07.509 { 00:19:07.509 "method": "bdev_wait_for_examine" 00:19:07.509 } 00:19:07.509 ] 00:19:07.509 }, 00:19:07.509 { 00:19:07.509 "subsystem": "nbd", 00:19:07.509 "config": [] 00:19:07.509 }, 00:19:07.509 { 00:19:07.509 "subsystem": "scheduler", 00:19:07.509 "config": [ 00:19:07.509 { 00:19:07.509 "method": "framework_set_scheduler", 00:19:07.509 "params": { 00:19:07.509 "name": "static" 00:19:07.509 } 00:19:07.509 } 00:19:07.509 ] 00:19:07.509 }, 00:19:07.509 { 00:19:07.509 "subsystem": "nvmf", 00:19:07.509 "config": [ 00:19:07.509 { 00:19:07.509 "method": "nvmf_set_config", 00:19:07.509 "params": { 00:19:07.509 "discovery_filter": "match_any", 00:19:07.509 "admin_cmd_passthru": { 00:19:07.509 "identify_ctrlr": false 00:19:07.509 } 00:19:07.509 } 00:19:07.509 }, 00:19:07.509 { 00:19:07.509 "method": "nvmf_set_max_subsystems", 00:19:07.509 "params": { 00:19:07.509 "max_subsystems": 1024 00:19:07.509 } 00:19:07.509 }, 00:19:07.509 { 00:19:07.509 "method": "nvmf_set_crdt", 00:19:07.509 "params": { 00:19:07.509 "crdt1": 0, 00:19:07.509 "crdt2": 0, 00:19:07.509 "crdt3": 0 00:19:07.509 } 00:19:07.509 }, 00:19:07.509 { 00:19:07.509 "method": "nvmf_create_transport", 00:19:07.509 "params": { 00:19:07.509 "trtype": "TCP", 00:19:07.509 "max_queue_depth": 128, 00:19:07.509 "max_io_qpairs_per_ctrlr": 127, 00:19:07.509 "in_capsule_data_size": 4096, 00:19:07.509 "max_io_size": 131072, 00:19:07.509 "io_unit_size": 131072, 00:19:07.509 "max_aq_depth": 128, 00:19:07.509 "num_shared_buffers": 511, 00:19:07.509 "buf_cache_size": 4294967295, 00:19:07.509 "dif_insert_or_strip": false, 00:19:07.509 "zcopy": false, 00:19:07.509 "c2h_success": false, 00:19:07.509 "sock_priority": 0, 00:19:07.509 "abort_timeout_sec": 1, 00:19:07.509 "ack_timeout": 0, 00:19:07.509 "data_wr_pool_size": 0 00:19:07.509 } 00:19:07.509 }, 00:19:07.509 { 00:19:07.509 "method": "nvmf_create_subsystem", 00:19:07.509 "params": { 00:19:07.509 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:07.509 "allow_any_host": false, 00:19:07.509 "serial_number": "SPDK00000000000001", 00:19:07.509 "model_number": "SPDK bdev Controller", 00:19:07.509 "max_namespaces": 10, 00:19:07.509 "min_cntlid": 1, 00:19:07.509 "max_cntlid": 65519, 00:19:07.509 "ana_reporting": false 00:19:07.509 } 00:19:07.509 }, 00:19:07.509 { 00:19:07.509 "method": "nvmf_subsystem_add_host", 00:19:07.509 "params": { 00:19:07.509 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:07.509 "host": "nqn.2016-06.io.spdk:host1", 00:19:07.509 "psk": "/tmp/tmp.rYhEwGqGHy" 00:19:07.509 } 00:19:07.509 }, 00:19:07.509 { 00:19:07.509 "method": "nvmf_subsystem_add_ns", 00:19:07.509 "params": { 00:19:07.509 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:07.509 "namespace": { 00:19:07.509 "nsid": 1, 00:19:07.509 "bdev_name": "malloc0", 00:19:07.509 "nguid": "18AA12FA6F194E72B43DC4087417329F", 00:19:07.509 "uuid": "18aa12fa-6f19-4e72-b43d-c4087417329f", 00:19:07.509 "no_auto_visible": false 00:19:07.509 } 00:19:07.509 } 00:19:07.509 }, 00:19:07.509 { 00:19:07.509 "method": "nvmf_subsystem_add_listener", 00:19:07.509 "params": { 00:19:07.509 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:07.509 "listen_address": { 00:19:07.509 "trtype": "TCP", 00:19:07.509 "adrfam": "IPv4", 00:19:07.509 "traddr": "10.0.0.2", 00:19:07.509 "trsvcid": "4420" 00:19:07.509 }, 00:19:07.509 "secure_channel": true 00:19:07.509 } 00:19:07.509 } 00:19:07.509 ] 00:19:07.509 } 00:19:07.509 ] 00:19:07.509 }' 00:19:07.509 22:35:31 
nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:07.768 22:35:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=35644 00:19:07.768 22:35:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 35644 00:19:07.768 22:35:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:19:07.768 22:35:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 35644 ']' 00:19:07.768 22:35:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:07.768 22:35:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:07.768 22:35:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:07.768 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:07.768 22:35:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:07.768 22:35:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:07.768 [2024-07-15 22:35:31.532882] Starting SPDK v24.09-pre git sha1 f8598a71f / DPDK 24.03.0 initialization... 00:19:07.768 [2024-07-15 22:35:31.532928] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:07.768 EAL: No free 2048 kB hugepages reported on node 1 00:19:07.768 [2024-07-15 22:35:31.589530] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:07.768 [2024-07-15 22:35:31.659541] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:07.768 [2024-07-15 22:35:31.659581] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:07.768 [2024-07-15 22:35:31.659588] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:07.768 [2024-07-15 22:35:31.659594] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:07.768 [2024-07-15 22:35:31.659599] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:07.768 [2024-07-15 22:35:31.659654] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:08.027 [2024-07-15 22:35:31.862575] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:08.027 [2024-07-15 22:35:31.878549] tcp.c:3693:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:19:08.027 [2024-07-15 22:35:31.894604] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:08.027 [2024-07-15 22:35:31.905566] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:08.595 22:35:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:08.595 22:35:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:19:08.595 22:35:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:08.595 22:35:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:08.595 22:35:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:08.595 22:35:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:08.595 22:35:32 nvmf_tcp.nvmf_tls -- target/tls.sh@207 -- # bdevperf_pid=35685 00:19:08.595 22:35:32 nvmf_tcp.nvmf_tls -- target/tls.sh@208 -- # waitforlisten 35685 /var/tmp/bdevperf.sock 00:19:08.595 22:35:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 35685 ']' 00:19:08.595 22:35:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:08.595 22:35:32 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:19:08.595 22:35:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:08.595 22:35:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:08.595 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:19:08.595 22:35:32 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # echo '{ 00:19:08.595 "subsystems": [ 00:19:08.595 { 00:19:08.595 "subsystem": "keyring", 00:19:08.595 "config": [] 00:19:08.595 }, 00:19:08.595 { 00:19:08.595 "subsystem": "iobuf", 00:19:08.595 "config": [ 00:19:08.595 { 00:19:08.595 "method": "iobuf_set_options", 00:19:08.595 "params": { 00:19:08.595 "small_pool_count": 8192, 00:19:08.595 "large_pool_count": 1024, 00:19:08.595 "small_bufsize": 8192, 00:19:08.595 "large_bufsize": 135168 00:19:08.595 } 00:19:08.595 } 00:19:08.595 ] 00:19:08.595 }, 00:19:08.595 { 00:19:08.595 "subsystem": "sock", 00:19:08.595 "config": [ 00:19:08.595 { 00:19:08.595 "method": "sock_set_default_impl", 00:19:08.595 "params": { 00:19:08.595 "impl_name": "posix" 00:19:08.595 } 00:19:08.595 }, 00:19:08.595 { 00:19:08.595 "method": "sock_impl_set_options", 00:19:08.595 "params": { 00:19:08.595 "impl_name": "ssl", 00:19:08.595 "recv_buf_size": 4096, 00:19:08.595 "send_buf_size": 4096, 00:19:08.595 "enable_recv_pipe": true, 00:19:08.595 "enable_quickack": false, 00:19:08.595 "enable_placement_id": 0, 00:19:08.595 "enable_zerocopy_send_server": true, 00:19:08.595 "enable_zerocopy_send_client": false, 00:19:08.595 "zerocopy_threshold": 0, 00:19:08.595 "tls_version": 0, 00:19:08.595 "enable_ktls": false 00:19:08.595 } 00:19:08.595 }, 00:19:08.595 { 00:19:08.595 "method": "sock_impl_set_options", 00:19:08.595 "params": { 00:19:08.595 "impl_name": "posix", 00:19:08.595 "recv_buf_size": 2097152, 00:19:08.595 "send_buf_size": 2097152, 00:19:08.595 "enable_recv_pipe": true, 00:19:08.595 "enable_quickack": false, 00:19:08.595 "enable_placement_id": 0, 00:19:08.595 "enable_zerocopy_send_server": true, 00:19:08.595 "enable_zerocopy_send_client": false, 00:19:08.595 "zerocopy_threshold": 0, 00:19:08.595 "tls_version": 0, 00:19:08.595 "enable_ktls": false 00:19:08.595 } 00:19:08.595 } 00:19:08.595 ] 00:19:08.595 }, 00:19:08.595 { 00:19:08.595 "subsystem": "vmd", 00:19:08.595 "config": [] 00:19:08.595 }, 00:19:08.595 { 00:19:08.595 "subsystem": "accel", 00:19:08.595 "config": [ 00:19:08.595 { 00:19:08.595 "method": "accel_set_options", 00:19:08.595 "params": { 00:19:08.595 "small_cache_size": 128, 00:19:08.595 "large_cache_size": 16, 00:19:08.595 "task_count": 2048, 00:19:08.595 "sequence_count": 2048, 00:19:08.595 "buf_count": 2048 00:19:08.595 } 00:19:08.595 } 00:19:08.595 ] 00:19:08.595 }, 00:19:08.595 { 00:19:08.595 "subsystem": "bdev", 00:19:08.595 "config": [ 00:19:08.595 { 00:19:08.595 "method": "bdev_set_options", 00:19:08.595 "params": { 00:19:08.595 "bdev_io_pool_size": 65535, 00:19:08.595 "bdev_io_cache_size": 256, 00:19:08.595 "bdev_auto_examine": true, 00:19:08.595 "iobuf_small_cache_size": 128, 00:19:08.595 "iobuf_large_cache_size": 16 00:19:08.595 } 00:19:08.595 }, 00:19:08.595 { 00:19:08.595 "method": "bdev_raid_set_options", 00:19:08.595 "params": { 00:19:08.595 "process_window_size_kb": 1024 00:19:08.595 } 00:19:08.595 }, 00:19:08.595 { 00:19:08.595 "method": "bdev_iscsi_set_options", 00:19:08.595 "params": { 00:19:08.595 "timeout_sec": 30 00:19:08.595 } 00:19:08.595 }, 00:19:08.595 { 00:19:08.595 "method": "bdev_nvme_set_options", 00:19:08.595 "params": { 00:19:08.595 "action_on_timeout": "none", 00:19:08.595 "timeout_us": 0, 00:19:08.595 "timeout_admin_us": 0, 00:19:08.595 "keep_alive_timeout_ms": 10000, 00:19:08.595 "arbitration_burst": 0, 00:19:08.595 "low_priority_weight": 0, 00:19:08.595 "medium_priority_weight": 0, 00:19:08.595 "high_priority_weight": 0, 00:19:08.595 
"nvme_adminq_poll_period_us": 10000, 00:19:08.595 "nvme_ioq_poll_period_us": 0, 00:19:08.595 "io_queue_requests": 512, 00:19:08.595 "delay_cmd_submit": true, 00:19:08.595 "transport_retry_count": 4, 00:19:08.595 "bdev_retry_count": 3, 00:19:08.595 "transport_ack_timeout": 0, 00:19:08.595 "ctrlr_loss_timeout_sec": 0, 00:19:08.595 "reconnect_delay_sec": 0, 00:19:08.595 "fast_io_fail_timeout_sec": 0, 00:19:08.595 "disable_auto_failback": false, 00:19:08.595 "generate_uuids": false, 00:19:08.595 "transport_tos": 0, 00:19:08.595 "nvme_error_stat": false, 00:19:08.595 "rdma_srq_size": 0, 00:19:08.595 "io_path_stat": false, 00:19:08.595 "allow_accel_sequence": false, 00:19:08.595 "rdma_max_cq_size": 0, 00:19:08.595 "rdma_cm_event_timeout_ms": 0, 00:19:08.595 "dhchap_digests": [ 00:19:08.595 "sha256", 00:19:08.595 "sha384", 00:19:08.595 "sha512" 00:19:08.595 ], 00:19:08.595 "dhchap_dhgroups": [ 00:19:08.595 "null", 00:19:08.595 "ffdhe2048", 00:19:08.595 "ffdhe3072", 00:19:08.595 "ffdhe4096", 00:19:08.595 "ffdhe6144", 00:19:08.595 "ffdhe8192" 00:19:08.595 ] 00:19:08.595 } 00:19:08.595 }, 00:19:08.595 { 00:19:08.595 "method": "bdev_nvme_attach_controller", 00:19:08.595 "params": { 00:19:08.595 "name": "TLSTEST", 00:19:08.595 "trtype": "TCP", 00:19:08.595 "adrfam": "IPv4", 00:19:08.595 "traddr": "10.0.0.2", 00:19:08.595 "trsvcid": "4420", 00:19:08.595 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:08.595 "prchk_reftag": false, 00:19:08.595 "prchk_guard": false, 00:19:08.595 "ctrlr_loss_timeout_sec": 0, 00:19:08.595 "reconnect_delay_sec": 0, 00:19:08.595 "fast_io_fail_timeout_sec": 0, 00:19:08.595 "psk": "/tmp/tmp.rYhEwGqGHy", 00:19:08.595 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:08.595 "hdgst": false, 00:19:08.595 "ddgst": false 00:19:08.595 } 00:19:08.595 }, 00:19:08.595 { 00:19:08.595 "method": "bdev_nvme_set_hotplug", 00:19:08.595 "params": { 00:19:08.595 "period_us": 100000, 00:19:08.595 "enable": false 00:19:08.595 } 00:19:08.595 }, 00:19:08.595 { 00:19:08.595 "method": "bdev_wait_for_examine" 00:19:08.595 } 00:19:08.595 ] 00:19:08.595 }, 00:19:08.595 { 00:19:08.595 "subsystem": "nbd", 00:19:08.595 "config": [] 00:19:08.595 } 00:19:08.595 ] 00:19:08.595 }' 00:19:08.595 22:35:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:08.595 22:35:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:08.595 [2024-07-15 22:35:32.403900] Starting SPDK v24.09-pre git sha1 f8598a71f / DPDK 24.03.0 initialization... 
00:19:08.596 [2024-07-15 22:35:32.403945] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid35685 ] 00:19:08.596 EAL: No free 2048 kB hugepages reported on node 1 00:19:08.596 [2024-07-15 22:35:32.452491] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:08.596 [2024-07-15 22:35:32.527763] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:08.854 [2024-07-15 22:35:32.670691] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:08.854 [2024-07-15 22:35:32.670770] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:19:09.422 22:35:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:09.422 22:35:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:19:09.422 22:35:33 nvmf_tcp.nvmf_tls -- target/tls.sh@211 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:19:09.422 Running I/O for 10 seconds... 00:19:19.473 00:19:19.473 Latency(us) 00:19:19.473 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:19.473 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:19.473 Verification LBA range: start 0x0 length 0x2000 00:19:19.473 TLSTESTn1 : 10.03 2597.36 10.15 0.00 0.00 49195.18 6895.53 75223.93 00:19:19.473 =================================================================================================================== 00:19:19.473 Total : 2597.36 10.15 0.00 0.00 49195.18 6895.53 75223.93 00:19:19.473 0 00:19:19.473 22:35:43 nvmf_tcp.nvmf_tls -- target/tls.sh@213 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:19.473 22:35:43 nvmf_tcp.nvmf_tls -- target/tls.sh@214 -- # killprocess 35685 00:19:19.473 22:35:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 35685 ']' 00:19:19.473 22:35:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 35685 00:19:19.473 22:35:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:19:19.473 22:35:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:19.473 22:35:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 35685 00:19:19.473 22:35:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:19:19.473 22:35:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:19:19.473 22:35:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 35685' 00:19:19.473 killing process with pid 35685 00:19:19.473 22:35:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 35685 00:19:19.473 Received shutdown signal, test time was about 10.000000 seconds 00:19:19.473 00:19:19.473 Latency(us) 00:19:19.473 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:19.473 =================================================================================================================== 00:19:19.473 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:19.473 [2024-07-15 22:35:43.388732] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled 
for removal in v24.09 hit 1 times 00:19:19.473 22:35:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 35685 00:19:19.731 22:35:43 nvmf_tcp.nvmf_tls -- target/tls.sh@215 -- # killprocess 35644 00:19:19.731 22:35:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 35644 ']' 00:19:19.731 22:35:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 35644 00:19:19.731 22:35:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:19:19.731 22:35:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:19.731 22:35:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 35644 00:19:19.731 22:35:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:19:19.731 22:35:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:19:19.732 22:35:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 35644' 00:19:19.732 killing process with pid 35644 00:19:19.732 22:35:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 35644 00:19:19.732 [2024-07-15 22:35:43.617458] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:19:19.732 22:35:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 35644 00:19:19.990 22:35:43 nvmf_tcp.nvmf_tls -- target/tls.sh@218 -- # nvmfappstart 00:19:19.990 22:35:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:19.990 22:35:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:19.990 22:35:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:19.990 22:35:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=37540 00:19:19.990 22:35:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 37540 00:19:19.990 22:35:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:19:19.990 22:35:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 37540 ']' 00:19:19.990 22:35:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:19.990 22:35:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:19.990 22:35:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:19.990 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:19.990 22:35:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:19.990 22:35:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:19.990 [2024-07-15 22:35:43.864988] Starting SPDK v24.09-pre git sha1 f8598a71f / DPDK 24.03.0 initialization... 00:19:19.990 [2024-07-15 22:35:43.865045] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:19.990 EAL: No free 2048 kB hugepages reported on node 1 00:19:19.990 [2024-07-15 22:35:43.923572] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:20.249 [2024-07-15 22:35:44.000758] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
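The killprocess chatter above (kill -0, uname, ps --no-headers -o comm=) is common/autotest_common.sh tearing down bdevperf and the first nvmf target. Reconstructed in outline from the xtrace alone, so treat this as a sketch rather than the helper's full logic:

  # condensed killprocess: refuse empty pids, make sure the process is
  # still alive, peek at its comm name (reactor_0/reactor_1/... here),
  # then send the default SIGTERM
  killprocess() {
      local pid=$1
      [ -n "$pid" ] || return 1
      kill -0 "$pid" || return 0          # nothing left to kill
      if [ "$(uname)" = Linux ]; then
          local process_name
          process_name=$(ps --no-headers -o comm= "$pid")
          # (the real helper special-cases process_name = sudo; elided here)
      fi
      echo "killing process with pid $pid"
      kill "$pid"
  }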
00:19:20.249 [2024-07-15 22:35:44.000797] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:20.249 [2024-07-15 22:35:44.000804] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:20.249 [2024-07-15 22:35:44.000810] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:20.249 [2024-07-15 22:35:44.000816] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:20.249 [2024-07-15 22:35:44.000834] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:20.816 22:35:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:20.816 22:35:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:19:20.816 22:35:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:20.816 22:35:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:20.816 22:35:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:20.816 22:35:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:20.816 22:35:44 nvmf_tcp.nvmf_tls -- target/tls.sh@219 -- # setup_nvmf_tgt /tmp/tmp.rYhEwGqGHy 00:19:20.816 22:35:44 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.rYhEwGqGHy 00:19:20.816 22:35:44 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:21.075 [2024-07-15 22:35:44.856942] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:21.075 22:35:44 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:21.334 22:35:45 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:21.334 [2024-07-15 22:35:45.201822] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:21.334 [2024-07-15 22:35:45.202021] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:21.334 22:35:45 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:21.593 malloc0 00:19:21.594 22:35:45 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:21.853 22:35:45 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.rYhEwGqGHy 00:19:21.853 [2024-07-15 22:35:45.735587] tcp.c:3693:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:19:21.853 22:35:45 nvmf_tcp.nvmf_tls -- target/tls.sh@222 -- # bdevperf_pid=38014 00:19:21.853 22:35:45 nvmf_tcp.nvmf_tls -- target/tls.sh@220 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:19:21.853 22:35:45 nvmf_tcp.nvmf_tls -- target/tls.sh@224 -- # trap 'cleanup; exit 1' 
SIGINT SIGTERM EXIT 00:19:21.853 22:35:45 nvmf_tcp.nvmf_tls -- target/tls.sh@225 -- # waitforlisten 38014 /var/tmp/bdevperf.sock 00:19:21.853 22:35:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 38014 ']' 00:19:21.853 22:35:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:21.853 22:35:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:21.853 22:35:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:21.853 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:21.853 22:35:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:21.853 22:35:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:21.853 [2024-07-15 22:35:45.801211] Starting SPDK v24.09-pre git sha1 f8598a71f / DPDK 24.03.0 initialization... 00:19:21.853 [2024-07-15 22:35:45.801263] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid38014 ] 00:19:21.853 EAL: No free 2048 kB hugepages reported on node 1 00:19:22.112 [2024-07-15 22:35:45.855378] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:22.112 [2024-07-15 22:35:45.932101] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:22.681 22:35:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:22.681 22:35:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:19:22.681 22:35:46 nvmf_tcp.nvmf_tls -- target/tls.sh@227 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.rYhEwGqGHy 00:19:22.941 22:35:46 nvmf_tcp.nvmf_tls -- target/tls.sh@228 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:19:23.200 [2024-07-15 22:35:46.924459] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:23.200 nvme0n1 00:19:23.200 22:35:47 nvmf_tcp.nvmf_tls -- target/tls.sh@232 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:23.200 Running I/O for 1 seconds... 
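Stripped of the xtrace prefixes, the sequence above (setup_nvmf_tgt at target/tls.sh@219 plus the keyring-based attach at @227/@228) reduces to the RPC calls below; rpc.py stands for this workspace's scripts/rpc.py, talking to the target on the default socket and to bdevperf on /var/tmp/bdevperf.sock. The I/O results for this pass follow below.

  # target side: TCP transport, a subsystem backed by a malloc bdev,
  # and a TLS-enabled listener (-k); the PSK is registered per host
  rpc.py nvmf_create_transport -t tcp -o
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
  rpc.py bdev_malloc_create 32 4096 -b malloc0
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 \
      --psk /tmp/tmp.rYhEwGqGHy

  # initiator side (bdevperf): register the PSK as a keyring key first,
  # then attach by key name instead of the deprecated raw path
  rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.rYhEwGqGHy
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp \
      -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 \
      -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1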
00:19:24.578 00:19:24.578 Latency(us) 00:19:24.578 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:24.578 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:19:24.578 Verification LBA range: start 0x0 length 0x2000 00:19:24.578 nvme0n1 : 1.02 5318.67 20.78 0.00 0.00 23849.75 6924.02 49009.53 00:19:24.578 =================================================================================================================== 00:19:24.578 Total : 5318.67 20.78 0.00 0.00 23849.75 6924.02 49009.53 00:19:24.578 0 00:19:24.578 22:35:48 nvmf_tcp.nvmf_tls -- target/tls.sh@234 -- # killprocess 38014 00:19:24.578 22:35:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 38014 ']' 00:19:24.578 22:35:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 38014 00:19:24.578 22:35:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:19:24.578 22:35:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:24.578 22:35:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 38014 00:19:24.578 22:35:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:19:24.578 22:35:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:19:24.578 22:35:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 38014' 00:19:24.578 killing process with pid 38014 00:19:24.578 22:35:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 38014 00:19:24.578 Received shutdown signal, test time was about 1.000000 seconds 00:19:24.578 00:19:24.578 Latency(us) 00:19:24.578 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:24.578 =================================================================================================================== 00:19:24.578 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:24.578 22:35:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 38014 00:19:24.578 22:35:48 nvmf_tcp.nvmf_tls -- target/tls.sh@235 -- # killprocess 37540 00:19:24.578 22:35:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 37540 ']' 00:19:24.578 22:35:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 37540 00:19:24.578 22:35:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:19:24.578 22:35:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:24.578 22:35:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 37540 00:19:24.578 22:35:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:19:24.578 22:35:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:19:24.578 22:35:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 37540' 00:19:24.578 killing process with pid 37540 00:19:24.578 22:35:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 37540 00:19:24.578 [2024-07-15 22:35:48.401119] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:19:24.578 22:35:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 37540 00:19:24.838 22:35:48 nvmf_tcp.nvmf_tls -- target/tls.sh@240 -- # nvmfappstart 00:19:24.838 22:35:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:24.838 22:35:48 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@722 -- # xtrace_disable 00:19:24.838 22:35:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:24.838 22:35:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=38490 00:19:24.838 22:35:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 38490 00:19:24.838 22:35:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:19:24.838 22:35:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 38490 ']' 00:19:24.838 22:35:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:24.838 22:35:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:24.838 22:35:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:24.838 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:24.838 22:35:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:24.838 22:35:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:24.838 [2024-07-15 22:35:48.646083] Starting SPDK v24.09-pre git sha1 f8598a71f / DPDK 24.03.0 initialization... 00:19:24.838 [2024-07-15 22:35:48.646130] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:24.838 EAL: No free 2048 kB hugepages reported on node 1 00:19:24.838 [2024-07-15 22:35:48.701911] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:24.838 [2024-07-15 22:35:48.779506] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:24.838 [2024-07-15 22:35:48.779544] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:24.838 [2024-07-15 22:35:48.779551] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:24.838 [2024-07-15 22:35:48.779557] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:24.838 [2024-07-15 22:35:48.779562] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
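nvmfappstart, seen here again for pid 38490, launches the target inside the job's network namespace and blocks until the RPC socket answers. A simplified rendering, with the polling loop standing in for the fuller waitforlisten helper (which also has retry limits and richer error handling), and rpc_get_methods used only as a cheap liveness probe:

  # start nvmf_tgt in the test netns used throughout this job
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF &
  nvmfpid=$!
  echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
  # poll the RPC socket until the app is ready to serve requests
  until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      kill -0 "$nvmfpid" || { echo 'nvmf_tgt exited early' >&2; exit 1; }
      sleep 0.1
  done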
00:19:24.838 [2024-07-15 22:35:48.779578] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:25.776 22:35:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:25.776 22:35:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:19:25.776 22:35:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:25.776 22:35:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:25.776 22:35:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:25.776 22:35:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:25.776 22:35:49 nvmf_tcp.nvmf_tls -- target/tls.sh@241 -- # rpc_cmd 00:19:25.776 22:35:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:25.776 22:35:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:25.776 [2024-07-15 22:35:49.474871] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:25.776 malloc0 00:19:25.776 [2024-07-15 22:35:49.503356] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:25.776 [2024-07-15 22:35:49.503546] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:25.776 22:35:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:25.776 22:35:49 nvmf_tcp.nvmf_tls -- target/tls.sh@254 -- # bdevperf_pid=38543 00:19:25.776 22:35:49 nvmf_tcp.nvmf_tls -- target/tls.sh@256 -- # waitforlisten 38543 /var/tmp/bdevperf.sock 00:19:25.776 22:35:49 nvmf_tcp.nvmf_tls -- target/tls.sh@252 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:19:25.776 22:35:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 38543 ']' 00:19:25.776 22:35:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:25.776 22:35:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:25.776 22:35:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:25.776 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:25.776 22:35:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:25.776 22:35:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:25.776 [2024-07-15 22:35:49.576282] Starting SPDK v24.09-pre git sha1 f8598a71f / DPDK 24.03.0 initialization... 
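bdevperf is started with -z here, so it comes up idle on /var/tmp/bdevperf.sock and only runs its workload once the companion bdevperf.py script issues perform_tests (target/tls.sh@262 below). In outline:

  # start bdevperf idle (-z = wait for the perform_tests RPC), core mask 0x2
  ./build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock \
      -q 128 -o 4k -w verify -t 1 &

  # ...configure the TLS bdev over RPC (keyring_file_add_key,
  # bdev_nvme_attach_controller) as shown earlier...

  # kick off the configured verify workload and collect the result table
  ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests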
00:19:25.776 [2024-07-15 22:35:49.576322] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid38543 ] 00:19:25.776 EAL: No free 2048 kB hugepages reported on node 1 00:19:25.776 [2024-07-15 22:35:49.629758] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:25.776 [2024-07-15 22:35:49.708815] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:26.715 22:35:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:26.715 22:35:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:19:26.715 22:35:50 nvmf_tcp.nvmf_tls -- target/tls.sh@257 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.rYhEwGqGHy 00:19:26.715 22:35:50 nvmf_tcp.nvmf_tls -- target/tls.sh@258 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:19:26.974 [2024-07-15 22:35:50.704799] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:26.974 nvme0n1 00:19:26.974 22:35:50 nvmf_tcp.nvmf_tls -- target/tls.sh@262 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:26.974 Running I/O for 1 seconds... 00:19:28.351 00:19:28.351 Latency(us) 00:19:28.351 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:28.351 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:19:28.351 Verification LBA range: start 0x0 length 0x2000 00:19:28.351 nvme0n1 : 1.02 5138.34 20.07 0.00 0.00 24654.57 5784.26 79327.05 00:19:28.351 =================================================================================================================== 00:19:28.351 Total : 5138.34 20.07 0.00 0.00 24654.57 5784.26 79327.05 00:19:28.351 0 00:19:28.351 22:35:51 nvmf_tcp.nvmf_tls -- target/tls.sh@265 -- # rpc_cmd save_config 00:19:28.351 22:35:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:28.351 22:35:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:28.351 22:35:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:28.351 22:35:52 nvmf_tcp.nvmf_tls -- target/tls.sh@265 -- # tgtcfg='{ 00:19:28.351 "subsystems": [ 00:19:28.351 { 00:19:28.351 "subsystem": "keyring", 00:19:28.351 "config": [ 00:19:28.351 { 00:19:28.351 "method": "keyring_file_add_key", 00:19:28.351 "params": { 00:19:28.351 "name": "key0", 00:19:28.351 "path": "/tmp/tmp.rYhEwGqGHy" 00:19:28.351 } 00:19:28.351 } 00:19:28.351 ] 00:19:28.351 }, 00:19:28.351 { 00:19:28.351 "subsystem": "iobuf", 00:19:28.351 "config": [ 00:19:28.351 { 00:19:28.351 "method": "iobuf_set_options", 00:19:28.351 "params": { 00:19:28.351 "small_pool_count": 8192, 00:19:28.351 "large_pool_count": 1024, 00:19:28.351 "small_bufsize": 8192, 00:19:28.351 "large_bufsize": 135168 00:19:28.351 } 00:19:28.351 } 00:19:28.351 ] 00:19:28.351 }, 00:19:28.351 { 00:19:28.351 "subsystem": "sock", 00:19:28.351 "config": [ 00:19:28.351 { 00:19:28.351 "method": "sock_set_default_impl", 00:19:28.351 "params": { 00:19:28.351 "impl_name": "posix" 00:19:28.351 } 
00:19:28.351 }, 00:19:28.351 { 00:19:28.351 "method": "sock_impl_set_options", 00:19:28.351 "params": { 00:19:28.351 "impl_name": "ssl", 00:19:28.351 "recv_buf_size": 4096, 00:19:28.351 "send_buf_size": 4096, 00:19:28.351 "enable_recv_pipe": true, 00:19:28.351 "enable_quickack": false, 00:19:28.351 "enable_placement_id": 0, 00:19:28.351 "enable_zerocopy_send_server": true, 00:19:28.351 "enable_zerocopy_send_client": false, 00:19:28.351 "zerocopy_threshold": 0, 00:19:28.351 "tls_version": 0, 00:19:28.351 "enable_ktls": false 00:19:28.351 } 00:19:28.351 }, 00:19:28.351 { 00:19:28.351 "method": "sock_impl_set_options", 00:19:28.351 "params": { 00:19:28.351 "impl_name": "posix", 00:19:28.351 "recv_buf_size": 2097152, 00:19:28.351 "send_buf_size": 2097152, 00:19:28.351 "enable_recv_pipe": true, 00:19:28.351 "enable_quickack": false, 00:19:28.351 "enable_placement_id": 0, 00:19:28.351 "enable_zerocopy_send_server": true, 00:19:28.351 "enable_zerocopy_send_client": false, 00:19:28.351 "zerocopy_threshold": 0, 00:19:28.351 "tls_version": 0, 00:19:28.351 "enable_ktls": false 00:19:28.351 } 00:19:28.351 } 00:19:28.351 ] 00:19:28.351 }, 00:19:28.351 { 00:19:28.351 "subsystem": "vmd", 00:19:28.351 "config": [] 00:19:28.351 }, 00:19:28.351 { 00:19:28.351 "subsystem": "accel", 00:19:28.351 "config": [ 00:19:28.351 { 00:19:28.351 "method": "accel_set_options", 00:19:28.351 "params": { 00:19:28.351 "small_cache_size": 128, 00:19:28.351 "large_cache_size": 16, 00:19:28.351 "task_count": 2048, 00:19:28.351 "sequence_count": 2048, 00:19:28.351 "buf_count": 2048 00:19:28.351 } 00:19:28.351 } 00:19:28.351 ] 00:19:28.351 }, 00:19:28.351 { 00:19:28.351 "subsystem": "bdev", 00:19:28.351 "config": [ 00:19:28.352 { 00:19:28.352 "method": "bdev_set_options", 00:19:28.352 "params": { 00:19:28.352 "bdev_io_pool_size": 65535, 00:19:28.352 "bdev_io_cache_size": 256, 00:19:28.352 "bdev_auto_examine": true, 00:19:28.352 "iobuf_small_cache_size": 128, 00:19:28.352 "iobuf_large_cache_size": 16 00:19:28.352 } 00:19:28.352 }, 00:19:28.352 { 00:19:28.352 "method": "bdev_raid_set_options", 00:19:28.352 "params": { 00:19:28.352 "process_window_size_kb": 1024 00:19:28.352 } 00:19:28.352 }, 00:19:28.352 { 00:19:28.352 "method": "bdev_iscsi_set_options", 00:19:28.352 "params": { 00:19:28.352 "timeout_sec": 30 00:19:28.352 } 00:19:28.352 }, 00:19:28.352 { 00:19:28.352 "method": "bdev_nvme_set_options", 00:19:28.352 "params": { 00:19:28.352 "action_on_timeout": "none", 00:19:28.352 "timeout_us": 0, 00:19:28.352 "timeout_admin_us": 0, 00:19:28.352 "keep_alive_timeout_ms": 10000, 00:19:28.352 "arbitration_burst": 0, 00:19:28.352 "low_priority_weight": 0, 00:19:28.352 "medium_priority_weight": 0, 00:19:28.352 "high_priority_weight": 0, 00:19:28.352 "nvme_adminq_poll_period_us": 10000, 00:19:28.352 "nvme_ioq_poll_period_us": 0, 00:19:28.352 "io_queue_requests": 0, 00:19:28.352 "delay_cmd_submit": true, 00:19:28.352 "transport_retry_count": 4, 00:19:28.352 "bdev_retry_count": 3, 00:19:28.352 "transport_ack_timeout": 0, 00:19:28.352 "ctrlr_loss_timeout_sec": 0, 00:19:28.352 "reconnect_delay_sec": 0, 00:19:28.352 "fast_io_fail_timeout_sec": 0, 00:19:28.352 "disable_auto_failback": false, 00:19:28.352 "generate_uuids": false, 00:19:28.352 "transport_tos": 0, 00:19:28.352 "nvme_error_stat": false, 00:19:28.352 "rdma_srq_size": 0, 00:19:28.352 "io_path_stat": false, 00:19:28.352 "allow_accel_sequence": false, 00:19:28.352 "rdma_max_cq_size": 0, 00:19:28.352 "rdma_cm_event_timeout_ms": 0, 00:19:28.352 "dhchap_digests": [ 00:19:28.352 "sha256", 
00:19:28.352 "sha384", 00:19:28.352 "sha512" 00:19:28.352 ], 00:19:28.352 "dhchap_dhgroups": [ 00:19:28.352 "null", 00:19:28.352 "ffdhe2048", 00:19:28.352 "ffdhe3072", 00:19:28.352 "ffdhe4096", 00:19:28.352 "ffdhe6144", 00:19:28.352 "ffdhe8192" 00:19:28.352 ] 00:19:28.352 } 00:19:28.352 }, 00:19:28.352 { 00:19:28.352 "method": "bdev_nvme_set_hotplug", 00:19:28.352 "params": { 00:19:28.352 "period_us": 100000, 00:19:28.352 "enable": false 00:19:28.352 } 00:19:28.352 }, 00:19:28.352 { 00:19:28.352 "method": "bdev_malloc_create", 00:19:28.352 "params": { 00:19:28.352 "name": "malloc0", 00:19:28.352 "num_blocks": 8192, 00:19:28.352 "block_size": 4096, 00:19:28.352 "physical_block_size": 4096, 00:19:28.352 "uuid": "36e0810b-1aa4-46ae-942a-82da8e26bdd1", 00:19:28.352 "optimal_io_boundary": 0 00:19:28.352 } 00:19:28.352 }, 00:19:28.352 { 00:19:28.352 "method": "bdev_wait_for_examine" 00:19:28.352 } 00:19:28.352 ] 00:19:28.352 }, 00:19:28.352 { 00:19:28.352 "subsystem": "nbd", 00:19:28.352 "config": [] 00:19:28.352 }, 00:19:28.352 { 00:19:28.352 "subsystem": "scheduler", 00:19:28.352 "config": [ 00:19:28.352 { 00:19:28.352 "method": "framework_set_scheduler", 00:19:28.352 "params": { 00:19:28.352 "name": "static" 00:19:28.352 } 00:19:28.352 } 00:19:28.352 ] 00:19:28.352 }, 00:19:28.352 { 00:19:28.352 "subsystem": "nvmf", 00:19:28.352 "config": [ 00:19:28.352 { 00:19:28.352 "method": "nvmf_set_config", 00:19:28.352 "params": { 00:19:28.352 "discovery_filter": "match_any", 00:19:28.352 "admin_cmd_passthru": { 00:19:28.352 "identify_ctrlr": false 00:19:28.352 } 00:19:28.352 } 00:19:28.352 }, 00:19:28.352 { 00:19:28.352 "method": "nvmf_set_max_subsystems", 00:19:28.352 "params": { 00:19:28.352 "max_subsystems": 1024 00:19:28.352 } 00:19:28.352 }, 00:19:28.352 { 00:19:28.352 "method": "nvmf_set_crdt", 00:19:28.352 "params": { 00:19:28.352 "crdt1": 0, 00:19:28.352 "crdt2": 0, 00:19:28.352 "crdt3": 0 00:19:28.352 } 00:19:28.352 }, 00:19:28.352 { 00:19:28.352 "method": "nvmf_create_transport", 00:19:28.352 "params": { 00:19:28.352 "trtype": "TCP", 00:19:28.352 "max_queue_depth": 128, 00:19:28.352 "max_io_qpairs_per_ctrlr": 127, 00:19:28.352 "in_capsule_data_size": 4096, 00:19:28.352 "max_io_size": 131072, 00:19:28.352 "io_unit_size": 131072, 00:19:28.352 "max_aq_depth": 128, 00:19:28.352 "num_shared_buffers": 511, 00:19:28.352 "buf_cache_size": 4294967295, 00:19:28.352 "dif_insert_or_strip": false, 00:19:28.352 "zcopy": false, 00:19:28.352 "c2h_success": false, 00:19:28.352 "sock_priority": 0, 00:19:28.352 "abort_timeout_sec": 1, 00:19:28.352 "ack_timeout": 0, 00:19:28.352 "data_wr_pool_size": 0 00:19:28.352 } 00:19:28.352 }, 00:19:28.352 { 00:19:28.352 "method": "nvmf_create_subsystem", 00:19:28.352 "params": { 00:19:28.352 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:28.352 "allow_any_host": false, 00:19:28.352 "serial_number": "00000000000000000000", 00:19:28.352 "model_number": "SPDK bdev Controller", 00:19:28.352 "max_namespaces": 32, 00:19:28.352 "min_cntlid": 1, 00:19:28.352 "max_cntlid": 65519, 00:19:28.352 "ana_reporting": false 00:19:28.352 } 00:19:28.352 }, 00:19:28.352 { 00:19:28.352 "method": "nvmf_subsystem_add_host", 00:19:28.352 "params": { 00:19:28.352 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:28.352 "host": "nqn.2016-06.io.spdk:host1", 00:19:28.352 "psk": "key0" 00:19:28.352 } 00:19:28.352 }, 00:19:28.352 { 00:19:28.352 "method": "nvmf_subsystem_add_ns", 00:19:28.352 "params": { 00:19:28.352 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:28.352 "namespace": { 00:19:28.352 "nsid": 1, 
00:19:28.352 "bdev_name": "malloc0", 00:19:28.352 "nguid": "36E0810B1AA446AE942A82DA8E26BDD1", 00:19:28.352 "uuid": "36e0810b-1aa4-46ae-942a-82da8e26bdd1", 00:19:28.352 "no_auto_visible": false 00:19:28.352 } 00:19:28.352 } 00:19:28.352 }, 00:19:28.352 { 00:19:28.352 "method": "nvmf_subsystem_add_listener", 00:19:28.352 "params": { 00:19:28.352 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:28.352 "listen_address": { 00:19:28.352 "trtype": "TCP", 00:19:28.353 "adrfam": "IPv4", 00:19:28.353 "traddr": "10.0.0.2", 00:19:28.353 "trsvcid": "4420" 00:19:28.353 }, 00:19:28.353 "secure_channel": false, 00:19:28.353 "sock_impl": "ssl" 00:19:28.353 } 00:19:28.353 } 00:19:28.353 ] 00:19:28.353 } 00:19:28.353 ] 00:19:28.353 }' 00:19:28.353 22:35:52 nvmf_tcp.nvmf_tls -- target/tls.sh@266 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:19:28.353 22:35:52 nvmf_tcp.nvmf_tls -- target/tls.sh@266 -- # bperfcfg='{ 00:19:28.353 "subsystems": [ 00:19:28.353 { 00:19:28.353 "subsystem": "keyring", 00:19:28.353 "config": [ 00:19:28.353 { 00:19:28.353 "method": "keyring_file_add_key", 00:19:28.353 "params": { 00:19:28.353 "name": "key0", 00:19:28.353 "path": "/tmp/tmp.rYhEwGqGHy" 00:19:28.353 } 00:19:28.353 } 00:19:28.353 ] 00:19:28.353 }, 00:19:28.353 { 00:19:28.353 "subsystem": "iobuf", 00:19:28.353 "config": [ 00:19:28.353 { 00:19:28.353 "method": "iobuf_set_options", 00:19:28.353 "params": { 00:19:28.353 "small_pool_count": 8192, 00:19:28.353 "large_pool_count": 1024, 00:19:28.353 "small_bufsize": 8192, 00:19:28.353 "large_bufsize": 135168 00:19:28.353 } 00:19:28.353 } 00:19:28.353 ] 00:19:28.353 }, 00:19:28.353 { 00:19:28.353 "subsystem": "sock", 00:19:28.353 "config": [ 00:19:28.353 { 00:19:28.353 "method": "sock_set_default_impl", 00:19:28.353 "params": { 00:19:28.353 "impl_name": "posix" 00:19:28.353 } 00:19:28.353 }, 00:19:28.353 { 00:19:28.353 "method": "sock_impl_set_options", 00:19:28.353 "params": { 00:19:28.353 "impl_name": "ssl", 00:19:28.353 "recv_buf_size": 4096, 00:19:28.353 "send_buf_size": 4096, 00:19:28.353 "enable_recv_pipe": true, 00:19:28.353 "enable_quickack": false, 00:19:28.353 "enable_placement_id": 0, 00:19:28.353 "enable_zerocopy_send_server": true, 00:19:28.353 "enable_zerocopy_send_client": false, 00:19:28.353 "zerocopy_threshold": 0, 00:19:28.353 "tls_version": 0, 00:19:28.353 "enable_ktls": false 00:19:28.353 } 00:19:28.353 }, 00:19:28.353 { 00:19:28.353 "method": "sock_impl_set_options", 00:19:28.353 "params": { 00:19:28.353 "impl_name": "posix", 00:19:28.353 "recv_buf_size": 2097152, 00:19:28.353 "send_buf_size": 2097152, 00:19:28.353 "enable_recv_pipe": true, 00:19:28.353 "enable_quickack": false, 00:19:28.353 "enable_placement_id": 0, 00:19:28.353 "enable_zerocopy_send_server": true, 00:19:28.353 "enable_zerocopy_send_client": false, 00:19:28.353 "zerocopy_threshold": 0, 00:19:28.353 "tls_version": 0, 00:19:28.353 "enable_ktls": false 00:19:28.353 } 00:19:28.353 } 00:19:28.353 ] 00:19:28.353 }, 00:19:28.353 { 00:19:28.353 "subsystem": "vmd", 00:19:28.353 "config": [] 00:19:28.353 }, 00:19:28.353 { 00:19:28.353 "subsystem": "accel", 00:19:28.353 "config": [ 00:19:28.353 { 00:19:28.353 "method": "accel_set_options", 00:19:28.353 "params": { 00:19:28.353 "small_cache_size": 128, 00:19:28.353 "large_cache_size": 16, 00:19:28.353 "task_count": 2048, 00:19:28.353 "sequence_count": 2048, 00:19:28.353 "buf_count": 2048 00:19:28.353 } 00:19:28.353 } 00:19:28.353 ] 00:19:28.353 }, 00:19:28.353 { 00:19:28.353 "subsystem": "bdev", 
00:19:28.353 "config": [ 00:19:28.353 { 00:19:28.353 "method": "bdev_set_options", 00:19:28.353 "params": { 00:19:28.353 "bdev_io_pool_size": 65535, 00:19:28.353 "bdev_io_cache_size": 256, 00:19:28.353 "bdev_auto_examine": true, 00:19:28.353 "iobuf_small_cache_size": 128, 00:19:28.353 "iobuf_large_cache_size": 16 00:19:28.353 } 00:19:28.353 }, 00:19:28.353 { 00:19:28.353 "method": "bdev_raid_set_options", 00:19:28.353 "params": { 00:19:28.353 "process_window_size_kb": 1024 00:19:28.353 } 00:19:28.353 }, 00:19:28.353 { 00:19:28.353 "method": "bdev_iscsi_set_options", 00:19:28.353 "params": { 00:19:28.353 "timeout_sec": 30 00:19:28.353 } 00:19:28.353 }, 00:19:28.353 { 00:19:28.353 "method": "bdev_nvme_set_options", 00:19:28.353 "params": { 00:19:28.353 "action_on_timeout": "none", 00:19:28.353 "timeout_us": 0, 00:19:28.353 "timeout_admin_us": 0, 00:19:28.353 "keep_alive_timeout_ms": 10000, 00:19:28.353 "arbitration_burst": 0, 00:19:28.353 "low_priority_weight": 0, 00:19:28.353 "medium_priority_weight": 0, 00:19:28.353 "high_priority_weight": 0, 00:19:28.353 "nvme_adminq_poll_period_us": 10000, 00:19:28.353 "nvme_ioq_poll_period_us": 0, 00:19:28.353 "io_queue_requests": 512, 00:19:28.353 "delay_cmd_submit": true, 00:19:28.353 "transport_retry_count": 4, 00:19:28.353 "bdev_retry_count": 3, 00:19:28.353 "transport_ack_timeout": 0, 00:19:28.353 "ctrlr_loss_timeout_sec": 0, 00:19:28.353 "reconnect_delay_sec": 0, 00:19:28.353 "fast_io_fail_timeout_sec": 0, 00:19:28.353 "disable_auto_failback": false, 00:19:28.353 "generate_uuids": false, 00:19:28.354 "transport_tos": 0, 00:19:28.354 "nvme_error_stat": false, 00:19:28.354 "rdma_srq_size": 0, 00:19:28.354 "io_path_stat": false, 00:19:28.354 "allow_accel_sequence": false, 00:19:28.354 "rdma_max_cq_size": 0, 00:19:28.354 "rdma_cm_event_timeout_ms": 0, 00:19:28.354 "dhchap_digests": [ 00:19:28.354 "sha256", 00:19:28.354 "sha384", 00:19:28.354 "sha512" 00:19:28.354 ], 00:19:28.354 "dhchap_dhgroups": [ 00:19:28.354 "null", 00:19:28.354 "ffdhe2048", 00:19:28.354 "ffdhe3072", 00:19:28.354 "ffdhe4096", 00:19:28.354 "ffdhe6144", 00:19:28.354 "ffdhe8192" 00:19:28.354 ] 00:19:28.354 } 00:19:28.354 }, 00:19:28.354 { 00:19:28.354 "method": "bdev_nvme_attach_controller", 00:19:28.354 "params": { 00:19:28.354 "name": "nvme0", 00:19:28.354 "trtype": "TCP", 00:19:28.354 "adrfam": "IPv4", 00:19:28.354 "traddr": "10.0.0.2", 00:19:28.354 "trsvcid": "4420", 00:19:28.354 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:28.354 "prchk_reftag": false, 00:19:28.354 "prchk_guard": false, 00:19:28.354 "ctrlr_loss_timeout_sec": 0, 00:19:28.354 "reconnect_delay_sec": 0, 00:19:28.354 "fast_io_fail_timeout_sec": 0, 00:19:28.354 "psk": "key0", 00:19:28.354 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:28.354 "hdgst": false, 00:19:28.354 "ddgst": false 00:19:28.354 } 00:19:28.354 }, 00:19:28.354 { 00:19:28.354 "method": "bdev_nvme_set_hotplug", 00:19:28.354 "params": { 00:19:28.354 "period_us": 100000, 00:19:28.354 "enable": false 00:19:28.354 } 00:19:28.354 }, 00:19:28.354 { 00:19:28.354 "method": "bdev_enable_histogram", 00:19:28.354 "params": { 00:19:28.354 "name": "nvme0n1", 00:19:28.354 "enable": true 00:19:28.354 } 00:19:28.354 }, 00:19:28.354 { 00:19:28.354 "method": "bdev_wait_for_examine" 00:19:28.354 } 00:19:28.354 ] 00:19:28.354 }, 00:19:28.354 { 00:19:28.354 "subsystem": "nbd", 00:19:28.354 "config": [] 00:19:28.354 } 00:19:28.354 ] 00:19:28.354 }' 00:19:28.354 22:35:52 nvmf_tcp.nvmf_tls -- target/tls.sh@268 -- # killprocess 38543 00:19:28.354 22:35:52 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@948 -- # '[' -z 38543 ']' 00:19:28.354 22:35:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 38543 00:19:28.354 22:35:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:19:28.354 22:35:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:28.354 22:35:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 38543 00:19:28.354 22:35:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:19:28.614 22:35:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:19:28.614 22:35:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 38543' 00:19:28.614 killing process with pid 38543 00:19:28.614 22:35:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 38543 00:19:28.614 Received shutdown signal, test time was about 1.000000 seconds 00:19:28.614 00:19:28.614 Latency(us) 00:19:28.614 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:28.614 =================================================================================================================== 00:19:28.614 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:28.614 22:35:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 38543 00:19:28.614 22:35:52 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # killprocess 38490 00:19:28.614 22:35:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 38490 ']' 00:19:28.614 22:35:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 38490 00:19:28.614 22:35:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:19:28.614 22:35:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:28.614 22:35:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 38490 00:19:28.614 22:35:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:19:28.614 22:35:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:19:28.614 22:35:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 38490' 00:19:28.614 killing process with pid 38490 00:19:28.614 22:35:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 38490 00:19:28.614 22:35:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 38490 00:19:28.873 22:35:52 nvmf_tcp.nvmf_tls -- target/tls.sh@271 -- # nvmfappstart -c /dev/fd/62 00:19:28.873 22:35:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:28.873 22:35:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:28.873 22:35:52 nvmf_tcp.nvmf_tls -- target/tls.sh@271 -- # echo '{ 00:19:28.873 "subsystems": [ 00:19:28.873 { 00:19:28.873 "subsystem": "keyring", 00:19:28.873 "config": [ 00:19:28.873 { 00:19:28.873 "method": "keyring_file_add_key", 00:19:28.873 "params": { 00:19:28.873 "name": "key0", 00:19:28.873 "path": "/tmp/tmp.rYhEwGqGHy" 00:19:28.873 } 00:19:28.873 } 00:19:28.873 ] 00:19:28.873 }, 00:19:28.873 { 00:19:28.873 "subsystem": "iobuf", 00:19:28.873 "config": [ 00:19:28.873 { 00:19:28.873 "method": "iobuf_set_options", 00:19:28.873 "params": { 00:19:28.873 "small_pool_count": 8192, 00:19:28.873 "large_pool_count": 1024, 00:19:28.873 "small_bufsize": 8192, 00:19:28.873 "large_bufsize": 135168 00:19:28.873 } 00:19:28.873 } 00:19:28.873 ] 00:19:28.873 }, 00:19:28.873 { 00:19:28.873 
"subsystem": "sock", 00:19:28.873 "config": [ 00:19:28.873 { 00:19:28.873 "method": "sock_set_default_impl", 00:19:28.873 "params": { 00:19:28.873 "impl_name": "posix" 00:19:28.873 } 00:19:28.873 }, 00:19:28.873 { 00:19:28.873 "method": "sock_impl_set_options", 00:19:28.873 "params": { 00:19:28.873 "impl_name": "ssl", 00:19:28.873 "recv_buf_size": 4096, 00:19:28.873 "send_buf_size": 4096, 00:19:28.873 "enable_recv_pipe": true, 00:19:28.873 "enable_quickack": false, 00:19:28.873 "enable_placement_id": 0, 00:19:28.873 "enable_zerocopy_send_server": true, 00:19:28.873 "enable_zerocopy_send_client": false, 00:19:28.873 "zerocopy_threshold": 0, 00:19:28.873 "tls_version": 0, 00:19:28.873 "enable_ktls": false 00:19:28.873 } 00:19:28.873 }, 00:19:28.873 { 00:19:28.873 "method": "sock_impl_set_options", 00:19:28.873 "params": { 00:19:28.873 "impl_name": "posix", 00:19:28.873 "recv_buf_size": 2097152, 00:19:28.873 "send_buf_size": 2097152, 00:19:28.873 "enable_recv_pipe": true, 00:19:28.873 "enable_quickack": false, 00:19:28.873 "enable_placement_id": 0, 00:19:28.873 "enable_zerocopy_send_server": true, 00:19:28.873 "enable_zerocopy_send_client": false, 00:19:28.873 "zerocopy_threshold": 0, 00:19:28.873 "tls_version": 0, 00:19:28.873 "enable_ktls": false 00:19:28.873 } 00:19:28.873 } 00:19:28.873 ] 00:19:28.873 }, 00:19:28.873 { 00:19:28.873 "subsystem": "vmd", 00:19:28.873 "config": [] 00:19:28.873 }, 00:19:28.873 { 00:19:28.873 "subsystem": "accel", 00:19:28.873 "config": [ 00:19:28.873 { 00:19:28.873 "method": "accel_set_options", 00:19:28.873 "params": { 00:19:28.873 "small_cache_size": 128, 00:19:28.873 "large_cache_size": 16, 00:19:28.873 "task_count": 2048, 00:19:28.873 "sequence_count": 2048, 00:19:28.873 "buf_count": 2048 00:19:28.873 } 00:19:28.873 } 00:19:28.873 ] 00:19:28.873 }, 00:19:28.873 { 00:19:28.873 "subsystem": "bdev", 00:19:28.873 "config": [ 00:19:28.873 { 00:19:28.873 "method": "bdev_set_options", 00:19:28.873 "params": { 00:19:28.873 "bdev_io_pool_size": 65535, 00:19:28.873 "bdev_io_cache_size": 256, 00:19:28.873 "bdev_auto_examine": true, 00:19:28.873 "iobuf_small_cache_size": 128, 00:19:28.873 "iobuf_large_cache_size": 16 00:19:28.873 } 00:19:28.873 }, 00:19:28.873 { 00:19:28.873 "method": "bdev_raid_set_options", 00:19:28.873 "params": { 00:19:28.873 "process_window_size_kb": 1024 00:19:28.873 } 00:19:28.873 }, 00:19:28.873 { 00:19:28.873 "method": "bdev_iscsi_set_options", 00:19:28.873 "params": { 00:19:28.873 "timeout_sec": 30 00:19:28.873 } 00:19:28.873 }, 00:19:28.873 { 00:19:28.873 "method": "bdev_nvme_set_options", 00:19:28.873 "params": { 00:19:28.873 "action_on_timeout": "none", 00:19:28.873 "timeout_us": 0, 00:19:28.873 "timeout_admin_us": 0, 00:19:28.873 "keep_alive_timeout_ms": 10000, 00:19:28.873 "arbitration_burst": 0, 00:19:28.873 "low_priority_weight": 0, 00:19:28.873 "medium_priority_weight": 0, 00:19:28.873 "high_priority_weight": 0, 00:19:28.873 "nvme_adminq_poll_period_us": 10000, 00:19:28.873 "nvme_ioq_poll_period_us": 0, 00:19:28.873 "io_queue_requests": 0, 00:19:28.873 "delay_cmd_submit": true, 00:19:28.873 "transport_retry_count": 4, 00:19:28.873 "bdev_retry_count": 3, 00:19:28.873 "transport_ack_timeout": 0, 00:19:28.873 "ctrlr_loss_timeout_sec": 0, 00:19:28.873 "reconnect_delay_sec": 0, 00:19:28.873 "fast_io_fail_timeout_sec": 0, 00:19:28.873 "disable_auto_failback": false, 00:19:28.873 "generate_uuids": false, 00:19:28.873 "transport_tos": 0, 00:19:28.873 "nvme_error_stat": false, 00:19:28.873 "rdma_srq_size": 0, 00:19:28.873 "io_path_stat": 
false, 00:19:28.873 "allow_accel_sequence": false, 00:19:28.873 "rdma_max_cq_size": 0, 00:19:28.873 "rdma_cm_event_timeout_ms": 0, 00:19:28.873 "dhchap_digests": [ 00:19:28.873 "sha256", 00:19:28.873 "sha384", 00:19:28.873 "sha512" 00:19:28.873 ], 00:19:28.873 "dhchap_dhgroups": [ 00:19:28.873 "null", 00:19:28.873 "ffdhe2048", 00:19:28.873 "ffdhe3072", 00:19:28.873 "ffdhe4096", 00:19:28.873 "ffdhe6144", 00:19:28.873 "ffdhe8192" 00:19:28.873 ] 00:19:28.873 } 00:19:28.873 }, 00:19:28.873 { 00:19:28.873 "method": "bdev_nvme_set_hotplug", 00:19:28.873 "params": { 00:19:28.873 "period_us": 100000, 00:19:28.873 "enable": false 00:19:28.873 } 00:19:28.873 }, 00:19:28.873 { 00:19:28.873 "method": "bdev_malloc_create", 00:19:28.873 "params": { 00:19:28.873 "name": "malloc0", 00:19:28.873 "num_blocks": 8192, 00:19:28.873 "block_size": 4096, 00:19:28.873 "physical_block_size": 4096, 00:19:28.873 "uuid": "36e0810b-1aa4-46ae-942a-82da8e26bdd1", 00:19:28.873 "optimal_io_boundary": 0 00:19:28.873 } 00:19:28.873 }, 00:19:28.873 { 00:19:28.873 "method": "bdev_wait_for_examine" 00:19:28.873 } 00:19:28.873 ] 00:19:28.873 }, 00:19:28.873 { 00:19:28.873 "subsystem": "nbd", 00:19:28.873 "config": [] 00:19:28.873 }, 00:19:28.873 { 00:19:28.873 "subsystem": "scheduler", 00:19:28.873 "config": [ 00:19:28.873 { 00:19:28.873 "method": "framework_set_scheduler", 00:19:28.873 "params": { 00:19:28.873 "name": "static" 00:19:28.873 } 00:19:28.873 } 00:19:28.873 ] 00:19:28.873 }, 00:19:28.873 { 00:19:28.873 "subsystem": "nvmf", 00:19:28.873 "config": [ 00:19:28.873 { 00:19:28.873 "method": "nvmf_set_config", 00:19:28.873 "params": { 00:19:28.873 "discovery_filter": "match_any", 00:19:28.873 "admin_cmd_passthru": { 00:19:28.873 "identify_ctrlr": false 00:19:28.873 } 00:19:28.873 } 00:19:28.873 }, 00:19:28.873 { 00:19:28.873 "method": "nvmf_set_max_subsystems", 00:19:28.873 "params": { 00:19:28.873 "max_subsystems": 1024 00:19:28.873 } 00:19:28.873 }, 00:19:28.873 { 00:19:28.873 "method": "nvmf_set_crdt", 00:19:28.873 "params": { 00:19:28.873 "crdt1": 0, 00:19:28.873 "crdt2": 0, 00:19:28.873 "crdt3": 0 00:19:28.873 } 00:19:28.873 }, 00:19:28.873 { 00:19:28.873 "method": "nvmf_create_transport", 00:19:28.873 "params": { 00:19:28.873 "trtype": "TCP", 00:19:28.873 "max_queue_depth": 128, 00:19:28.873 "max_io_qpairs_per_ctrlr": 127, 00:19:28.873 "in_capsule_data_size": 4096, 00:19:28.873 "max_io_size": 131072, 00:19:28.873 "io_unit_size": 131072, 00:19:28.873 "max_aq_depth": 128, 00:19:28.873 "num_shared_buffers": 511, 00:19:28.873 "buf_cache_size": 4294967295, 00:19:28.873 "dif_insert_or_strip": false, 00:19:28.873 "zcopy": false, 00:19:28.873 "c2h_success": false, 00:19:28.873 "sock_priority": 0, 00:19:28.873 "abort_timeout_sec": 1, 00:19:28.873 "ack_timeout": 0, 00:19:28.873 "data_wr_pool_size": 0 00:19:28.873 } 00:19:28.873 }, 00:19:28.873 { 00:19:28.873 "method": "nvmf_create_subsystem", 00:19:28.873 "params": { 00:19:28.873 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:28.873 "allow_any_host": false, 00:19:28.873 "serial_number": "00000000000000000000", 00:19:28.873 "model_number": "SPDK bdev Controller", 00:19:28.873 "max_namespaces": 32, 00:19:28.873 "min_cntlid": 1, 00:19:28.873 "max_cntlid": 65519, 00:19:28.873 "ana_reporting": false 00:19:28.873 } 00:19:28.873 }, 00:19:28.873 { 00:19:28.873 "method": "nvmf_subsystem_add_host", 00:19:28.873 "params": { 00:19:28.873 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:28.873 "host": "nqn.2016-06.io.spdk:host1", 00:19:28.873 "psk": "key0" 00:19:28.873 } 00:19:28.873 }, 00:19:28.873 
{ 00:19:28.873 "method": "nvmf_subsystem_add_ns", 00:19:28.873 "params": { 00:19:28.873 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:28.873 "namespace": { 00:19:28.873 "nsid": 1, 00:19:28.873 "bdev_name": "malloc0", 00:19:28.873 "nguid": "36E0810B1AA446AE942A82DA8E26BDD1", 00:19:28.873 "uuid": "36e0810b-1aa4-46ae-942a-82da8e26bdd1", 00:19:28.873 "no_auto_visible": false 00:19:28.873 } 00:19:28.873 } 00:19:28.873 }, 00:19:28.873 { 00:19:28.873 "method": "nvmf_subsystem_add_listener", 00:19:28.873 "params": { 00:19:28.873 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:28.874 "listen_address": { 00:19:28.874 "trtype": "TCP", 00:19:28.874 "adrfam": "IPv4", 00:19:28.874 "traddr": "10.0.0.2", 00:19:28.874 "trsvcid": "4420" 00:19:28.874 }, 00:19:28.874 "secure_channel": false, 00:19:28.874 "sock_impl": "ssl" 00:19:28.874 } 00:19:28.874 } 00:19:28.874 ] 00:19:28.874 } 00:19:28.874 ] 00:19:28.874 }' 00:19:28.874 22:35:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:28.874 22:35:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=39216 00:19:28.874 22:35:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:19:28.874 22:35:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 39216 00:19:28.874 22:35:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 39216 ']' 00:19:28.874 22:35:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:28.874 22:35:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:28.874 22:35:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:28.874 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:28.874 22:35:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:28.874 22:35:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:28.874 [2024-07-15 22:35:52.798189] Starting SPDK v24.09-pre git sha1 f8598a71f / DPDK 24.03.0 initialization... 00:19:28.874 [2024-07-15 22:35:52.798242] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:28.874 EAL: No free 2048 kB hugepages reported on node 1 00:19:29.132 [2024-07-15 22:35:52.854461] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:29.132 [2024-07-15 22:35:52.933074] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:29.132 [2024-07-15 22:35:52.933107] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:29.133 [2024-07-15 22:35:52.933114] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:29.133 [2024-07-15 22:35:52.933119] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:29.133 [2024-07-15 22:35:52.933125] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:29.133 [2024-07-15 22:35:52.933178] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:29.391 [2024-07-15 22:35:53.144330] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:29.391 [2024-07-15 22:35:53.176369] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:29.391 [2024-07-15 22:35:53.184558] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:29.650 22:35:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:29.650 22:35:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:19:29.650 22:35:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:29.650 22:35:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:29.650 22:35:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:29.909 22:35:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:29.909 22:35:53 nvmf_tcp.nvmf_tls -- target/tls.sh@274 -- # bdevperf_pid=39248 00:19:29.909 22:35:53 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # waitforlisten 39248 /var/tmp/bdevperf.sock 00:19:29.909 22:35:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 39248 ']' 00:19:29.909 22:35:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:29.909 22:35:53 nvmf_tcp.nvmf_tls -- target/tls.sh@272 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:19:29.909 22:35:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:29.909 22:35:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:29.909 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:19:29.909 22:35:53 nvmf_tcp.nvmf_tls -- target/tls.sh@272 -- # echo '{ 00:19:29.909 "subsystems": [ 00:19:29.909 { 00:19:29.909 "subsystem": "keyring", 00:19:29.909 "config": [ 00:19:29.909 { 00:19:29.909 "method": "keyring_file_add_key", 00:19:29.909 "params": { 00:19:29.909 "name": "key0", 00:19:29.909 "path": "/tmp/tmp.rYhEwGqGHy" 00:19:29.909 } 00:19:29.909 } 00:19:29.909 ] 00:19:29.909 }, 00:19:29.909 { 00:19:29.909 "subsystem": "iobuf", 00:19:29.909 "config": [ 00:19:29.909 { 00:19:29.909 "method": "iobuf_set_options", 00:19:29.909 "params": { 00:19:29.909 "small_pool_count": 8192, 00:19:29.909 "large_pool_count": 1024, 00:19:29.909 "small_bufsize": 8192, 00:19:29.909 "large_bufsize": 135168 00:19:29.909 } 00:19:29.909 } 00:19:29.909 ] 00:19:29.909 }, 00:19:29.909 { 00:19:29.909 "subsystem": "sock", 00:19:29.909 "config": [ 00:19:29.909 { 00:19:29.909 "method": "sock_set_default_impl", 00:19:29.909 "params": { 00:19:29.909 "impl_name": "posix" 00:19:29.909 } 00:19:29.909 }, 00:19:29.909 { 00:19:29.909 "method": "sock_impl_set_options", 00:19:29.909 "params": { 00:19:29.909 "impl_name": "ssl", 00:19:29.909 "recv_buf_size": 4096, 00:19:29.909 "send_buf_size": 4096, 00:19:29.909 "enable_recv_pipe": true, 00:19:29.909 "enable_quickack": false, 00:19:29.909 "enable_placement_id": 0, 00:19:29.909 "enable_zerocopy_send_server": true, 00:19:29.909 "enable_zerocopy_send_client": false, 00:19:29.909 "zerocopy_threshold": 0, 00:19:29.909 "tls_version": 0, 00:19:29.909 "enable_ktls": false 00:19:29.909 } 00:19:29.909 }, 00:19:29.909 { 00:19:29.909 "method": "sock_impl_set_options", 00:19:29.909 "params": { 00:19:29.909 "impl_name": "posix", 00:19:29.909 "recv_buf_size": 2097152, 00:19:29.909 "send_buf_size": 2097152, 00:19:29.909 "enable_recv_pipe": true, 00:19:29.909 "enable_quickack": false, 00:19:29.909 "enable_placement_id": 0, 00:19:29.909 "enable_zerocopy_send_server": true, 00:19:29.909 "enable_zerocopy_send_client": false, 00:19:29.909 "zerocopy_threshold": 0, 00:19:29.909 "tls_version": 0, 00:19:29.909 "enable_ktls": false 00:19:29.909 } 00:19:29.909 } 00:19:29.909 ] 00:19:29.909 }, 00:19:29.909 { 00:19:29.909 "subsystem": "vmd", 00:19:29.909 "config": [] 00:19:29.909 }, 00:19:29.909 { 00:19:29.909 "subsystem": "accel", 00:19:29.909 "config": [ 00:19:29.909 { 00:19:29.909 "method": "accel_set_options", 00:19:29.909 "params": { 00:19:29.909 "small_cache_size": 128, 00:19:29.909 "large_cache_size": 16, 00:19:29.909 "task_count": 2048, 00:19:29.909 "sequence_count": 2048, 00:19:29.909 "buf_count": 2048 00:19:29.909 } 00:19:29.909 } 00:19:29.909 ] 00:19:29.909 }, 00:19:29.909 { 00:19:29.909 "subsystem": "bdev", 00:19:29.909 "config": [ 00:19:29.909 { 00:19:29.909 "method": "bdev_set_options", 00:19:29.909 "params": { 00:19:29.909 "bdev_io_pool_size": 65535, 00:19:29.909 "bdev_io_cache_size": 256, 00:19:29.909 "bdev_auto_examine": true, 00:19:29.909 "iobuf_small_cache_size": 128, 00:19:29.909 "iobuf_large_cache_size": 16 00:19:29.909 } 00:19:29.909 }, 00:19:29.909 { 00:19:29.909 "method": "bdev_raid_set_options", 00:19:29.909 "params": { 00:19:29.909 "process_window_size_kb": 1024 00:19:29.910 } 00:19:29.910 }, 00:19:29.910 { 00:19:29.910 "method": "bdev_iscsi_set_options", 00:19:29.910 "params": { 00:19:29.910 "timeout_sec": 30 00:19:29.910 } 00:19:29.910 }, 00:19:29.910 { 00:19:29.910 "method": "bdev_nvme_set_options", 00:19:29.910 "params": { 00:19:29.910 "action_on_timeout": "none", 00:19:29.910 "timeout_us": 0, 00:19:29.910 "timeout_admin_us": 0, 00:19:29.910 "keep_alive_timeout_ms": 
10000, 00:19:29.910 "arbitration_burst": 0, 00:19:29.910 "low_priority_weight": 0, 00:19:29.910 "medium_priority_weight": 0, 00:19:29.910 "high_priority_weight": 0, 00:19:29.910 "nvme_adminq_poll_period_us": 10000, 00:19:29.910 "nvme_ioq_poll_period_us": 0, 00:19:29.910 "io_queue_requests": 512, 00:19:29.910 "delay_cmd_submit": true, 00:19:29.910 "transport_retry_count": 4, 00:19:29.910 "bdev_retry_count": 3, 00:19:29.910 "transport_ack_timeout": 0, 00:19:29.910 "ctrlr_loss_timeout_sec": 0, 00:19:29.910 "reconnect_delay_sec": 0, 00:19:29.910 "fast_io_fail_timeout_sec": 0, 00:19:29.910 "disable_auto_failback": false, 00:19:29.910 "generate_uuids": false, 00:19:29.910 "transport_tos": 0, 00:19:29.910 "nvme_error_stat": false, 00:19:29.910 "rdma_srq_size": 0, 00:19:29.910 "io_path_stat": false, 00:19:29.910 "allow_accel_sequence": false, 00:19:29.910 "rdma_max_cq_size": 0, 00:19:29.910 "rdma_cm_event_timeout_ms": 0, 00:19:29.910 "dhchap_digests": [ 00:19:29.910 "sha256", 00:19:29.910 "sha384", 00:19:29.910 "sha512" 00:19:29.910 ], 00:19:29.910 "dhchap_dhgroups": [ 00:19:29.910 "null", 00:19:29.910 "ffdhe2048", 00:19:29.910 "ffdhe3072", 00:19:29.910 "ffdhe4096", 00:19:29.910 "ffdhe6144", 00:19:29.910 "ffdhe8192" 00:19:29.910 ] 00:19:29.910 } 00:19:29.910 }, 00:19:29.910 { 00:19:29.910 "method": "bdev_nvme_attach_controller", 00:19:29.910 "params": { 00:19:29.910 "name": "nvme0", 00:19:29.910 "trtype": "TCP", 00:19:29.910 "adrfam": "IPv4", 00:19:29.910 "traddr": "10.0.0.2", 00:19:29.910 "trsvcid": "4420", 00:19:29.910 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:29.910 "prchk_reftag": false, 00:19:29.910 "prchk_guard": false, 00:19:29.910 "ctrlr_loss_timeout_sec": 0, 00:19:29.910 "reconnect_delay_sec": 0, 00:19:29.910 "fast_io_fail_timeout_sec": 0, 00:19:29.910 "psk": "key0", 00:19:29.910 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:29.910 "hdgst": false, 00:19:29.910 "ddgst": false 00:19:29.910 } 00:19:29.910 }, 00:19:29.910 { 00:19:29.910 "method": "bdev_nvme_set_hotplug", 00:19:29.910 "params": { 00:19:29.910 "period_us": 100000, 00:19:29.910 "enable": false 00:19:29.910 } 00:19:29.910 }, 00:19:29.910 { 00:19:29.910 "method": "bdev_enable_histogram", 00:19:29.910 "params": { 00:19:29.910 "name": "nvme0n1", 00:19:29.910 "enable": true 00:19:29.910 } 00:19:29.910 }, 00:19:29.910 { 00:19:29.910 "method": "bdev_wait_for_examine" 00:19:29.910 } 00:19:29.910 ] 00:19:29.910 }, 00:19:29.910 { 00:19:29.910 "subsystem": "nbd", 00:19:29.910 "config": [] 00:19:29.910 } 00:19:29.910 ] 00:19:29.910 }' 00:19:29.910 22:35:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:29.910 22:35:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:29.910 [2024-07-15 22:35:53.667430] Starting SPDK v24.09-pre git sha1 f8598a71f / DPDK 24.03.0 initialization... 
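The configuration echoed above is what wires TLS into bdevperf: keyring_file_add_key registers the on-disk PSK as a named key ("key0"), and bdev_nvme_attach_controller then references that name through its "psk" parameter instead of a raw path. Below is a minimal sketch of just those two subsystems, distilled from the dump; every value is copied verbatim from this run, the omitted iobuf/sock/accel/nbd sections carry defaults, and saving it as a standalone file for the generic SPDK --json app option is an assumption, since this test pipes the full config in instead.

    {
      "subsystems": [
        {
          "subsystem": "keyring",
          "config": [
            {
              "method": "keyring_file_add_key",
              "params": { "name": "key0", "path": "/tmp/tmp.rYhEwGqGHy" }
            }
          ]
        },
        {
          "subsystem": "bdev",
          "config": [
            {
              "method": "bdev_nvme_attach_controller",
              "params": {
                "name": "nvme0", "trtype": "TCP", "adrfam": "IPv4",
                "traddr": "10.0.0.2", "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode1",
                "hostnqn": "nqn.2016-06.io.spdk:host1",
                "psk": "key0"
              }
            },
            { "method": "bdev_wait_for_examine" }
          ]
        }
      ]
    }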
00:19:29.910 [2024-07-15 22:35:53.667476] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid39248 ] 00:19:29.910 EAL: No free 2048 kB hugepages reported on node 1 00:19:29.910 [2024-07-15 22:35:53.720129] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:29.910 [2024-07-15 22:35:53.799218] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:30.169 [2024-07-15 22:35:53.949794] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:30.737 22:35:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:30.737 22:35:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:19:30.737 22:35:54 nvmf_tcp.nvmf_tls -- target/tls.sh@277 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:19:30.737 22:35:54 nvmf_tcp.nvmf_tls -- target/tls.sh@277 -- # jq -r '.[].name' 00:19:30.737 22:35:54 nvmf_tcp.nvmf_tls -- target/tls.sh@277 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:30.737 22:35:54 nvmf_tcp.nvmf_tls -- target/tls.sh@278 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:30.996 Running I/O for 1 seconds... 00:19:31.931 00:19:31.931 Latency(us) 00:19:31.931 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:31.931 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:19:31.931 Verification LBA range: start 0x0 length 0x2000 00:19:31.931 nvme0n1 : 1.02 4346.97 16.98 0.00 0.00 29202.52 5841.25 75223.93 00:19:31.931 =================================================================================================================== 00:19:31.931 Total : 4346.97 16.98 0.00 0.00 29202.52 5841.25 75223.93 00:19:31.931 0 00:19:31.931 22:35:55 nvmf_tcp.nvmf_tls -- target/tls.sh@280 -- # trap - SIGINT SIGTERM EXIT 00:19:31.931 22:35:55 nvmf_tcp.nvmf_tls -- target/tls.sh@281 -- # cleanup 00:19:31.931 22:35:55 nvmf_tcp.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:19:31.931 22:35:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@806 -- # type=--id 00:19:31.931 22:35:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@807 -- # id=0 00:19:31.931 22:35:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:19:31.931 22:35:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:19:31.931 22:35:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:19:31.931 22:35:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:19:31.931 22:35:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@818 -- # for n in $shm_files 00:19:31.931 22:35:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:19:31.931 nvmf_trace.0 00:19:31.931 22:35:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@821 -- # return 0 00:19:31.931 22:35:55 nvmf_tcp.nvmf_tls -- target/tls.sh@16 -- # killprocess 39248 00:19:31.931 22:35:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 39248 ']' 00:19:31.931 22:35:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # 
kill -0 39248 00:19:31.931 22:35:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:19:31.931 22:35:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:31.931 22:35:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 39248 00:19:32.190 22:35:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:19:32.190 22:35:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:19:32.190 22:35:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 39248' 00:19:32.190 killing process with pid 39248 00:19:32.190 22:35:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 39248 00:19:32.190 Received shutdown signal, test time was about 1.000000 seconds 00:19:32.190 00:19:32.190 Latency(us) 00:19:32.190 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:32.190 =================================================================================================================== 00:19:32.190 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:32.190 22:35:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 39248 00:19:32.190 22:35:56 nvmf_tcp.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:19:32.190 22:35:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:32.190 22:35:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@117 -- # sync 00:19:32.190 22:35:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:32.190 22:35:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@120 -- # set +e 00:19:32.190 22:35:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:32.190 22:35:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:32.190 rmmod nvme_tcp 00:19:32.190 rmmod nvme_fabrics 00:19:32.190 rmmod nvme_keyring 00:19:32.190 22:35:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:32.190 22:35:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@124 -- # set -e 00:19:32.190 22:35:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@125 -- # return 0 00:19:32.190 22:35:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@489 -- # '[' -n 39216 ']' 00:19:32.190 22:35:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@490 -- # killprocess 39216 00:19:32.190 22:35:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 39216 ']' 00:19:32.190 22:35:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 39216 00:19:32.190 22:35:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:19:32.449 22:35:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:32.449 22:35:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 39216 00:19:32.449 22:35:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:19:32.449 22:35:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:19:32.449 22:35:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 39216' 00:19:32.449 killing process with pid 39216 00:19:32.449 22:35:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 39216 00:19:32.449 22:35:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 39216 00:19:32.449 22:35:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:32.449 22:35:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:32.449 22:35:56 nvmf_tcp.nvmf_tls -- 
nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:32.449 22:35:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:32.449 22:35:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:32.449 22:35:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:32.449 22:35:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:32.449 22:35:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:35.003 22:35:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:35.004 22:35:58 nvmf_tcp.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.kdJW0JtUCI /tmp/tmp.spn1eIeNSl /tmp/tmp.rYhEwGqGHy 00:19:35.004 00:19:35.004 real 1m23.537s 00:19:35.004 user 2m8.494s 00:19:35.004 sys 0m28.423s 00:19:35.004 22:35:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:35.004 22:35:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:35.004 ************************************ 00:19:35.004 END TEST nvmf_tls 00:19:35.004 ************************************ 00:19:35.004 22:35:58 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:19:35.004 22:35:58 nvmf_tcp -- nvmf/nvmf.sh@62 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:19:35.004 22:35:58 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:19:35.004 22:35:58 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:35.004 22:35:58 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:35.004 ************************************ 00:19:35.004 START TEST nvmf_fips 00:19:35.004 ************************************ 00:19:35.004 22:35:58 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:19:35.004 * Looking for test storage... 
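The nvmf_tls teardown a few entries above (rm -f of the three temporary /tmp PSK files, killprocess, nvmftestfini) follows the suite's trap idiom: a handler armed at setup time guarantees the keys and the target process are reclaimed even on a failed run, and the explicit cleanup on the happy path then disarms that trap. A minimal sketch of the shape, where the variable names are illustrative rather than the script's actual ones (killprocess is the harness helper visible in the trace):

    cleanup() {
        rm -f "$psk1" "$psk2" "$psk3"     # scrub the throwaway /tmp PSK files
        killprocess "$nvmfpid" || true    # stop the target if it is still up
    }
    trap cleanup SIGINT SIGTERM EXIT      # armed before the tests start
    # ... tests run ...
    trap - SIGINT SIGTERM EXIT            # happy path: disarm, then clean once
    cleanup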
00:19:35.004 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:19:35.004 22:35:58 nvmf_tcp.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:35.004 22:35:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:19:35.004 22:35:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:35.004 22:35:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:35.004 22:35:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:35.004 22:35:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:35.004 22:35:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:35.004 22:35:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:35.004 22:35:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:35.004 22:35:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:35.004 22:35:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:35.004 22:35:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:35.004 22:35:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:19:35.004 22:35:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:19:35.004 22:35:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:35.004 22:35:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:35.004 22:35:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:35.004 22:35:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:35.004 22:35:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:35.004 22:35:58 nvmf_tcp.nvmf_fips -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:35.004 22:35:58 nvmf_tcp.nvmf_fips -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:35.004 22:35:58 nvmf_tcp.nvmf_fips -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:35.004 22:35:58 nvmf_tcp.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:35.004 22:35:58 nvmf_tcp.nvmf_fips -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:35.004 22:35:58 
nvmf_tcp.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:35.004 22:35:58 nvmf_tcp.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:19:35.004 22:35:58 nvmf_tcp.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:35.004 22:35:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@47 -- # : 0 00:19:35.004 22:35:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:35.004 22:35:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:35.004 22:35:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:35.004 22:35:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:35.004 22:35:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:35.004 22:35:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:35.004 22:35:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:35.004 22:35:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:35.004 22:35:58 nvmf_tcp.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:35.004 22:35:58 nvmf_tcp.nvmf_fips -- fips/fips.sh@89 -- # check_openssl_version 00:19:35.004 22:35:58 nvmf_tcp.nvmf_fips -- fips/fips.sh@83 -- # local target=3.0.0 00:19:35.004 22:35:58 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # openssl version 00:19:35.004 22:35:58 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # awk '{print $2}' 00:19:35.004 22:35:58 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:19:35.004 22:35:58 nvmf_tcp.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:19:35.004 22:35:58 nvmf_tcp.nvmf_fips -- scripts/common.sh@330 -- # local ver1 ver1_l 00:19:35.004 22:35:58 nvmf_tcp.nvmf_fips -- scripts/common.sh@331 -- # local ver2 ver2_l 00:19:35.004 22:35:58 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # IFS=.-: 00:19:35.004 22:35:58 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # read -ra ver1 00:19:35.004 22:35:58 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # IFS=.-: 00:19:35.004 22:35:58 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # read -ra ver2 00:19:35.004 22:35:58 nvmf_tcp.nvmf_fips -- scripts/common.sh@335 -- # local 'op=>=' 00:19:35.004 22:35:58 nvmf_tcp.nvmf_fips -- scripts/common.sh@337 -- # ver1_l=3 00:19:35.004 22:35:58 nvmf_tcp.nvmf_fips -- scripts/common.sh@338 -- # ver2_l=3 00:19:35.004 22:35:58 nvmf_tcp.nvmf_fips -- scripts/common.sh@340 -- # local lt=0 gt=0 eq=0 
v 00:19:35.004 22:35:58 nvmf_tcp.nvmf_fips -- scripts/common.sh@341 -- # case "$op" in 00:19:35.004 22:35:58 nvmf_tcp.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:19:35.004 22:35:58 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v = 0 )) 00:19:35.004 22:35:58 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:35.004 22:35:58 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 3 00:19:35.004 22:35:58 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:19:35.004 22:35:58 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:19:35.004 22:35:58 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:19:35.004 22:35:58 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=3 00:19:35.004 22:35:58 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 3 00:19:35.004 22:35:58 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:19:35.004 22:35:58 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:19:35.004 22:35:58 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:19:35.004 22:35:58 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=3 00:19:35.004 22:35:58 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:19:35.004 22:35:58 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:19:35.004 22:35:58 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:19:35.004 22:35:58 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:35.004 22:35:58 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 0 00:19:35.004 22:35:58 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:19:35.004 22:35:58 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:19:35.004 22:35:58 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:19:35.004 22:35:58 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=0 00:19:35.004 22:35:58 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:19:35.004 22:35:58 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:19:35.004 22:35:58 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:19:35.004 22:35:58 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:19:35.004 22:35:58 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:19:35.004 22:35:58 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:19:35.004 22:35:58 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:19:35.004 22:35:58 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:19:35.004 22:35:58 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:35.005 22:35:58 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 9 00:19:35.005 22:35:58 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=9 00:19:35.005 22:35:58 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 9 =~ ^[0-9]+$ ]] 00:19:35.005 22:35:58 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 9 00:19:35.005 22:35:58 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=9 00:19:35.005 22:35:58 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:19:35.005 22:35:58 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:19:35.005 22:35:58 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:19:35.005 22:35:58 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:19:35.005 22:35:58 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:19:35.005 22:35:58 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:19:35.005 22:35:58 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # return 0 00:19:35.005 22:35:58 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # openssl info -modulesdir 00:19:35.005 22:35:58 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:19:35.005 22:35:58 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:19:35.005 22:35:58 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:19:35.005 22:35:58 nvmf_tcp.nvmf_fips -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:19:35.005 22:35:58 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:19:35.005 22:35:58 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # callback=build_openssl_config 00:19:35.005 22:35:58 nvmf_tcp.nvmf_fips -- fips/fips.sh@113 -- # build_openssl_config 00:19:35.005 22:35:58 nvmf_tcp.nvmf_fips -- fips/fips.sh@37 -- # cat 00:19:35.005 22:35:58 nvmf_tcp.nvmf_fips -- fips/fips.sh@57 -- # [[ ! 
-t 0 ]] 00:19:35.005 22:35:58 nvmf_tcp.nvmf_fips -- fips/fips.sh@58 -- # cat - 00:19:35.005 22:35:58 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:19:35.005 22:35:58 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:19:35.005 22:35:58 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # mapfile -t providers 00:19:35.005 22:35:58 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # openssl list -providers 00:19:35.005 22:35:58 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # grep name 00:19:35.005 22:35:58 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:19:35.005 22:35:58 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:19:35.005 22:35:58 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:19:35.005 22:35:58 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:19:35.005 22:35:58 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@648 -- # local es=0 00:19:35.005 22:35:58 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@650 -- # valid_exec_arg openssl md5 /dev/fd/62 00:19:35.005 22:35:58 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@636 -- # local arg=openssl 00:19:35.005 22:35:58 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # : 00:19:35.005 22:35:58 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:35.005 22:35:58 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # type -t openssl 00:19:35.005 22:35:58 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:35.005 22:35:58 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # type -P openssl 00:19:35.005 22:35:58 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:35.005 22:35:58 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # arg=/usr/bin/openssl 00:19:35.005 22:35:58 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # [[ -x /usr/bin/openssl ]] 00:19:35.005 22:35:58 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # openssl md5 /dev/fd/62 00:19:35.005 Error setting digest 00:19:35.005 00D20720387F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:19:35.005 00D20720387F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:19:35.005 22:35:58 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # es=1 00:19:35.005 22:35:58 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:35.005 22:35:58 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:35.005 22:35:58 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:35.005 22:35:58 nvmf_tcp.nvmf_fips -- fips/fips.sh@130 -- # nvmftestinit 00:19:35.005 22:35:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:35.005 22:35:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:35.005 22:35:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:35.005 22:35:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:35.005 22:35:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:35.005 22:35:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:35.005 22:35:58 nvmf_tcp.nvmf_fips -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:35.005 22:35:58 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:35.005 22:35:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:35.005 22:35:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:35.005 22:35:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@285 -- # xtrace_disable 00:19:35.005 22:35:58 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:40.279 22:36:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:40.279 22:36:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@291 -- # pci_devs=() 00:19:40.279 22:36:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:40.279 22:36:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:40.279 22:36:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:40.279 22:36:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:40.279 22:36:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:40.279 22:36:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@295 -- # net_devs=() 00:19:40.279 22:36:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:40.279 22:36:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@296 -- # e810=() 00:19:40.279 22:36:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@296 -- # local -ga e810 00:19:40.279 22:36:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@297 -- # x722=() 00:19:40.279 22:36:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@297 -- # local -ga x722 00:19:40.279 22:36:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@298 -- # mlx=() 00:19:40.279 22:36:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@298 -- # local -ga mlx 00:19:40.279 22:36:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:40.279 22:36:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:40.279 22:36:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:40.279 22:36:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:40.279 22:36:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:40.279 22:36:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:40.279 22:36:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:40.279 22:36:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:40.279 22:36:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:40.279 22:36:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:40.279 22:36:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:40.279 22:36:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:40.279 22:36:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:40.279 22:36:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:40.279 22:36:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:40.279 22:36:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:40.279 22:36:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:40.279 
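The device-ID tables being built here are what let the harness decide, per PCI function, which NIC family it is looking at before picking interfaces. A condensed sketch of the same classification; the vendor and device IDs are copied from the trace above, while the case statement itself is an illustrative reshaping of the arrays, not the script's code:

    classify_nic() {                  # args: vendor ID, device ID (hex)
        case "$1:$2" in
            0x8086:0x1592|0x8086:0x159b) echo e810 ;;   # Intel E810 (ice driver)
            0x8086:0x37d2)               echo x722 ;;   # Intel X722
            0x15b3:*)                    echo mlx  ;;   # Mellanox families
            *)                           echo unknown ;;
        esac
    }
    classify_nic 0x8086 0x159b        # -> e810, matching the two ports found below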
22:36:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:40.279 22:36:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:19:40.279 Found 0000:86:00.0 (0x8086 - 0x159b) 00:19:40.279 22:36:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:40.279 22:36:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:40.279 22:36:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:40.279 22:36:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:40.279 22:36:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:40.279 22:36:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:40.279 22:36:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:19:40.279 Found 0000:86:00.1 (0x8086 - 0x159b) 00:19:40.279 22:36:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:40.279 22:36:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:40.279 22:36:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:40.279 22:36:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:40.279 22:36:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:40.279 22:36:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:40.279 22:36:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:40.279 22:36:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:40.279 22:36:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:40.279 22:36:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:40.279 22:36:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:40.279 22:36:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:40.279 22:36:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:40.279 22:36:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:40.279 22:36:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:40.279 22:36:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:19:40.279 Found net devices under 0000:86:00.0: cvl_0_0 00:19:40.279 22:36:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:40.279 22:36:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:40.279 22:36:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:40.279 22:36:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:40.279 22:36:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:40.279 22:36:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:40.279 22:36:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:40.279 22:36:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:40.279 22:36:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:19:40.279 Found net devices under 0000:86:00.1: cvl_0_1 00:19:40.279 22:36:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:19:40.279 22:36:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:40.279 22:36:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # is_hw=yes 00:19:40.279 22:36:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:40.279 22:36:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:19:40.279 22:36:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:19:40.279 22:36:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:40.279 22:36:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:40.279 22:36:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:40.279 22:36:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:40.279 22:36:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:40.279 22:36:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:40.279 22:36:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:40.279 22:36:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:40.279 22:36:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:40.279 22:36:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:40.279 22:36:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:40.279 22:36:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:40.280 22:36:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:40.280 22:36:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:40.280 22:36:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:40.280 22:36:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:40.280 22:36:03 nvmf_tcp.nvmf_fips -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:40.280 22:36:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:40.280 22:36:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:40.280 22:36:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:40.280 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:40.280 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.167 ms 00:19:40.280 00:19:40.280 --- 10.0.0.2 ping statistics --- 00:19:40.280 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:40.280 rtt min/avg/max/mdev = 0.167/0.167/0.167/0.000 ms 00:19:40.280 22:36:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:40.280 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:40.280 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.176 ms 00:19:40.280 00:19:40.280 --- 10.0.0.1 ping statistics --- 00:19:40.280 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:40.280 rtt min/avg/max/mdev = 0.176/0.176/0.176/0.000 ms 00:19:40.280 22:36:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:40.280 22:36:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@422 -- # return 0 00:19:40.280 22:36:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:40.280 22:36:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:40.280 22:36:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:40.280 22:36:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:40.280 22:36:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:40.280 22:36:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:40.280 22:36:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:40.280 22:36:04 nvmf_tcp.nvmf_fips -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:19:40.280 22:36:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:40.280 22:36:04 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:40.280 22:36:04 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:40.280 22:36:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@481 -- # nvmfpid=43375 00:19:40.280 22:36:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@482 -- # waitforlisten 43375 00:19:40.280 22:36:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:40.280 22:36:04 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@829 -- # '[' -z 43375 ']' 00:19:40.280 22:36:04 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:40.280 22:36:04 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:40.280 22:36:04 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:40.280 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:40.280 22:36:04 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:40.280 22:36:04 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:40.280 [2024-07-15 22:36:04.205260] Starting SPDK v24.09-pre git sha1 f8598a71f / DPDK 24.03.0 initialization... 00:19:40.280 [2024-07-15 22:36:04.205313] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:40.280 EAL: No free 2048 kB hugepages reported on node 1 00:19:40.539 [2024-07-15 22:36:04.262767] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:40.539 [2024-07-15 22:36:04.334624] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:40.539 [2024-07-15 22:36:04.334666] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:19:40.539 [2024-07-15 22:36:04.334673] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:40.539 [2024-07-15 22:36:04.334679] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:40.539 [2024-07-15 22:36:04.334684] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:40.539 [2024-07-15 22:36:04.334703] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:41.105 22:36:04 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:41.105 22:36:04 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@862 -- # return 0 00:19:41.105 22:36:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:41.105 22:36:04 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:41.105 22:36:04 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:41.105 22:36:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:41.105 22:36:05 nvmf_tcp.nvmf_fips -- fips/fips.sh@133 -- # trap cleanup EXIT 00:19:41.105 22:36:05 nvmf_tcp.nvmf_fips -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:19:41.106 22:36:05 nvmf_tcp.nvmf_fips -- fips/fips.sh@137 -- # key_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:19:41.106 22:36:05 nvmf_tcp.nvmf_fips -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:19:41.106 22:36:05 nvmf_tcp.nvmf_fips -- fips/fips.sh@139 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:19:41.106 22:36:05 nvmf_tcp.nvmf_fips -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:19:41.106 22:36:05 nvmf_tcp.nvmf_fips -- fips/fips.sh@22 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:19:41.106 22:36:05 nvmf_tcp.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:41.364 [2024-07-15 22:36:05.182063] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:41.364 [2024-07-15 22:36:05.198062] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:41.364 [2024-07-15 22:36:05.198242] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:41.364 [2024-07-15 22:36:05.226349] tcp.c:3693:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:19:41.364 malloc0 00:19:41.364 22:36:05 nvmf_tcp.nvmf_fips -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:41.364 22:36:05 nvmf_tcp.nvmf_fips -- fips/fips.sh@147 -- # bdevperf_pid=43535 00:19:41.364 22:36:05 nvmf_tcp.nvmf_fips -- fips/fips.sh@145 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:41.364 22:36:05 nvmf_tcp.nvmf_fips -- fips/fips.sh@148 -- # waitforlisten 43535 /var/tmp/bdevperf.sock 00:19:41.364 22:36:05 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@829 -- # '[' -z 43535 ']' 00:19:41.364 22:36:05 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:41.364 22:36:05 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # 
local max_retries=100 00:19:41.364 22:36:05 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:41.364 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:41.364 22:36:05 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:41.364 22:36:05 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:41.365 [2024-07-15 22:36:05.306179] Starting SPDK v24.09-pre git sha1 f8598a71f / DPDK 24.03.0 initialization... 00:19:41.365 [2024-07-15 22:36:05.306237] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid43535 ] 00:19:41.365 EAL: No free 2048 kB hugepages reported on node 1 00:19:41.623 [2024-07-15 22:36:05.357950] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:41.623 [2024-07-15 22:36:05.430375] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:42.191 22:36:06 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:42.191 22:36:06 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@862 -- # return 0 00:19:42.191 22:36:06 nvmf_tcp.nvmf_fips -- fips/fips.sh@150 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:19:42.450 [2024-07-15 22:36:06.248010] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:42.450 [2024-07-15 22:36:06.248090] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:19:42.450 TLSTESTn1 00:19:42.450 22:36:06 nvmf_tcp.nvmf_fips -- fips/fips.sh@154 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:42.450 Running I/O for 10 seconds... 
00:19:54.663 00:19:54.663 Latency(us) 00:19:54.663 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:54.663 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:54.663 Verification LBA range: start 0x0 length 0x2000 00:19:54.663 TLSTESTn1 : 10.03 2346.12 9.16 0.00 0.00 54481.45 7351.43 264423.51 00:19:54.663 =================================================================================================================== 00:19:54.663 Total : 2346.12 9.16 0.00 0.00 54481.45 7351.43 264423.51 00:19:54.663 0 00:19:54.663 22:36:16 nvmf_tcp.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:19:54.663 22:36:16 nvmf_tcp.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:19:54.663 22:36:16 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@806 -- # type=--id 00:19:54.663 22:36:16 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@807 -- # id=0 00:19:54.663 22:36:16 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:19:54.663 22:36:16 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:19:54.663 22:36:16 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:19:54.663 22:36:16 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:19:54.663 22:36:16 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@818 -- # for n in $shm_files 00:19:54.663 22:36:16 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:19:54.663 nvmf_trace.0 00:19:54.663 22:36:16 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@821 -- # return 0 00:19:54.663 22:36:16 nvmf_tcp.nvmf_fips -- fips/fips.sh@16 -- # killprocess 43535 00:19:54.663 22:36:16 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@948 -- # '[' -z 43535 ']' 00:19:54.663 22:36:16 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # kill -0 43535 00:19:54.663 22:36:16 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # uname 00:19:54.663 22:36:16 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:54.663 22:36:16 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 43535 00:19:54.663 22:36:16 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:19:54.663 22:36:16 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:19:54.663 22:36:16 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@966 -- # echo 'killing process with pid 43535' 00:19:54.663 killing process with pid 43535 00:19:54.663 22:36:16 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@967 -- # kill 43535 00:19:54.663 Received shutdown signal, test time was about 10.000000 seconds 00:19:54.663 00:19:54.663 Latency(us) 00:19:54.663 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:54.663 =================================================================================================================== 00:19:54.663 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:54.663 [2024-07-15 22:36:16.605572] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:19:54.663 22:36:16 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@972 -- # wait 43535 00:19:54.663 22:36:16 nvmf_tcp.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:19:54.663 22:36:16 nvmf_tcp.nvmf_fips -- 
nvmf/common.sh@488 -- # nvmfcleanup 00:19:54.663 22:36:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@117 -- # sync 00:19:54.663 22:36:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:54.663 22:36:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@120 -- # set +e 00:19:54.663 22:36:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:54.663 22:36:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:54.663 rmmod nvme_tcp 00:19:54.663 rmmod nvme_fabrics 00:19:54.663 rmmod nvme_keyring 00:19:54.663 22:36:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:54.663 22:36:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@124 -- # set -e 00:19:54.663 22:36:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@125 -- # return 0 00:19:54.663 22:36:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@489 -- # '[' -n 43375 ']' 00:19:54.663 22:36:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@490 -- # killprocess 43375 00:19:54.663 22:36:16 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@948 -- # '[' -z 43375 ']' 00:19:54.663 22:36:16 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # kill -0 43375 00:19:54.663 22:36:16 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # uname 00:19:54.663 22:36:16 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:54.663 22:36:16 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 43375 00:19:54.663 22:36:16 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:19:54.663 22:36:16 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:19:54.663 22:36:16 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@966 -- # echo 'killing process with pid 43375' 00:19:54.663 killing process with pid 43375 00:19:54.663 22:36:16 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@967 -- # kill 43375 00:19:54.663 [2024-07-15 22:36:16.897843] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:19:54.663 22:36:16 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@972 -- # wait 43375 00:19:54.663 22:36:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:54.663 22:36:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:54.663 22:36:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:54.663 22:36:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:54.663 22:36:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:54.663 22:36:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:54.663 22:36:17 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:54.663 22:36:17 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:55.230 22:36:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:55.230 22:36:19 nvmf_tcp.nvmf_fips -- fips/fips.sh@18 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:19:55.230 00:19:55.230 real 0m20.627s 00:19:55.230 user 0m22.091s 00:19:55.230 sys 0m9.297s 00:19:55.230 22:36:19 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:55.231 22:36:19 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:55.231 ************************************ 00:19:55.231 END TEST nvmf_fips 00:19:55.231 
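The module teardown traced here tolerates a busy module: nvmfcleanup drops errexit, retries removal of nvme-tcp up to 20 times, then removes nvme-fabrics and restores strict mode, which is why the rmmod lines for nvme_tcp, nvme_fabrics, and nvme_keyring appear between the modprobe calls. A minimal sketch of that idiom; the break-on-success and the sleep between attempts are assumptions, as the trace only shows the loop bounds:

    set +e
    for i in {1..20}; do
        modprobe -v -r nvme-tcp && break   # may fail while connections drain
        sleep 1                            # assumed back-off; not visible in the trace
    done
    modprobe -v -r nvme-fabrics
    set -e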
************************************ 00:19:55.231 22:36:19 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:19:55.231 22:36:19 nvmf_tcp -- nvmf/nvmf.sh@65 -- # '[' 0 -eq 1 ']' 00:19:55.231 22:36:19 nvmf_tcp -- nvmf/nvmf.sh@71 -- # [[ phy == phy ]] 00:19:55.231 22:36:19 nvmf_tcp -- nvmf/nvmf.sh@72 -- # '[' tcp = tcp ']' 00:19:55.231 22:36:19 nvmf_tcp -- nvmf/nvmf.sh@73 -- # gather_supported_nvmf_pci_devs 00:19:55.231 22:36:19 nvmf_tcp -- nvmf/common.sh@285 -- # xtrace_disable 00:19:55.231 22:36:19 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:01.804 22:36:24 nvmf_tcp -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:01.804 22:36:24 nvmf_tcp -- nvmf/common.sh@291 -- # pci_devs=() 00:20:01.804 22:36:24 nvmf_tcp -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:01.804 22:36:24 nvmf_tcp -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:01.804 22:36:24 nvmf_tcp -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:01.804 22:36:24 nvmf_tcp -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:01.804 22:36:24 nvmf_tcp -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:01.804 22:36:24 nvmf_tcp -- nvmf/common.sh@295 -- # net_devs=() 00:20:01.804 22:36:24 nvmf_tcp -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:01.804 22:36:24 nvmf_tcp -- nvmf/common.sh@296 -- # e810=() 00:20:01.804 22:36:24 nvmf_tcp -- nvmf/common.sh@296 -- # local -ga e810 00:20:01.804 22:36:24 nvmf_tcp -- nvmf/common.sh@297 -- # x722=() 00:20:01.804 22:36:24 nvmf_tcp -- nvmf/common.sh@297 -- # local -ga x722 00:20:01.804 22:36:24 nvmf_tcp -- nvmf/common.sh@298 -- # mlx=() 00:20:01.804 22:36:24 nvmf_tcp -- nvmf/common.sh@298 -- # local -ga mlx 00:20:01.804 22:36:24 nvmf_tcp -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:01.804 22:36:24 nvmf_tcp -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:01.804 22:36:24 nvmf_tcp -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:01.804 22:36:24 nvmf_tcp -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:01.804 22:36:24 nvmf_tcp -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:01.804 22:36:24 nvmf_tcp -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:01.804 22:36:24 nvmf_tcp -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:01.804 22:36:24 nvmf_tcp -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:01.804 22:36:24 nvmf_tcp -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:01.804 22:36:24 nvmf_tcp -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:01.804 22:36:24 nvmf_tcp -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:01.804 22:36:24 nvmf_tcp -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:01.804 22:36:24 nvmf_tcp -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:01.804 22:36:24 nvmf_tcp -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:01.804 22:36:24 nvmf_tcp -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:01.804 22:36:24 nvmf_tcp -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:01.804 22:36:24 nvmf_tcp -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:01.804 22:36:24 nvmf_tcp -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:01.804 22:36:24 nvmf_tcp -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:20:01.804 Found 0000:86:00.0 (0x8086 - 0x159b) 00:20:01.804 22:36:24 nvmf_tcp -- 
nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:01.804 22:36:24 nvmf_tcp -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:01.804 22:36:24 nvmf_tcp -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:01.805 22:36:24 nvmf_tcp -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:01.805 22:36:24 nvmf_tcp -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:01.805 22:36:24 nvmf_tcp -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:01.805 22:36:24 nvmf_tcp -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:20:01.805 Found 0000:86:00.1 (0x8086 - 0x159b) 00:20:01.805 22:36:24 nvmf_tcp -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:01.805 22:36:24 nvmf_tcp -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:01.805 22:36:24 nvmf_tcp -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:01.805 22:36:24 nvmf_tcp -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:01.805 22:36:24 nvmf_tcp -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:01.805 22:36:24 nvmf_tcp -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:01.805 22:36:24 nvmf_tcp -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:01.805 22:36:24 nvmf_tcp -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:01.805 22:36:24 nvmf_tcp -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:01.805 22:36:24 nvmf_tcp -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:01.805 22:36:24 nvmf_tcp -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:01.805 22:36:24 nvmf_tcp -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:01.805 22:36:24 nvmf_tcp -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:01.805 22:36:24 nvmf_tcp -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:01.805 22:36:24 nvmf_tcp -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:01.805 22:36:24 nvmf_tcp -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:20:01.805 Found net devices under 0000:86:00.0: cvl_0_0 00:20:01.805 22:36:24 nvmf_tcp -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:01.805 22:36:24 nvmf_tcp -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:01.805 22:36:24 nvmf_tcp -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:01.805 22:36:24 nvmf_tcp -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:01.805 22:36:24 nvmf_tcp -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:01.805 22:36:24 nvmf_tcp -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:01.805 22:36:24 nvmf_tcp -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:01.805 22:36:24 nvmf_tcp -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:01.805 22:36:24 nvmf_tcp -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:20:01.805 Found net devices under 0000:86:00.1: cvl_0_1 00:20:01.805 22:36:24 nvmf_tcp -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:01.805 22:36:24 nvmf_tcp -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:01.805 22:36:24 nvmf_tcp -- nvmf/nvmf.sh@74 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:01.805 22:36:24 nvmf_tcp -- nvmf/nvmf.sh@75 -- # (( 2 > 0 )) 00:20:01.805 22:36:24 nvmf_tcp -- nvmf/nvmf.sh@76 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:20:01.805 22:36:24 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:20:01.805 22:36:24 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 
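(The device walk above pairs PCI functions with kernel net devices through sysfs. A minimal standalone sketch of that discovery, assuming an Intel E810 NIC (device ID 0x159b) and the standard /sys/bus/pci layout — the real script reads a pre-built pci_bus_cache map, so the lspci loop and names here are illustrative only:
  # Enumerate E810 ports by PCI vendor:device and map each to its netdev.
  # 8086:159b is the E810-XXV ID seen in the log; adjust for other NICs.
  for pci in $(lspci -Dn -d 8086:159b | awk '{print $1}'); do
    for path in "/sys/bus/pci/devices/$pci/net/"*; do
      [[ -e $path ]] || continue            # port may have no netdev bound yet
      echo "Found net device under $pci: ${path##*/}"
    done
  done
Each discovered netdev is what later lands in TCP_INTERFACE_LIST.)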
00:20:01.805 22:36:24 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:01.805 ************************************ 00:20:01.805 START TEST nvmf_perf_adq 00:20:01.805 ************************************ 00:20:01.805 22:36:24 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:20:01.805 * Looking for test storage... 00:20:01.805 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:01.805 22:36:24 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:01.805 22:36:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:20:01.805 22:36:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:01.805 22:36:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:01.805 22:36:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:01.805 22:36:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:01.805 22:36:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:01.805 22:36:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:01.805 22:36:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:01.805 22:36:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:01.805 22:36:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:01.805 22:36:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:01.805 22:36:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:20:01.805 22:36:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:20:01.805 22:36:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:01.805 22:36:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:01.805 22:36:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:01.805 22:36:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:01.805 22:36:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:01.805 22:36:24 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:01.805 22:36:24 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:01.805 22:36:24 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:01.805 22:36:24 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:01.805 22:36:24 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:01.805 22:36:24 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:01.805 22:36:24 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:20:01.805 22:36:24 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:01.805 22:36:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@47 -- # : 0 00:20:01.805 22:36:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:01.805 22:36:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:01.805 22:36:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:01.805 22:36:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:01.805 22:36:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:01.805 22:36:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:01.805 22:36:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:01.805 22:36:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:01.805 22:36:24 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:20:01.805 22:36:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:20:01.805 22:36:24 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:05.994 22:36:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:05.994 22:36:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:20:05.994 22:36:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:05.994 22:36:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:05.994 22:36:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:05.994 22:36:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:05.994 22:36:29 nvmf_tcp.nvmf_perf_adq -- 
nvmf/common.sh@293 -- # local -A pci_drivers 00:20:05.994 22:36:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:20:05.994 22:36:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:05.994 22:36:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:20:05.994 22:36:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:20:05.994 22:36:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:20:05.994 22:36:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:20:05.994 22:36:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:20:05.994 22:36:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:20:05.994 22:36:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:05.994 22:36:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:05.994 22:36:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:05.994 22:36:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:05.994 22:36:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:05.994 22:36:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:05.994 22:36:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:05.994 22:36:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:05.994 22:36:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:05.994 22:36:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:05.994 22:36:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:05.994 22:36:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:05.994 22:36:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:05.994 22:36:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:05.994 22:36:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:05.994 22:36:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:05.994 22:36:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:05.994 22:36:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:05.994 22:36:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:20:05.994 Found 0000:86:00.0 (0x8086 - 0x159b) 00:20:05.994 22:36:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:05.994 22:36:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:05.994 22:36:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:05.994 22:36:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:05.994 22:36:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:05.994 22:36:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:05.994 22:36:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:20:05.994 Found 0000:86:00.1 (0x8086 - 0x159b) 
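(A few entries below, adq_reload_driver bounces the ice driver before the first ADQ run; a minimal sketch of that step, assuming the in-tree ice module — the five-second settle time is the script's own choice, visible at perf_adq.sh@53-55:
  # Reload the Intel ice driver so ADQ starts from clean channel/filter state.
  rmmod ice       # tears down any existing traffic classes and flower filters
  modprobe ice    # fresh probe re-creates the default queue layout
  sleep 5         # let the netdevs re-register before nvmftestinit re-scans
)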
00:20:05.994 22:36:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:05.994 22:36:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:05.994 22:36:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:05.994 22:36:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:05.994 22:36:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:05.994 22:36:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:05.994 22:36:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:05.994 22:36:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:05.994 22:36:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:05.994 22:36:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:05.994 22:36:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:05.994 22:36:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:05.994 22:36:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:05.994 22:36:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:05.994 22:36:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:05.994 22:36:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:20:05.994 Found net devices under 0000:86:00.0: cvl_0_0 00:20:05.994 22:36:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:05.994 22:36:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:05.994 22:36:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:05.994 22:36:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:05.994 22:36:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:05.994 22:36:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:05.994 22:36:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:05.994 22:36:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:05.994 22:36:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:20:05.994 Found net devices under 0000:86:00.1: cvl_0_1 00:20:05.994 22:36:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:05.994 22:36:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:05.994 22:36:29 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:05.994 22:36:29 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:20:05.994 22:36:29 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:20:05.994 22:36:29 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@60 -- # adq_reload_driver 00:20:05.994 22:36:29 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:20:06.963 22:36:30 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:20:08.893 22:36:32 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:20:14.164 22:36:37 
nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@68 -- # nvmftestinit 00:20:14.164 22:36:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:14.164 22:36:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:14.164 22:36:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:14.164 22:36:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:14.164 22:36:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:14.164 22:36:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:14.164 22:36:37 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:14.164 22:36:37 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:14.164 22:36:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:14.164 22:36:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:14.164 22:36:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:20:14.164 22:36:37 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:14.164 22:36:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:14.164 22:36:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:20:14.164 22:36:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:14.164 22:36:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:14.164 22:36:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:14.164 22:36:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:14.164 22:36:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:14.164 22:36:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:20:14.164 22:36:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:14.164 22:36:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:20:14.164 22:36:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:20:14.164 22:36:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:20:14.164 22:36:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:20:14.164 22:36:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:20:14.164 22:36:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:20:14.164 22:36:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:14.164 22:36:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:14.164 22:36:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:14.164 22:36:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:14.164 22:36:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:14.164 22:36:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:14.164 22:36:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:14.164 22:36:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:14.164 22:36:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:14.164 22:36:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:14.164 22:36:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:14.164 22:36:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:14.164 22:36:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:14.164 22:36:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:14.164 22:36:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:14.164 22:36:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:14.164 22:36:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:14.164 22:36:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:14.164 22:36:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:20:14.164 Found 0000:86:00.0 (0x8086 - 0x159b) 00:20:14.164 22:36:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:14.164 22:36:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:14.164 22:36:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:14.164 22:36:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:14.164 22:36:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:14.164 22:36:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:14.164 22:36:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:20:14.164 Found 0000:86:00.1 (0x8086 - 0x159b) 00:20:14.164 22:36:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:14.164 22:36:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:14.164 22:36:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:14.164 22:36:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:14.164 22:36:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:14.164 22:36:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:14.164 22:36:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:14.164 22:36:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:14.164 22:36:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:14.164 22:36:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:14.165 22:36:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:14.165 22:36:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:14.165 22:36:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:14.165 22:36:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:14.165 22:36:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:14.165 22:36:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:20:14.165 Found net devices under 0000:86:00.0: cvl_0_0 00:20:14.165 22:36:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:20:14.165 22:36:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:14.165 22:36:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:14.165 22:36:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:14.165 22:36:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:14.165 22:36:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:14.165 22:36:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:14.165 22:36:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:14.165 22:36:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:20:14.165 Found net devices under 0000:86:00.1: cvl_0_1 00:20:14.165 22:36:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:14.165 22:36:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:14.165 22:36:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:20:14.165 22:36:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:14.165 22:36:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:14.165 22:36:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:14.165 22:36:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:14.165 22:36:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:14.165 22:36:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:14.165 22:36:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:14.165 22:36:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:14.165 22:36:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:14.165 22:36:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:14.165 22:36:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:14.165 22:36:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:14.165 22:36:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:14.165 22:36:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:14.165 22:36:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:14.165 22:36:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:14.165 22:36:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:14.165 22:36:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:14.165 22:36:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:14.165 22:36:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:14.165 22:36:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:14.165 22:36:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:14.165 22:36:38 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:14.165 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:14.165 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.240 ms 00:20:14.165 00:20:14.165 --- 10.0.0.2 ping statistics --- 00:20:14.165 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:14.165 rtt min/avg/max/mdev = 0.240/0.240/0.240/0.000 ms 00:20:14.165 22:36:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:14.165 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:14.165 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.164 ms 00:20:14.165 00:20:14.165 --- 10.0.0.1 ping statistics --- 00:20:14.165 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:14.165 rtt min/avg/max/mdev = 0.164/0.164/0.164/0.000 ms 00:20:14.165 22:36:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:14.165 22:36:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:20:14.165 22:36:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:14.165 22:36:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:14.165 22:36:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:14.165 22:36:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:14.165 22:36:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:14.165 22:36:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:14.165 22:36:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:14.165 22:36:38 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@69 -- # nvmfappstart -m 0xF --wait-for-rpc 00:20:14.165 22:36:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:14.165 22:36:38 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:14.165 22:36:38 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:14.425 22:36:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=53708 00:20:14.425 22:36:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 53708 00:20:14.426 22:36:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:20:14.426 22:36:38 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@829 -- # '[' -z 53708 ']' 00:20:14.426 22:36:38 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:14.426 22:36:38 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:14.426 22:36:38 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:14.426 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:14.426 22:36:38 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:14.426 22:36:38 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:14.426 [2024-07-15 22:36:38.181784] Starting SPDK v24.09-pre git sha1 f8598a71f / DPDK 24.03.0 initialization... 
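(The namespace plumbing above gives the target and initiator separate network stacks on a single host. A condensed sketch of that topology, using the interface names and addresses from the log and assuming the two cvl_* ports are cabled back-to-back:
  ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1   # start from clean addressing
  ip netns add cvl_0_0_ns_spdk                           # target gets its own netns
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # move the target port inside
  ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator stays in the root ns
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT  # admit NVMe/TCP
  ping -c 1 10.0.0.2                                     # cross-namespace sanity check
The two pings above and below confirm both directions before nvmf_tgt is launched inside the namespace.)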
00:20:14.426 [2024-07-15 22:36:38.181826] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:14.426 EAL: No free 2048 kB hugepages reported on node 1 00:20:14.426 [2024-07-15 22:36:38.238357] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:14.426 [2024-07-15 22:36:38.318673] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:14.426 [2024-07-15 22:36:38.318713] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:14.426 [2024-07-15 22:36:38.318720] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:14.426 [2024-07-15 22:36:38.318727] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:14.426 [2024-07-15 22:36:38.318732] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:14.426 [2024-07-15 22:36:38.318776] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:14.426 [2024-07-15 22:36:38.318873] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:14.426 [2024-07-15 22:36:38.318953] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:20:14.426 [2024-07-15 22:36:38.318954] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:15.362 22:36:38 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:15.362 22:36:38 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@862 -- # return 0 00:20:15.362 22:36:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:15.362 22:36:38 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:15.362 22:36:38 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:15.362 22:36:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:15.362 22:36:39 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@70 -- # adq_configure_nvmf_target 0 00:20:15.362 22:36:39 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:20:15.362 22:36:39 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:20:15.362 22:36:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:15.362 22:36:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:15.362 22:36:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:15.362 22:36:39 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:20:15.362 22:36:39 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:20:15.362 22:36:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:15.362 22:36:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:15.362 22:36:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:15.362 22:36:39 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:20:15.362 22:36:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:15.362 22:36:39 nvmf_tcp.nvmf_perf_adq -- 
common/autotest_common.sh@10 -- # set +x 00:20:15.362 22:36:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:15.362 22:36:39 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:20:15.362 22:36:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:15.362 22:36:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:15.362 [2024-07-15 22:36:39.180185] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:15.362 22:36:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:15.362 22:36:39 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:20:15.362 22:36:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:15.362 22:36:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:15.362 Malloc1 00:20:15.362 22:36:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:15.362 22:36:39 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:15.362 22:36:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:15.362 22:36:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:15.362 22:36:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:15.362 22:36:39 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:20:15.362 22:36:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:15.362 22:36:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:15.362 22:36:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:15.362 22:36:39 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:15.362 22:36:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:15.362 22:36:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:15.362 [2024-07-15 22:36:39.228131] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:15.362 22:36:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:15.362 22:36:39 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@74 -- # perfpid=53948 00:20:15.362 22:36:39 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@75 -- # sleep 2 00:20:15.362 22:36:39 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:20:15.362 EAL: No free 2048 kB hugepages reported on node 1 00:20:17.274 22:36:41 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@77 -- # rpc_cmd nvmf_get_stats 00:20:17.274 22:36:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:17.274 22:36:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:17.533 22:36:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:17.533 22:36:41 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmf_stats='{ 00:20:17.533 
"tick_rate": 2300000000, 00:20:17.533 "poll_groups": [ 00:20:17.533 { 00:20:17.533 "name": "nvmf_tgt_poll_group_000", 00:20:17.533 "admin_qpairs": 1, 00:20:17.533 "io_qpairs": 1, 00:20:17.533 "current_admin_qpairs": 1, 00:20:17.533 "current_io_qpairs": 1, 00:20:17.533 "pending_bdev_io": 0, 00:20:17.533 "completed_nvme_io": 19552, 00:20:17.533 "transports": [ 00:20:17.533 { 00:20:17.533 "trtype": "TCP" 00:20:17.533 } 00:20:17.533 ] 00:20:17.533 }, 00:20:17.533 { 00:20:17.533 "name": "nvmf_tgt_poll_group_001", 00:20:17.533 "admin_qpairs": 0, 00:20:17.533 "io_qpairs": 1, 00:20:17.533 "current_admin_qpairs": 0, 00:20:17.533 "current_io_qpairs": 1, 00:20:17.533 "pending_bdev_io": 0, 00:20:17.533 "completed_nvme_io": 20189, 00:20:17.533 "transports": [ 00:20:17.533 { 00:20:17.533 "trtype": "TCP" 00:20:17.533 } 00:20:17.533 ] 00:20:17.533 }, 00:20:17.533 { 00:20:17.533 "name": "nvmf_tgt_poll_group_002", 00:20:17.533 "admin_qpairs": 0, 00:20:17.533 "io_qpairs": 1, 00:20:17.533 "current_admin_qpairs": 0, 00:20:17.533 "current_io_qpairs": 1, 00:20:17.533 "pending_bdev_io": 0, 00:20:17.533 "completed_nvme_io": 19879, 00:20:17.533 "transports": [ 00:20:17.533 { 00:20:17.533 "trtype": "TCP" 00:20:17.533 } 00:20:17.533 ] 00:20:17.533 }, 00:20:17.533 { 00:20:17.533 "name": "nvmf_tgt_poll_group_003", 00:20:17.533 "admin_qpairs": 0, 00:20:17.533 "io_qpairs": 1, 00:20:17.533 "current_admin_qpairs": 0, 00:20:17.533 "current_io_qpairs": 1, 00:20:17.533 "pending_bdev_io": 0, 00:20:17.533 "completed_nvme_io": 19591, 00:20:17.533 "transports": [ 00:20:17.533 { 00:20:17.533 "trtype": "TCP" 00:20:17.533 } 00:20:17.533 ] 00:20:17.533 } 00:20:17.533 ] 00:20:17.533 }' 00:20:17.533 22:36:41 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:20:17.533 22:36:41 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # wc -l 00:20:17.533 22:36:41 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # count=4 00:20:17.533 22:36:41 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@79 -- # [[ 4 -ne 4 ]] 00:20:17.533 22:36:41 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@83 -- # wait 53948 00:20:25.671 Initializing NVMe Controllers 00:20:25.672 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:25.672 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:20:25.672 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:20:25.672 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:20:25.672 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:20:25.672 Initialization complete. Launching workers. 
00:20:25.672 ======================================================== 00:20:25.672 Latency(us) 00:20:25.672 Device Information : IOPS MiB/s Average min max 00:20:25.672 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 10277.60 40.15 6227.60 2209.86 10794.01 00:20:25.672 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 10649.00 41.60 6011.63 2160.54 10486.41 00:20:25.672 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 10560.30 41.25 6062.10 1891.50 10603.23 00:20:25.672 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 10426.90 40.73 6139.16 1890.45 10351.57 00:20:25.672 ======================================================== 00:20:25.672 Total : 41913.79 163.73 6109.03 1890.45 10794.01 00:20:25.672 00:20:25.672 22:36:49 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@84 -- # nvmftestfini 00:20:25.672 22:36:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:25.672 22:36:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:20:25.672 22:36:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:25.672 22:36:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:20:25.672 22:36:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:25.672 22:36:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:25.672 rmmod nvme_tcp 00:20:25.672 rmmod nvme_fabrics 00:20:25.672 rmmod nvme_keyring 00:20:25.672 22:36:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:25.672 22:36:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:20:25.672 22:36:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:20:25.672 22:36:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 53708 ']' 00:20:25.672 22:36:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@490 -- # killprocess 53708 00:20:25.672 22:36:49 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@948 -- # '[' -z 53708 ']' 00:20:25.672 22:36:49 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # kill -0 53708 00:20:25.672 22:36:49 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # uname 00:20:25.672 22:36:49 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:25.672 22:36:49 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 53708 00:20:25.672 22:36:49 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:25.672 22:36:49 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:20:25.672 22:36:49 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@966 -- # echo 'killing process with pid 53708' 00:20:25.672 killing process with pid 53708 00:20:25.672 22:36:49 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@967 -- # kill 53708 00:20:25.672 22:36:49 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@972 -- # wait 53708 00:20:25.931 22:36:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:25.931 22:36:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:25.931 22:36:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:25.931 22:36:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:25.931 22:36:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:25.931 22:36:49 nvmf_tcp.nvmf_perf_adq -- 
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:25.931 22:36:49 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:25.931 22:36:49 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:27.836 22:36:51 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:27.836 22:36:51 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@86 -- # adq_reload_driver 00:20:27.836 22:36:51 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:20:29.213 22:36:52 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:20:31.114 22:36:54 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:20:36.463 22:36:59 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@89 -- # nvmftestinit 00:20:36.463 22:36:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:36.463 22:36:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:36.463 22:36:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:36.463 22:36:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:36.463 22:36:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:36.463 22:36:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:36.463 22:36:59 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:36.463 22:36:59 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:36.463 22:36:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:36.463 22:36:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:36.463 22:36:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:20:36.463 22:36:59 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:36.463 22:36:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:36.463 22:36:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:20:36.463 22:36:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:36.463 22:36:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:36.463 22:36:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:36.463 22:36:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:36.463 22:36:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:36.463 22:36:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:20:36.463 22:36:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:36.463 22:36:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:20:36.463 22:36:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:20:36.463 22:36:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:20:36.463 22:36:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:20:36.463 22:36:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:20:36.463 22:36:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:20:36.463 22:36:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:36.463 22:36:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:36.463 22:36:59 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:36.463 22:36:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:36.463 22:36:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:36.463 22:36:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:36.463 22:36:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:36.463 22:36:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:36.463 22:36:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:36.463 22:36:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:36.463 22:36:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:36.463 22:36:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:36.463 22:36:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:36.463 22:36:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:36.463 22:36:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:36.463 22:36:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:36.463 22:36:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:36.463 22:36:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:36.463 22:36:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:20:36.463 Found 0000:86:00.0 (0x8086 - 0x159b) 00:20:36.463 22:36:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:36.463 22:36:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:36.463 22:36:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:36.463 22:36:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:36.463 22:36:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:36.463 22:36:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:36.463 22:36:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:20:36.463 Found 0000:86:00.1 (0x8086 - 0x159b) 00:20:36.463 22:36:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:36.463 22:36:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:36.463 22:36:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:36.463 22:36:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:36.463 22:36:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:36.463 22:36:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:36.463 22:36:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:36.464 22:36:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:36.464 22:36:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:36.464 22:36:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
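(The repeated '[[ up == up ]]' tests in this rescan filter each candidate port on link state; a sketch of that check, assuming it reads the netdev operstate attribute — the attribute name is an assumption here, not quoted from nvmf/common.sh:
  # Keep only ports whose kernel netdev reports an operational link.
  pci=0000:86:00.0
  for path in "/sys/bus/pci/devices/$pci/net/"*; do
    [[ $(cat "$path/operstate") == up ]] && echo "usable port: ${path##*/}"
  done
)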
00:20:36.464 22:36:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:36.464 22:36:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:36.464 22:36:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:36.464 22:36:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:36.464 22:36:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:36.464 22:36:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:20:36.464 Found net devices under 0000:86:00.0: cvl_0_0 00:20:36.464 22:36:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:36.464 22:36:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:36.464 22:36:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:36.464 22:36:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:36.464 22:36:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:36.464 22:36:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:36.464 22:36:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:36.464 22:36:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:36.464 22:36:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:20:36.464 Found net devices under 0000:86:00.1: cvl_0_1 00:20:36.464 22:36:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:36.464 22:36:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:36.464 22:36:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:20:36.464 22:36:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:36.464 22:36:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:36.464 22:36:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:36.464 22:36:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:36.464 22:36:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:36.464 22:36:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:36.464 22:36:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:36.464 22:36:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:36.464 22:36:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:36.464 22:36:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:36.464 22:36:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:36.464 22:36:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:36.464 22:36:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:36.464 22:36:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:36.464 22:36:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:36.464 22:36:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:36.464 
22:36:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:36.464 22:36:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:36.464 22:36:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:36.464 22:36:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:36.464 22:36:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:36.464 22:36:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:36.464 22:37:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:36.464 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:36.464 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.254 ms 00:20:36.464 00:20:36.464 --- 10.0.0.2 ping statistics --- 00:20:36.464 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:36.464 rtt min/avg/max/mdev = 0.254/0.254/0.254/0.000 ms 00:20:36.464 22:37:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:36.464 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:36.464 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.173 ms 00:20:36.464 00:20:36.464 --- 10.0.0.1 ping statistics --- 00:20:36.464 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:36.464 rtt min/avg/max/mdev = 0.173/0.173/0.173/0.000 ms 00:20:36.464 22:37:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:36.464 22:37:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:20:36.464 22:37:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:36.464 22:37:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:36.464 22:37:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:36.464 22:37:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:36.464 22:37:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:36.464 22:37:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:36.464 22:37:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:36.464 22:37:00 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@90 -- # adq_configure_driver 00:20:36.464 22:37:00 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:20:36.464 22:37:00 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:20:36.464 22:37:00 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:20:36.464 net.core.busy_poll = 1 00:20:36.464 22:37:00 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:20:36.464 net.core.busy_read = 1 00:20:36.464 22:37:00 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:20:36.464 22:37:00 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:20:36.464 22:37:00 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc 
add dev cvl_0_0 ingress 00:20:36.464 22:37:00 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:20:36.464 22:37:00 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:20:36.464 22:37:00 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@91 -- # nvmfappstart -m 0xF --wait-for-rpc 00:20:36.464 22:37:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:36.464 22:37:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:36.464 22:37:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:36.464 22:37:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=57644 00:20:36.464 22:37:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 57644 00:20:36.464 22:37:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@829 -- # '[' -z 57644 ']' 00:20:36.464 22:37:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:36.464 22:37:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:36.464 22:37:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:36.464 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:36.464 22:37:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:20:36.464 22:37:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:36.464 22:37:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:36.464 [2024-07-15 22:37:00.333216] Starting SPDK v24.09-pre git sha1 f8598a71f / DPDK 24.03.0 initialization... 00:20:36.464 [2024-07-15 22:37:00.333282] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:36.464 EAL: No free 2048 kB hugepages reported on node 1 00:20:36.464 [2024-07-15 22:37:00.394822] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:36.723 [2024-07-15 22:37:00.474469] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:36.723 [2024-07-15 22:37:00.474505] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:36.723 [2024-07-15 22:37:00.474512] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:36.724 [2024-07-15 22:37:00.474519] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:36.724 [2024-07-15 22:37:00.474524] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
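The adq_configure_driver block traced above is the heart of the ADQ setup: enable hardware TC offload and busy polling, carve the NIC queues into two traffic classes with an mqprio root qdisc, then pin NVMe/TCP flows to the second class with a hardware-only flower filter. Pulled out of the xtrace (all run inside the target namespace):

    ethtool --offload cvl_0_0 hw-tc-offload on
    ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off
    sysctl -w net.core.busy_poll=1 net.core.busy_read=1
    # Two traffic classes: TC0 -> queues 0-1 (default traffic), TC1 -> queues 2-3 (ADQ set).
    tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel
    tc qdisc add dev cvl_0_0 ingress
    # Steer NVMe/TCP to 10.0.0.2:4420 into TC1 entirely in NIC hardware (skip_sw).
    tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower \
        dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1

The set_xps_rxqs helper then aligns transmit queues with receive queues so both directions of a connection stay on the same channel.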
00:20:36.724 [2024-07-15 22:37:00.474577] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:36.724 [2024-07-15 22:37:00.474801] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:36.724 [2024-07-15 22:37:00.474820] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:20:36.724 [2024-07-15 22:37:00.474821] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:37.292 22:37:01 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:37.292 22:37:01 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@862 -- # return 0 00:20:37.292 22:37:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:37.292 22:37:01 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:37.292 22:37:01 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:37.292 22:37:01 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:37.292 22:37:01 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@92 -- # adq_configure_nvmf_target 1 00:20:37.292 22:37:01 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:20:37.292 22:37:01 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:37.292 22:37:01 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:37.292 22:37:01 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:20:37.292 22:37:01 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:37.292 22:37:01 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:20:37.292 22:37:01 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:20:37.292 22:37:01 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:37.292 22:37:01 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:37.292 22:37:01 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:37.292 22:37:01 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:20:37.292 22:37:01 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:37.292 22:37:01 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:37.551 22:37:01 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:37.551 22:37:01 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:20:37.551 22:37:01 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:37.551 22:37:01 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:37.551 [2024-07-15 22:37:01.296827] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:37.551 22:37:01 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:37.551 22:37:01 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:20:37.551 22:37:01 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:37.551 22:37:01 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:37.551 Malloc1 00:20:37.551 22:37:01 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:37.551 22:37:01 
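With nvmf_tgt started under --wait-for-rpc, adq_configure_nvmf_target drives the target through the RPCs traced above before normal operation begins. Assuming rpc_cmd simply forwards its arguments to scripts/rpc.py on /var/tmp/spdk.sock, the sequence is equivalent to:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    impl=$($rpc sock_get_default_impl | jq -r .impl_name)    # "posix" in this run
    # Placement IDs let the target group incoming connections by the hardware
    # queue they arrived on; zero-copy sends cut per-I/O CPU cost.
    $rpc sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i "$impl"
    $rpc framework_start_init                                 # leave --wait-for-rpc state
    $rpc nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1
    $rpc bdev_malloc_create 64 512 -b Malloc1                 # 64 MiB bdev, 512 B blocks

The --sock-priority 1 appears to be what ties accepted sockets to the ADQ traffic class configured earlier.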
nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:37.551 22:37:01 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:37.551 22:37:01 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:37.551 22:37:01 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:37.551 22:37:01 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:20:37.551 22:37:01 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:37.551 22:37:01 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:37.551 22:37:01 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:37.551 22:37:01 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:37.551 22:37:01 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:37.551 22:37:01 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:37.551 [2024-07-15 22:37:01.340358] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:37.551 22:37:01 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:37.551 22:37:01 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@96 -- # perfpid=57784 00:20:37.551 22:37:01 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@97 -- # sleep 2 00:20:37.551 22:37:01 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:20:37.551 EAL: No free 2048 kB hugepages reported on node 1 00:20:39.455 22:37:03 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@99 -- # rpc_cmd nvmf_get_stats 00:20:39.455 22:37:03 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:39.455 22:37:03 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:39.455 22:37:03 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:39.455 22:37:03 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmf_stats='{ 00:20:39.455 "tick_rate": 2300000000, 00:20:39.455 "poll_groups": [ 00:20:39.455 { 00:20:39.455 "name": "nvmf_tgt_poll_group_000", 00:20:39.455 "admin_qpairs": 1, 00:20:39.455 "io_qpairs": 3, 00:20:39.455 "current_admin_qpairs": 1, 00:20:39.455 "current_io_qpairs": 3, 00:20:39.455 "pending_bdev_io": 0, 00:20:39.455 "completed_nvme_io": 29933, 00:20:39.455 "transports": [ 00:20:39.455 { 00:20:39.455 "trtype": "TCP" 00:20:39.455 } 00:20:39.455 ] 00:20:39.455 }, 00:20:39.455 { 00:20:39.455 "name": "nvmf_tgt_poll_group_001", 00:20:39.455 "admin_qpairs": 0, 00:20:39.455 "io_qpairs": 1, 00:20:39.455 "current_admin_qpairs": 0, 00:20:39.455 "current_io_qpairs": 1, 00:20:39.455 "pending_bdev_io": 0, 00:20:39.455 "completed_nvme_io": 27673, 00:20:39.455 "transports": [ 00:20:39.455 { 00:20:39.455 "trtype": "TCP" 00:20:39.455 } 00:20:39.455 ] 00:20:39.455 }, 00:20:39.455 { 00:20:39.455 "name": "nvmf_tgt_poll_group_002", 00:20:39.455 "admin_qpairs": 0, 00:20:39.455 "io_qpairs": 0, 00:20:39.455 "current_admin_qpairs": 0, 00:20:39.455 "current_io_qpairs": 0, 00:20:39.455 "pending_bdev_io": 0, 00:20:39.455 "completed_nvme_io": 0, 
00:20:39.455 "transports": [ 00:20:39.455 { 00:20:39.455 "trtype": "TCP" 00:20:39.455 } 00:20:39.455 ] 00:20:39.455 }, 00:20:39.455 { 00:20:39.455 "name": "nvmf_tgt_poll_group_003", 00:20:39.455 "admin_qpairs": 0, 00:20:39.455 "io_qpairs": 0, 00:20:39.455 "current_admin_qpairs": 0, 00:20:39.455 "current_io_qpairs": 0, 00:20:39.455 "pending_bdev_io": 0, 00:20:39.455 "completed_nvme_io": 0, 00:20:39.455 "transports": [ 00:20:39.455 { 00:20:39.455 "trtype": "TCP" 00:20:39.455 } 00:20:39.455 ] 00:20:39.455 } 00:20:39.455 ] 00:20:39.455 }' 00:20:39.455 22:37:03 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # wc -l 00:20:39.455 22:37:03 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:20:39.455 22:37:03 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # count=2 00:20:39.455 22:37:03 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@101 -- # [[ 2 -lt 2 ]] 00:20:39.455 22:37:03 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@106 -- # wait 57784 00:20:47.567 Initializing NVMe Controllers 00:20:47.567 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:47.567 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:20:47.567 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:20:47.567 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:20:47.567 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:20:47.567 Initialization complete. Launching workers. 00:20:47.567 ======================================================== 00:20:47.567 Latency(us) 00:20:47.567 Device Information : IOPS MiB/s Average min max 00:20:47.567 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 5017.60 19.60 12762.00 1520.37 59448.82 00:20:47.567 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 5212.20 20.36 12285.69 1756.02 59894.17 00:20:47.567 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 5616.60 21.94 11434.36 1872.33 58168.55 00:20:47.567 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 14608.80 57.07 4381.76 1421.07 7260.14 00:20:47.567 ======================================================== 00:20:47.567 Total : 30455.20 118.97 8415.79 1421.07 59894.17 00:20:47.567 00:20:47.825 22:37:11 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmftestfini 00:20:47.825 22:37:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:47.825 22:37:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:20:47.825 22:37:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:47.825 22:37:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:20:47.825 22:37:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:47.825 22:37:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:47.825 rmmod nvme_tcp 00:20:47.825 rmmod nvme_fabrics 00:20:47.825 rmmod nvme_keyring 00:20:47.825 22:37:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:47.825 22:37:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:20:47.825 22:37:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:20:47.825 22:37:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 57644 ']' 00:20:47.825 22:37:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@490 -- # 
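The verdict comes from the wc -l / jq pair above: count the poll groups that currently host zero I/O qpairs and fail if fewer than two of the four are idle, i.e. if ADQ did not concentrate the connections onto a subset of cores. As a standalone sketch:

    # select() emits one object per idle poll group and length prints one
    # line per object, so wc -l yields the idle-group count.
    count=$(rpc_cmd nvmf_get_stats \
        | jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' \
        | wc -l)
    if [[ $count -lt 2 ]]; then
        echo "ADQ steering failed: only $count idle poll groups" >&2
        return 1
    fi

The stats and the perf table line up: poll_group_001's single qpair (presumably the connection from initiator core 7) sustains ~14.6k IOPS at ~4.4 ms average latency, while the three qpairs sharing poll_group_000 (cores 4-6) each see ~5-5.6k IOPS at ~11-13 ms.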
killprocess 57644 00:20:47.825 22:37:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@948 -- # '[' -z 57644 ']' 00:20:47.825 22:37:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # kill -0 57644 00:20:47.825 22:37:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # uname 00:20:47.825 22:37:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:47.825 22:37:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 57644 00:20:47.825 22:37:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:47.825 22:37:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:20:47.825 22:37:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@966 -- # echo 'killing process with pid 57644' 00:20:47.825 killing process with pid 57644 00:20:47.825 22:37:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@967 -- # kill 57644 00:20:47.825 22:37:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@972 -- # wait 57644 00:20:48.084 22:37:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:48.084 22:37:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:48.084 22:37:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:48.084 22:37:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:48.084 22:37:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:48.084 22:37:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:48.084 22:37:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:48.084 22:37:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:49.990 22:37:13 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:49.990 22:37:13 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:20:49.990 00:20:49.990 real 0m49.392s 00:20:49.990 user 2m49.253s 00:20:49.990 sys 0m9.284s 00:20:49.990 22:37:13 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:49.990 22:37:13 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:49.990 ************************************ 00:20:49.990 END TEST nvmf_perf_adq 00:20:49.990 ************************************ 00:20:50.248 22:37:13 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:20:50.248 22:37:13 nvmf_tcp -- nvmf/nvmf.sh@83 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:20:50.248 22:37:13 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:20:50.248 22:37:13 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:50.248 22:37:13 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:50.248 ************************************ 00:20:50.248 START TEST nvmf_shutdown 00:20:50.248 ************************************ 00:20:50.248 22:37:14 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:20:50.248 * Looking for test storage... 
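nvmftestfini unwinds the setup in reverse, as traced: detach the host-side kernel modules, kill the target (killprocess first checks that the pid still names reactor_0 and is not a sudo wrapper), then drop the namespace and flush the initiator address. Roughly:

    modprobe -v -r nvme-tcp               # the rmmod nvme_tcp/nvme_fabrics/nvme_keyring above
    modprobe -v -r nvme-fabrics
    kill "$nvmfpid" && wait "$nvmfpid"    # killprocess, minus its safety checks
    ip netns delete cvl_0_0_ns_spdk       # assumed to be what remove_spdk_ns amounts to here
    ip -4 addr flush cvl_0_1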
00:20:50.248 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:50.248 22:37:14 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:50.248 22:37:14 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:20:50.248 22:37:14 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:50.248 22:37:14 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:50.248 22:37:14 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:50.248 22:37:14 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:50.248 22:37:14 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:50.248 22:37:14 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:50.248 22:37:14 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:50.248 22:37:14 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:50.248 22:37:14 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:50.248 22:37:14 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:50.248 22:37:14 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:20:50.248 22:37:14 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:20:50.248 22:37:14 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:50.248 22:37:14 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:50.248 22:37:14 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:50.248 22:37:14 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:50.248 22:37:14 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:50.248 22:37:14 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:50.248 22:37:14 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:50.248 22:37:14 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:50.248 22:37:14 nvmf_tcp.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:50.248 22:37:14 nvmf_tcp.nvmf_shutdown -- paths/export.sh@3 -- # 
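Note the fresh host identity minted by nvme gen-hostnqn at nvmf/common.sh@17 above; the HOSTID appears to be the NQN's trailing UUID (derivation assumed from the logged values):

    NVME_HOSTNQN=$(nvme gen-hostnqn)    # nqn.2014-08.org.nvmexpress:uuid:<uuid>
    NVME_HOSTID=${NVME_HOSTNQN##*:}     # bare UUID, 80aaeb9f-... in this run
    NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")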
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:50.249 22:37:14 nvmf_tcp.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:50.249 22:37:14 nvmf_tcp.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:20:50.249 22:37:14 nvmf_tcp.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:50.249 22:37:14 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@47 -- # : 0 00:20:50.249 22:37:14 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:50.249 22:37:14 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:50.249 22:37:14 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:50.249 22:37:14 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:50.249 22:37:14 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:50.249 22:37:14 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:50.249 22:37:14 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:50.249 22:37:14 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:50.249 22:37:14 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:50.249 22:37:14 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:50.249 22:37:14 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@147 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:20:50.249 22:37:14 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:20:50.249 22:37:14 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:50.249 22:37:14 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:20:50.249 ************************************ 00:20:50.249 START TEST nvmf_shutdown_tc1 00:20:50.249 ************************************ 00:20:50.249 22:37:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1123 -- # nvmf_shutdown_tc1 00:20:50.249 22:37:14 
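build_nvmf_app_args, traced just above, assembles the target command line that later runs inside the namespace; a sketch of the assembly:

    NVMF_APP=(/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt)
    NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)     # shm id 0, all tracepoint groups
    # ...and once nvmf_tcp_init has built the namespace:
    NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")   # ip netns exec <ns> nvmf_tgt ...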
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@74 -- # starttarget 00:20:50.249 22:37:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@15 -- # nvmftestinit 00:20:50.249 22:37:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:50.249 22:37:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:50.249 22:37:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:50.249 22:37:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:50.249 22:37:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:50.249 22:37:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:50.249 22:37:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:50.249 22:37:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:50.249 22:37:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:50.249 22:37:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:50.249 22:37:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@285 -- # xtrace_disable 00:20:50.249 22:37:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:55.525 22:37:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:55.525 22:37:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # pci_devs=() 00:20:55.525 22:37:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:55.525 22:37:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:55.525 22:37:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:55.525 22:37:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:55.525 22:37:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:55.525 22:37:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # net_devs=() 00:20:55.525 22:37:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:55.525 22:37:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # e810=() 00:20:55.525 22:37:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # local -ga e810 00:20:55.525 22:37:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # x722=() 00:20:55.525 22:37:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # local -ga x722 00:20:55.525 22:37:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # mlx=() 00:20:55.525 22:37:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # local -ga mlx 00:20:55.525 22:37:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:55.525 22:37:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:55.525 22:37:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@304 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:55.525 22:37:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:55.525 22:37:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:55.525 22:37:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:55.525 22:37:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:55.525 22:37:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:55.525 22:37:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:55.525 22:37:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:55.525 22:37:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:55.525 22:37:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:55.525 22:37:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:55.525 22:37:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:55.525 22:37:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:55.525 22:37:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:55.525 22:37:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:55.525 22:37:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:55.525 22:37:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:20:55.525 Found 0000:86:00.0 (0x8086 - 0x159b) 00:20:55.525 22:37:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:55.525 22:37:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:55.525 22:37:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:55.525 22:37:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:55.525 22:37:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:55.525 22:37:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:55.525 22:37:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:20:55.525 Found 0000:86:00.1 (0x8086 - 0x159b) 00:20:55.525 22:37:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:55.525 22:37:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:55.525 22:37:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:55.525 22:37:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:55.525 22:37:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:55.525 22:37:19 
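The scan above is driven purely by PCI device IDs: e810 collects 0x1592/0x159b, x722 collects 0x37d2, mlx the ConnectX family, and for NET_TYPE=phy with ice NICs the e810 list wins. A rough standalone equivalent via lspci (assumed approach; the harness walks its own prebuilt pci_bus_cache instead):

    # Find Intel E810 functions, then resolve each to its netdev through sysfs.
    for pci in $(lspci -Dn | awk '$3 == "8086:159b" || $3 == "8086:1592" {print $1}'); do
        pci_net_devs=(/sys/bus/pci/devices/$pci/net/*)
        echo "Found net devices under $pci: ${pci_net_devs[@]##*/}"
    done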
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:55.525 22:37:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:55.525 22:37:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:55.525 22:37:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:55.525 22:37:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:55.525 22:37:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:55.525 22:37:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:55.525 22:37:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:55.525 22:37:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:55.525 22:37:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:55.525 22:37:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:20:55.525 Found net devices under 0000:86:00.0: cvl_0_0 00:20:55.525 22:37:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:55.525 22:37:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:55.525 22:37:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:55.525 22:37:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:55.525 22:37:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:55.525 22:37:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:55.525 22:37:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:55.525 22:37:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:55.525 22:37:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:20:55.525 Found net devices under 0000:86:00.1: cvl_0_1 00:20:55.525 22:37:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:55.525 22:37:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:55.525 22:37:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # is_hw=yes 00:20:55.526 22:37:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:55.526 22:37:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:55.526 22:37:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:55.526 22:37:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:55.526 22:37:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:55.526 22:37:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:55.526 22:37:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 
-- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:55.526 22:37:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:55.526 22:37:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:55.526 22:37:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:55.526 22:37:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:55.526 22:37:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:55.526 22:37:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:55.526 22:37:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:55.526 22:37:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:55.526 22:37:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:55.526 22:37:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:55.526 22:37:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:55.526 22:37:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:55.526 22:37:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:55.526 22:37:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:55.526 22:37:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:55.526 22:37:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:55.526 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:55.526 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.155 ms 00:20:55.526 00:20:55.526 --- 10.0.0.2 ping statistics --- 00:20:55.526 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:55.526 rtt min/avg/max/mdev = 0.155/0.155/0.155/0.000 ms 00:20:55.526 22:37:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:55.526 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:55.526 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.266 ms 00:20:55.526 00:20:55.526 --- 10.0.0.1 ping statistics --- 00:20:55.526 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:55.526 rtt min/avg/max/mdev = 0.266/0.266/0.266/0.000 ms 00:20:55.526 22:37:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:55.526 22:37:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # return 0 00:20:55.526 22:37:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:55.526 22:37:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:55.526 22:37:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:55.526 22:37:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:55.526 22:37:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:55.526 22:37:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:55.526 22:37:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:55.526 22:37:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:20:55.526 22:37:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:55.526 22:37:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:55.526 22:37:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:55.526 22:37:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@481 -- # nvmfpid=62994 00:20:55.526 22:37:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # waitforlisten 62994 00:20:55.526 22:37:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:20:55.526 22:37:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@829 -- # '[' -z 62994 ']' 00:20:55.526 22:37:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:55.526 22:37:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:55.526 22:37:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:55.526 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:55.526 22:37:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:55.526 22:37:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:55.526 [2024-07-15 22:37:19.394621] Starting SPDK v24.09-pre git sha1 f8598a71f / DPDK 24.03.0 initialization... 
00:20:55.526 [2024-07-15 22:37:19.394665] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:55.526 EAL: No free 2048 kB hugepages reported on node 1 00:20:55.526 [2024-07-15 22:37:19.450914] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:55.785 [2024-07-15 22:37:19.531285] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:55.785 [2024-07-15 22:37:19.531322] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:55.785 [2024-07-15 22:37:19.531329] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:55.785 [2024-07-15 22:37:19.531335] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:55.785 [2024-07-15 22:37:19.531340] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:55.785 [2024-07-15 22:37:19.531438] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:55.785 [2024-07-15 22:37:19.531523] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:20:55.785 [2024-07-15 22:37:19.531636] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:55.785 [2024-07-15 22:37:19.531637] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:20:56.353 22:37:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:56.353 22:37:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@862 -- # return 0 00:20:56.353 22:37:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:56.353 22:37:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:56.353 22:37:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:56.353 22:37:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:56.353 22:37:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:56.353 22:37:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:56.353 22:37:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:56.353 [2024-07-15 22:37:20.250335] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:56.353 22:37:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:56.353 22:37:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:20:56.353 22:37:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:20:56.353 22:37:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:56.353 22:37:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:56.353 22:37:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:56.353 22:37:20 
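This second target uses -m 0x1E rather than 0xF: 0x1E is binary 11110, so the four reactors land on cores 1-4 and core 0 is left free, exactly as the reactor_run notices above report. The launch, condensed from the trace:

    # 0x1E = 0b11110 -> reactors on cores 1,2,3,4
    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF -m 0x1E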
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:56.353 22:37:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:20:56.353 22:37:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:56.353 22:37:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:20:56.353 22:37:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:56.353 22:37:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:20:56.353 22:37:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:56.353 22:37:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:20:56.353 22:37:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:56.353 22:37:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:20:56.353 22:37:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:56.353 22:37:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:20:56.353 22:37:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:56.353 22:37:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:20:56.353 22:37:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:56.353 22:37:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:20:56.353 22:37:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:56.353 22:37:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:20:56.353 22:37:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:56.353 22:37:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:20:56.353 22:37:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@35 -- # rpc_cmd 00:20:56.353 22:37:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:56.353 22:37:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:56.612 Malloc1 00:20:56.612 [2024-07-15 22:37:20.346539] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:56.612 Malloc2 00:20:56.612 Malloc3 00:20:56.612 Malloc4 00:20:56.612 Malloc5 00:20:56.612 Malloc6 00:20:56.612 Malloc7 00:20:56.874 Malloc8 00:20:56.874 Malloc9 00:20:56.874 Malloc10 00:20:56.874 22:37:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:56.874 22:37:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:20:56.874 22:37:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:56.874 22:37:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:56.874 22:37:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # perfpid=63280 00:20:56.874 22:37:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # waitforlisten 63280 
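The ten cat calls above batch one block of RPC lines per subsystem into rpcs.txt, which a single rpc_cmd then replays; one rpc.py session beats dozens of separate invocations. The heredoc bodies live in target/shutdown.sh and are not echoed in the trace, but judging by the Malloc1..Malloc10 output each block presumably amounts to:

    for i in "${num_subsystems[@]}"; do
        {
            echo "bdev_malloc_create $MALLOC_BDEV_SIZE $MALLOC_BLOCK_SIZE -b Malloc$i"
            echo "nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i"   # serial format assumed
            echo "nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i"
            echo "nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a $NVMF_FIRST_TARGET_IP -s $NVMF_PORT"
        } >> "$testdir/rpcs.txt"
    done
    rpc_cmd < "$testdir/rpcs.txt"     # one session replays the whole file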
/var/tmp/bdevperf.sock 00:20:56.874 22:37:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@829 -- # '[' -z 63280 ']' 00:20:56.874 22:37:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:56.874 22:37:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:20:56.874 22:37:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:56.874 22:37:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:20:56.874 22:37:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:56.874 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:56.874 22:37:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:20:56.874 22:37:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:56.874 22:37:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:20:56.874 22:37:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:56.874 22:37:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:56.874 22:37:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:56.874 { 00:20:56.874 "params": { 00:20:56.874 "name": "Nvme$subsystem", 00:20:56.874 "trtype": "$TEST_TRANSPORT", 00:20:56.874 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:56.874 "adrfam": "ipv4", 00:20:56.874 "trsvcid": "$NVMF_PORT", 00:20:56.874 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:56.874 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:56.874 "hdgst": ${hdgst:-false}, 00:20:56.874 "ddgst": ${ddgst:-false} 00:20:56.874 }, 00:20:56.874 "method": "bdev_nvme_attach_controller" 00:20:56.874 } 00:20:56.874 EOF 00:20:56.874 )") 00:20:56.874 22:37:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:56.874 22:37:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:56.874 22:37:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:56.874 { 00:20:56.874 "params": { 00:20:56.874 "name": "Nvme$subsystem", 00:20:56.874 "trtype": "$TEST_TRANSPORT", 00:20:56.874 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:56.874 "adrfam": "ipv4", 00:20:56.874 "trsvcid": "$NVMF_PORT", 00:20:56.875 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:56.875 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:56.875 "hdgst": ${hdgst:-false}, 00:20:56.875 "ddgst": ${ddgst:-false} 00:20:56.875 }, 00:20:56.875 "method": "bdev_nvme_attach_controller" 00:20:56.875 } 00:20:56.875 EOF 00:20:56.875 )") 00:20:56.875 22:37:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:56.875 22:37:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:56.875 22:37:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:56.875 { 00:20:56.875 "params": { 00:20:56.875 
"name": "Nvme$subsystem", 00:20:56.875 "trtype": "$TEST_TRANSPORT", 00:20:56.875 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:56.875 "adrfam": "ipv4", 00:20:56.875 "trsvcid": "$NVMF_PORT", 00:20:56.875 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:56.875 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:56.875 "hdgst": ${hdgst:-false}, 00:20:56.875 "ddgst": ${ddgst:-false} 00:20:56.875 }, 00:20:56.875 "method": "bdev_nvme_attach_controller" 00:20:56.875 } 00:20:56.875 EOF 00:20:56.875 )") 00:20:56.875 22:37:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:56.875 22:37:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:56.875 22:37:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:56.875 { 00:20:56.875 "params": { 00:20:56.875 "name": "Nvme$subsystem", 00:20:56.875 "trtype": "$TEST_TRANSPORT", 00:20:56.875 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:56.875 "adrfam": "ipv4", 00:20:56.875 "trsvcid": "$NVMF_PORT", 00:20:56.875 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:56.875 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:56.875 "hdgst": ${hdgst:-false}, 00:20:56.875 "ddgst": ${ddgst:-false} 00:20:56.875 }, 00:20:56.875 "method": "bdev_nvme_attach_controller" 00:20:56.875 } 00:20:56.875 EOF 00:20:56.875 )") 00:20:56.875 22:37:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:56.875 22:37:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:56.875 22:37:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:56.875 { 00:20:56.875 "params": { 00:20:56.875 "name": "Nvme$subsystem", 00:20:56.875 "trtype": "$TEST_TRANSPORT", 00:20:56.875 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:56.875 "adrfam": "ipv4", 00:20:56.875 "trsvcid": "$NVMF_PORT", 00:20:56.875 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:56.875 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:56.875 "hdgst": ${hdgst:-false}, 00:20:56.875 "ddgst": ${ddgst:-false} 00:20:56.875 }, 00:20:56.875 "method": "bdev_nvme_attach_controller" 00:20:56.875 } 00:20:56.875 EOF 00:20:56.875 )") 00:20:56.875 22:37:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:56.875 22:37:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:56.875 22:37:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:56.875 { 00:20:56.875 "params": { 00:20:56.875 "name": "Nvme$subsystem", 00:20:56.875 "trtype": "$TEST_TRANSPORT", 00:20:56.875 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:56.875 "adrfam": "ipv4", 00:20:56.875 "trsvcid": "$NVMF_PORT", 00:20:56.875 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:56.875 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:56.875 "hdgst": ${hdgst:-false}, 00:20:56.875 "ddgst": ${ddgst:-false} 00:20:56.875 }, 00:20:56.875 "method": "bdev_nvme_attach_controller" 00:20:56.875 } 00:20:56.875 EOF 00:20:56.875 )") 00:20:56.875 22:37:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:56.875 22:37:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:56.875 22:37:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:56.875 { 00:20:56.875 "params": { 00:20:56.875 "name": "Nvme$subsystem", 
00:20:56.875 "trtype": "$TEST_TRANSPORT", 00:20:56.875 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:56.875 "adrfam": "ipv4", 00:20:56.875 "trsvcid": "$NVMF_PORT", 00:20:56.875 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:56.875 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:56.875 "hdgst": ${hdgst:-false}, 00:20:56.875 "ddgst": ${ddgst:-false} 00:20:56.875 }, 00:20:56.875 "method": "bdev_nvme_attach_controller" 00:20:56.875 } 00:20:56.875 EOF 00:20:56.875 )") 00:20:56.875 22:37:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:56.875 [2024-07-15 22:37:20.816675] Starting SPDK v24.09-pre git sha1 f8598a71f / DPDK 24.03.0 initialization... 00:20:56.875 [2024-07-15 22:37:20.816723] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:20:56.875 22:37:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:56.875 22:37:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:56.875 { 00:20:56.875 "params": { 00:20:56.875 "name": "Nvme$subsystem", 00:20:56.875 "trtype": "$TEST_TRANSPORT", 00:20:56.875 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:56.875 "adrfam": "ipv4", 00:20:56.875 "trsvcid": "$NVMF_PORT", 00:20:56.875 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:56.875 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:56.875 "hdgst": ${hdgst:-false}, 00:20:56.875 "ddgst": ${ddgst:-false} 00:20:56.875 }, 00:20:56.875 "method": "bdev_nvme_attach_controller" 00:20:56.875 } 00:20:56.875 EOF 00:20:56.875 )") 00:20:56.875 22:37:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:56.875 22:37:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:56.875 22:37:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:56.875 { 00:20:56.875 "params": { 00:20:56.875 "name": "Nvme$subsystem", 00:20:56.875 "trtype": "$TEST_TRANSPORT", 00:20:56.875 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:56.875 "adrfam": "ipv4", 00:20:56.875 "trsvcid": "$NVMF_PORT", 00:20:56.875 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:56.875 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:56.875 "hdgst": ${hdgst:-false}, 00:20:56.875 "ddgst": ${ddgst:-false} 00:20:56.875 }, 00:20:56.875 "method": "bdev_nvme_attach_controller" 00:20:56.875 } 00:20:56.875 EOF 00:20:56.875 )") 00:20:56.875 22:37:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:56.875 22:37:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:56.875 22:37:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:56.875 { 00:20:56.875 "params": { 00:20:56.875 "name": "Nvme$subsystem", 00:20:56.875 "trtype": "$TEST_TRANSPORT", 00:20:56.875 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:56.875 "adrfam": "ipv4", 00:20:56.875 "trsvcid": "$NVMF_PORT", 00:20:56.875 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:56.875 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:56.875 "hdgst": ${hdgst:-false}, 00:20:56.875 "ddgst": ${ddgst:-false} 00:20:56.875 }, 00:20:56.875 "method": "bdev_nvme_attach_controller" 00:20:56.875 } 00:20:56.875 EOF 00:20:56.875 )") 00:20:56.875 22:37:20 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:56.875 22:37:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 00:20:56.875 EAL: No free 2048 kB hugepages reported on node 1 00:20:56.875 22:37:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:20:56.875 22:37:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:20:56.875 "params": { 00:20:56.875 "name": "Nvme1", 00:20:56.875 "trtype": "tcp", 00:20:56.875 "traddr": "10.0.0.2", 00:20:56.875 "adrfam": "ipv4", 00:20:56.875 "trsvcid": "4420", 00:20:56.875 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:56.875 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:56.875 "hdgst": false, 00:20:56.875 "ddgst": false 00:20:56.875 }, 00:20:56.875 "method": "bdev_nvme_attach_controller" 00:20:56.875 },{ 00:20:56.875 "params": { 00:20:56.875 "name": "Nvme2", 00:20:56.875 "trtype": "tcp", 00:20:56.875 "traddr": "10.0.0.2", 00:20:56.875 "adrfam": "ipv4", 00:20:56.875 "trsvcid": "4420", 00:20:56.875 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:56.875 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:20:56.875 "hdgst": false, 00:20:56.875 "ddgst": false 00:20:56.875 }, 00:20:56.876 "method": "bdev_nvme_attach_controller" 00:20:56.876 },{ 00:20:56.876 "params": { 00:20:56.876 "name": "Nvme3", 00:20:56.876 "trtype": "tcp", 00:20:56.876 "traddr": "10.0.0.2", 00:20:56.876 "adrfam": "ipv4", 00:20:56.876 "trsvcid": "4420", 00:20:56.876 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:20:56.876 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:20:56.876 "hdgst": false, 00:20:56.876 "ddgst": false 00:20:56.876 }, 00:20:56.876 "method": "bdev_nvme_attach_controller" 00:20:56.876 },{ 00:20:56.876 "params": { 00:20:56.876 "name": "Nvme4", 00:20:56.876 "trtype": "tcp", 00:20:56.876 "traddr": "10.0.0.2", 00:20:56.876 "adrfam": "ipv4", 00:20:56.876 "trsvcid": "4420", 00:20:56.876 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:20:56.876 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:20:56.876 "hdgst": false, 00:20:56.876 "ddgst": false 00:20:56.876 }, 00:20:56.876 "method": "bdev_nvme_attach_controller" 00:20:56.876 },{ 00:20:56.876 "params": { 00:20:56.876 "name": "Nvme5", 00:20:56.876 "trtype": "tcp", 00:20:56.876 "traddr": "10.0.0.2", 00:20:56.876 "adrfam": "ipv4", 00:20:56.876 "trsvcid": "4420", 00:20:56.876 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:20:56.876 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:20:56.876 "hdgst": false, 00:20:56.876 "ddgst": false 00:20:56.876 }, 00:20:56.876 "method": "bdev_nvme_attach_controller" 00:20:56.876 },{ 00:20:56.876 "params": { 00:20:56.876 "name": "Nvme6", 00:20:56.876 "trtype": "tcp", 00:20:56.876 "traddr": "10.0.0.2", 00:20:56.876 "adrfam": "ipv4", 00:20:56.876 "trsvcid": "4420", 00:20:56.876 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:20:56.876 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:20:56.876 "hdgst": false, 00:20:56.876 "ddgst": false 00:20:56.876 }, 00:20:56.876 "method": "bdev_nvme_attach_controller" 00:20:56.876 },{ 00:20:56.876 "params": { 00:20:56.876 "name": "Nvme7", 00:20:56.876 "trtype": "tcp", 00:20:56.876 "traddr": "10.0.0.2", 00:20:56.876 "adrfam": "ipv4", 00:20:56.876 "trsvcid": "4420", 00:20:56.876 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:20:56.876 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:20:56.876 "hdgst": false, 00:20:56.876 "ddgst": false 00:20:56.876 }, 00:20:56.876 "method": "bdev_nvme_attach_controller" 00:20:56.876 },{ 00:20:56.876 "params": { 00:20:56.876 "name": "Nvme8", 00:20:56.876 "trtype": "tcp", 00:20:56.876 
"traddr": "10.0.0.2", 00:20:56.876 "adrfam": "ipv4", 00:20:56.876 "trsvcid": "4420", 00:20:56.876 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:20:56.876 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:20:56.876 "hdgst": false, 00:20:56.876 "ddgst": false 00:20:56.876 }, 00:20:56.876 "method": "bdev_nvme_attach_controller" 00:20:56.876 },{ 00:20:56.876 "params": { 00:20:56.876 "name": "Nvme9", 00:20:56.876 "trtype": "tcp", 00:20:56.876 "traddr": "10.0.0.2", 00:20:56.876 "adrfam": "ipv4", 00:20:56.876 "trsvcid": "4420", 00:20:56.876 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:20:56.876 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:20:56.876 "hdgst": false, 00:20:56.876 "ddgst": false 00:20:56.876 }, 00:20:56.876 "method": "bdev_nvme_attach_controller" 00:20:56.876 },{ 00:20:56.876 "params": { 00:20:56.876 "name": "Nvme10", 00:20:56.876 "trtype": "tcp", 00:20:56.876 "traddr": "10.0.0.2", 00:20:56.876 "adrfam": "ipv4", 00:20:56.876 "trsvcid": "4420", 00:20:56.876 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:20:56.876 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:20:56.876 "hdgst": false, 00:20:56.876 "ddgst": false 00:20:56.876 }, 00:20:56.876 "method": "bdev_nvme_attach_controller" 00:20:56.876 }' 00:20:57.137 [2024-07-15 22:37:20.871967] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:57.137 [2024-07-15 22:37:20.945332] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:58.554 22:37:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:58.554 22:37:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@862 -- # return 0 00:20:58.554 22:37:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:20:58.554 22:37:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:58.554 22:37:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:58.554 22:37:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:58.554 22:37:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@83 -- # kill -9 63280 00:20:58.554 22:37:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # rm -f /var/run/spdk_bdev1 00:20:58.554 22:37:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@87 -- # sleep 1 00:20:59.492 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 73: 63280 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:20:59.492 22:37:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # kill -0 62994 00:20:59.492 22:37:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:20:59.492 22:37:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:20:59.492 22:37:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:20:59.492 22:37:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:20:59.492 22:37:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:59.492 22:37:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:59.492 { 00:20:59.492 "params": { 00:20:59.492 "name": "Nvme$subsystem", 00:20:59.492 "trtype": "$TEST_TRANSPORT", 00:20:59.492 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:59.492 "adrfam": "ipv4", 00:20:59.492 "trsvcid": "$NVMF_PORT", 00:20:59.492 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:59.492 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:59.492 "hdgst": ${hdgst:-false}, 00:20:59.492 "ddgst": ${ddgst:-false} 00:20:59.492 }, 00:20:59.492 "method": "bdev_nvme_attach_controller" 00:20:59.492 } 00:20:59.492 EOF 00:20:59.492 )") 00:20:59.492 22:37:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:59.492 22:37:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:59.492 22:37:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:59.492 { 00:20:59.492 "params": { 00:20:59.492 "name": "Nvme$subsystem", 00:20:59.492 "trtype": "$TEST_TRANSPORT", 00:20:59.492 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:59.492 "adrfam": "ipv4", 00:20:59.492 "trsvcid": "$NVMF_PORT", 00:20:59.492 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:59.492 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:59.492 "hdgst": ${hdgst:-false}, 00:20:59.492 "ddgst": ${ddgst:-false} 00:20:59.492 }, 00:20:59.492 "method": "bdev_nvme_attach_controller" 00:20:59.492 } 00:20:59.492 EOF 00:20:59.492 )") 00:20:59.492 22:37:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:59.492 22:37:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:59.492 22:37:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:59.492 { 00:20:59.492 "params": { 00:20:59.492 "name": "Nvme$subsystem", 00:20:59.492 "trtype": "$TEST_TRANSPORT", 00:20:59.492 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:59.492 "adrfam": "ipv4", 00:20:59.492 "trsvcid": "$NVMF_PORT", 00:20:59.492 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:59.492 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:59.492 "hdgst": ${hdgst:-false}, 00:20:59.492 "ddgst": ${ddgst:-false} 00:20:59.492 }, 00:20:59.492 "method": "bdev_nvme_attach_controller" 00:20:59.492 } 00:20:59.492 EOF 00:20:59.492 )") 00:20:59.492 22:37:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:59.492 22:37:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:59.492 22:37:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:59.492 { 00:20:59.492 "params": { 00:20:59.492 "name": "Nvme$subsystem", 00:20:59.492 "trtype": "$TEST_TRANSPORT", 00:20:59.492 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:59.492 "adrfam": "ipv4", 00:20:59.492 "trsvcid": "$NVMF_PORT", 00:20:59.492 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:59.492 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:59.492 "hdgst": ${hdgst:-false}, 00:20:59.492 "ddgst": ${ddgst:-false} 00:20:59.492 }, 00:20:59.492 "method": "bdev_nvme_attach_controller" 00:20:59.492 } 00:20:59.492 EOF 00:20:59.492 )") 00:20:59.492 22:37:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:59.492 22:37:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:59.492 22:37:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # 
config+=("$(cat <<-EOF 00:20:59.492 { 00:20:59.492 "params": { 00:20:59.492 "name": "Nvme$subsystem", 00:20:59.492 "trtype": "$TEST_TRANSPORT", 00:20:59.492 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:59.492 "adrfam": "ipv4", 00:20:59.492 "trsvcid": "$NVMF_PORT", 00:20:59.492 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:59.492 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:59.492 "hdgst": ${hdgst:-false}, 00:20:59.492 "ddgst": ${ddgst:-false} 00:20:59.492 }, 00:20:59.492 "method": "bdev_nvme_attach_controller" 00:20:59.492 } 00:20:59.492 EOF 00:20:59.492 )") 00:20:59.492 22:37:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:59.492 22:37:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:59.492 22:37:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:59.492 { 00:20:59.492 "params": { 00:20:59.492 "name": "Nvme$subsystem", 00:20:59.492 "trtype": "$TEST_TRANSPORT", 00:20:59.492 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:59.492 "adrfam": "ipv4", 00:20:59.492 "trsvcid": "$NVMF_PORT", 00:20:59.492 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:59.492 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:59.492 "hdgst": ${hdgst:-false}, 00:20:59.492 "ddgst": ${ddgst:-false} 00:20:59.492 }, 00:20:59.492 "method": "bdev_nvme_attach_controller" 00:20:59.492 } 00:20:59.492 EOF 00:20:59.492 )") 00:20:59.492 22:37:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:59.492 22:37:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:59.492 22:37:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:59.492 { 00:20:59.492 "params": { 00:20:59.492 "name": "Nvme$subsystem", 00:20:59.492 "trtype": "$TEST_TRANSPORT", 00:20:59.492 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:59.492 "adrfam": "ipv4", 00:20:59.492 "trsvcid": "$NVMF_PORT", 00:20:59.492 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:59.492 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:59.492 "hdgst": ${hdgst:-false}, 00:20:59.492 "ddgst": ${ddgst:-false} 00:20:59.492 }, 00:20:59.492 "method": "bdev_nvme_attach_controller" 00:20:59.492 } 00:20:59.492 EOF 00:20:59.492 )") 00:20:59.492 22:37:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:59.492 [2024-07-15 22:37:23.233349] Starting SPDK v24.09-pre git sha1 f8598a71f / DPDK 24.03.0 initialization... 
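The xtrace records interleaved above are nvmf/common.sh's gen_nvmf_target_json() building one bdev_nvme_attach_controller stanza per requested subsystem. A minimal self-contained sketch of that pattern (TEST_TRANSPORT, NVMF_FIRST_TARGET_IP and NVMF_PORT are exported by the test harness; the fallback values and the exact envelope handed to jq are illustrative assumptions here, the real wrapping lives in test/nvmf/common.sh):

#!/usr/bin/env bash
gen_nvmf_target_json() {
    local subsystem config=()
    for subsystem in "${@:-1}"; do
        # One JSON stanza per subsystem number; the real helper uses a
        # tab-indented <<-EOF heredoc, flattened to column 0 here.
        config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "${TEST_TRANSPORT:-tcp}",
    "traddr": "${NVMF_FIRST_TARGET_IP:-10.0.0.2}",
    "adrfam": "ipv4",
    "trsvcid": "${NVMF_PORT:-4420}",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
        )")
    done
    # Join the stanzas with commas and let jq validate/pretty-print them,
    # matching the IFS=, / printf '%s\n' / jq . steps in the trace.
    local IFS=,
    printf '{"subsystems":[{"subsystem":"bdev","config":[%s]}]}\n' "${config[*]}" | jq .
}

gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10

The fully resolved result of this expansion is what appears in the printf '%s\n' records for Nvme1 through Nvme10 elsewhere in this trace.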
00:20:59.492 [2024-07-15 22:37:23.233396] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63749 ] 00:20:59.492 22:37:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:59.492 22:37:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:59.492 { 00:20:59.492 "params": { 00:20:59.492 "name": "Nvme$subsystem", 00:20:59.492 "trtype": "$TEST_TRANSPORT", 00:20:59.493 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:59.493 "adrfam": "ipv4", 00:20:59.493 "trsvcid": "$NVMF_PORT", 00:20:59.493 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:59.493 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:59.493 "hdgst": ${hdgst:-false}, 00:20:59.493 "ddgst": ${ddgst:-false} 00:20:59.493 }, 00:20:59.493 "method": "bdev_nvme_attach_controller" 00:20:59.493 } 00:20:59.493 EOF 00:20:59.493 )") 00:20:59.493 22:37:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:59.493 22:37:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:59.493 22:37:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:59.493 { 00:20:59.493 "params": { 00:20:59.493 "name": "Nvme$subsystem", 00:20:59.493 "trtype": "$TEST_TRANSPORT", 00:20:59.493 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:59.493 "adrfam": "ipv4", 00:20:59.493 "trsvcid": "$NVMF_PORT", 00:20:59.493 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:59.493 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:59.493 "hdgst": ${hdgst:-false}, 00:20:59.493 "ddgst": ${ddgst:-false} 00:20:59.493 }, 00:20:59.493 "method": "bdev_nvme_attach_controller" 00:20:59.493 } 00:20:59.493 EOF 00:20:59.493 )") 00:20:59.493 22:37:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:59.493 22:37:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:59.493 22:37:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:59.493 { 00:20:59.493 "params": { 00:20:59.493 "name": "Nvme$subsystem", 00:20:59.493 "trtype": "$TEST_TRANSPORT", 00:20:59.493 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:59.493 "adrfam": "ipv4", 00:20:59.493 "trsvcid": "$NVMF_PORT", 00:20:59.493 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:59.493 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:59.493 "hdgst": ${hdgst:-false}, 00:20:59.493 "ddgst": ${ddgst:-false} 00:20:59.493 }, 00:20:59.493 "method": "bdev_nvme_attach_controller" 00:20:59.493 } 00:20:59.493 EOF 00:20:59.493 )") 00:20:59.493 22:37:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:59.493 22:37:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 
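This second JSON is for the bdevperf instance (spdk_pid63749) that re-attaches after the kill; shutdown_tc1's flow so far, traced at 22:37:22-22:37:23 above, in outline (the pid variables stand in for 63280 and 62994 from this particular run):

# First initiator: bdev_svc attached to all ten subsystems via --json.
"$rootdir/test/app/bdev_svc/bdev_svc" -m 0x1 -i 1 -r /var/tmp/bdevperf.sock \
    --json <(gen_nvmf_target_json "${num_subsystems[@]}") &
bdev_svc_pid=$!
rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init   # block until it is fully up
kill -9 "$bdev_svc_pid"                                 # SIGKILL: no graceful detach
rm -f /var/run/spdk_bdev1
sleep 1
kill -0 "$nvmf_target_pid"                              # the target must have survived
# Second initiator: fresh hosts must still be able to attach and do I/O.
"$rootdir/build/examples/bdevperf" \
    --json <(gen_nvmf_target_json "${num_subsystems[@]}") -q 64 -o 65536 -w verify -t 1

The /dev/fd/62 seen in the bdevperf command line above is simply the file descriptor bash picked for the <(gen_nvmf_target_json ...) process substitution.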
00:20:59.493 EAL: No free 2048 kB hugepages reported on node 1 00:20:59.493 22:37:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:20:59.493 22:37:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:20:59.493 "params": { 00:20:59.493 "name": "Nvme1", 00:20:59.493 "trtype": "tcp", 00:20:59.493 "traddr": "10.0.0.2", 00:20:59.493 "adrfam": "ipv4", 00:20:59.493 "trsvcid": "4420", 00:20:59.493 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:59.493 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:59.493 "hdgst": false, 00:20:59.493 "ddgst": false 00:20:59.493 }, 00:20:59.493 "method": "bdev_nvme_attach_controller" 00:20:59.493 },{ 00:20:59.493 "params": { 00:20:59.493 "name": "Nvme2", 00:20:59.493 "trtype": "tcp", 00:20:59.493 "traddr": "10.0.0.2", 00:20:59.493 "adrfam": "ipv4", 00:20:59.493 "trsvcid": "4420", 00:20:59.493 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:59.493 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:20:59.493 "hdgst": false, 00:20:59.493 "ddgst": false 00:20:59.493 }, 00:20:59.493 "method": "bdev_nvme_attach_controller" 00:20:59.493 },{ 00:20:59.493 "params": { 00:20:59.493 "name": "Nvme3", 00:20:59.493 "trtype": "tcp", 00:20:59.493 "traddr": "10.0.0.2", 00:20:59.493 "adrfam": "ipv4", 00:20:59.493 "trsvcid": "4420", 00:20:59.493 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:20:59.493 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:20:59.493 "hdgst": false, 00:20:59.493 "ddgst": false 00:20:59.493 }, 00:20:59.493 "method": "bdev_nvme_attach_controller" 00:20:59.493 },{ 00:20:59.493 "params": { 00:20:59.493 "name": "Nvme4", 00:20:59.493 "trtype": "tcp", 00:20:59.493 "traddr": "10.0.0.2", 00:20:59.493 "adrfam": "ipv4", 00:20:59.493 "trsvcid": "4420", 00:20:59.493 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:20:59.493 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:20:59.493 "hdgst": false, 00:20:59.493 "ddgst": false 00:20:59.493 }, 00:20:59.493 "method": "bdev_nvme_attach_controller" 00:20:59.493 },{ 00:20:59.493 "params": { 00:20:59.493 "name": "Nvme5", 00:20:59.493 "trtype": "tcp", 00:20:59.493 "traddr": "10.0.0.2", 00:20:59.493 "adrfam": "ipv4", 00:20:59.493 "trsvcid": "4420", 00:20:59.493 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:20:59.493 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:20:59.493 "hdgst": false, 00:20:59.493 "ddgst": false 00:20:59.493 }, 00:20:59.493 "method": "bdev_nvme_attach_controller" 00:20:59.493 },{ 00:20:59.493 "params": { 00:20:59.493 "name": "Nvme6", 00:20:59.493 "trtype": "tcp", 00:20:59.493 "traddr": "10.0.0.2", 00:20:59.493 "adrfam": "ipv4", 00:20:59.493 "trsvcid": "4420", 00:20:59.493 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:20:59.493 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:20:59.493 "hdgst": false, 00:20:59.493 "ddgst": false 00:20:59.493 }, 00:20:59.493 "method": "bdev_nvme_attach_controller" 00:20:59.493 },{ 00:20:59.493 "params": { 00:20:59.493 "name": "Nvme7", 00:20:59.493 "trtype": "tcp", 00:20:59.493 "traddr": "10.0.0.2", 00:20:59.493 "adrfam": "ipv4", 00:20:59.493 "trsvcid": "4420", 00:20:59.493 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:20:59.493 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:20:59.493 "hdgst": false, 00:20:59.493 "ddgst": false 00:20:59.493 }, 00:20:59.493 "method": "bdev_nvme_attach_controller" 00:20:59.493 },{ 00:20:59.493 "params": { 00:20:59.493 "name": "Nvme8", 00:20:59.493 "trtype": "tcp", 00:20:59.493 "traddr": "10.0.0.2", 00:20:59.493 "adrfam": "ipv4", 00:20:59.493 "trsvcid": "4420", 00:20:59.493 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:20:59.493 "hostnqn": 
"nqn.2016-06.io.spdk:host8", 00:20:59.493 "hdgst": false, 00:20:59.493 "ddgst": false 00:20:59.493 }, 00:20:59.493 "method": "bdev_nvme_attach_controller" 00:20:59.493 },{ 00:20:59.493 "params": { 00:20:59.493 "name": "Nvme9", 00:20:59.493 "trtype": "tcp", 00:20:59.493 "traddr": "10.0.0.2", 00:20:59.493 "adrfam": "ipv4", 00:20:59.493 "trsvcid": "4420", 00:20:59.493 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:20:59.493 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:20:59.493 "hdgst": false, 00:20:59.493 "ddgst": false 00:20:59.493 }, 00:20:59.493 "method": "bdev_nvme_attach_controller" 00:20:59.493 },{ 00:20:59.493 "params": { 00:20:59.493 "name": "Nvme10", 00:20:59.493 "trtype": "tcp", 00:20:59.493 "traddr": "10.0.0.2", 00:20:59.493 "adrfam": "ipv4", 00:20:59.493 "trsvcid": "4420", 00:20:59.493 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:20:59.493 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:20:59.493 "hdgst": false, 00:20:59.493 "ddgst": false 00:20:59.493 }, 00:20:59.493 "method": "bdev_nvme_attach_controller" 00:20:59.493 }' 00:20:59.493 [2024-07-15 22:37:23.287564] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:59.493 [2024-07-15 22:37:23.361298] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:00.871 Running I/O for 1 seconds... 00:21:02.249 00:21:02.249 Latency(us) 00:21:02.249 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:02.249 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:02.249 Verification LBA range: start 0x0 length 0x400 00:21:02.249 Nvme1n1 : 1.01 254.37 15.90 0.00 0.00 249201.53 18919.96 216097.84 00:21:02.249 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:02.249 Verification LBA range: start 0x0 length 0x400 00:21:02.249 Nvme2n1 : 1.14 281.56 17.60 0.00 0.00 222111.83 16298.52 217009.64 00:21:02.249 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:02.249 Verification LBA range: start 0x0 length 0x400 00:21:02.249 Nvme3n1 : 1.10 295.53 18.47 0.00 0.00 207734.52 4701.50 215186.03 00:21:02.249 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:02.249 Verification LBA range: start 0x0 length 0x400 00:21:02.249 Nvme4n1 : 1.12 284.56 17.79 0.00 0.00 213246.40 13563.10 216097.84 00:21:02.249 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:02.249 Verification LBA range: start 0x0 length 0x400 00:21:02.249 Nvme5n1 : 1.14 279.65 17.48 0.00 0.00 214264.79 18236.10 212450.62 00:21:02.249 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:02.249 Verification LBA range: start 0x0 length 0x400 00:21:02.249 Nvme6n1 : 1.14 280.27 17.52 0.00 0.00 210542.24 17096.35 212450.62 00:21:02.249 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:02.249 Verification LBA range: start 0x0 length 0x400 00:21:02.249 Nvme7n1 : 1.13 283.02 17.69 0.00 0.00 205240.23 18578.03 219745.06 00:21:02.249 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:02.249 Verification LBA range: start 0x0 length 0x400 00:21:02.249 Nvme8n1 : 1.15 278.83 17.43 0.00 0.00 205286.18 19603.81 218833.25 00:21:02.249 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:02.249 Verification LBA range: start 0x0 length 0x400 00:21:02.249 Nvme9n1 : 1.16 276.53 17.28 0.00 0.00 204178.30 16640.45 238892.97 00:21:02.249 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:02.249 
Verification LBA range: start 0x0 length 0x400
00:21:02.249 Nvme10n1 : 1.15 278.35 17.40 0.00 0.00 199376.01 15500.69 235245.75
00:21:02.249 ===================================================================================================================
00:21:02.249 Total : 2792.65 174.54 0.00 0.00 212375.89 4701.50 238892.97
00:21:02.249 22:37:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@94 -- # stoptarget
00:21:02.249 22:37:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state
00:21:02.249 22:37:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:21:02.249 22:37:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:21:02.249 22:37:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@45 -- # nvmftestfini
00:21:02.508 22:37:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@488 -- # nvmfcleanup
00:21:02.508 22:37:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # sync
00:21:02.508 22:37:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:21:02.508 22:37:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@120 -- # set +e
00:21:02.508 22:37:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # for i in {1..20}
00:21:02.508 22:37:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:21:02.509 rmmod nvme_tcp
00:21:02.509 rmmod nvme_fabrics
00:21:02.509 rmmod nvme_keyring
00:21:02.509 22:37:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:21:02.509 22:37:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set -e
00:21:02.509 22:37:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # return 0
00:21:02.509 22:37:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@489 -- # '[' -n 62994 ']'
00:21:02.509 22:37:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@490 -- # killprocess 62994
00:21:02.509 22:37:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@948 -- # '[' -z 62994 ']'
00:21:02.509 22:37:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@952 -- # kill -0 62994
00:21:02.509 22:37:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@953 -- # uname
00:21:02.509 22:37:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:21:02.509 22:37:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 62994
00:21:02.509 22:37:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:21:02.509 22:37:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:21:02.509 22:37:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 62994'
00:21:02.509 killing process with pid 62994
00:21:02.509 22:37:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@967 -- # kill 62994
00:21:02.509 22:37:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@972 -- # wait 62994
00:21:02.768 22:37:26
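For readability, the per-device results above collated into one table (runtime in seconds, Average/min/max latency in microseconds; every job ran with queue depth 64 and 64 KiB I/Os on core mask 0x1):

Device      runtime(s)    IOPS   MiB/s  Fail/s  TO/s    Average        min        max
Nvme1n1        1.01     254.37   15.90    0.00  0.00  249201.53   18919.96  216097.84
Nvme2n1        1.14     281.56   17.60    0.00  0.00  222111.83   16298.52  217009.64
Nvme3n1        1.10     295.53   18.47    0.00  0.00  207734.52    4701.50  215186.03
Nvme4n1        1.12     284.56   17.79    0.00  0.00  213246.40   13563.10  216097.84
Nvme5n1        1.14     279.65   17.48    0.00  0.00  214264.79   18236.10  212450.62
Nvme6n1        1.14     280.27   17.52    0.00  0.00  210542.24   17096.35  212450.62
Nvme7n1        1.13     283.02   17.69    0.00  0.00  205240.23   18578.03  219745.06
Nvme8n1        1.15     278.83   17.43    0.00  0.00  205286.18   19603.81  218833.25
Nvme9n1        1.16     276.53   17.28    0.00  0.00  204178.30   16640.45  238892.97
Nvme10n1       1.15     278.35   17.40    0.00  0.00  199376.01   15500.69  235245.75
Total                  2792.65  174.54    0.00  0.00  212375.89    4701.50  238892.97

The aggregate MiB/s is consistent with the -o 65536 I/O size: 2792.65 IOPS x 65536 B = ~174.5 MiB/s.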
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:02.768 22:37:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:02.768 22:37:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:02.768 22:37:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:02.768 22:37:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:02.768 22:37:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:02.768 22:37:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:02.768 22:37:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:05.305 22:37:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:05.306 00:21:05.306 real 0m14.630s 00:21:05.306 user 0m33.981s 00:21:05.306 sys 0m5.131s 00:21:05.306 22:37:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:05.306 22:37:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:05.306 ************************************ 00:21:05.306 END TEST nvmf_shutdown_tc1 00:21:05.306 ************************************ 00:21:05.306 22:37:28 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1142 -- # return 0 00:21:05.306 22:37:28 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@148 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:21:05.306 22:37:28 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:21:05.306 22:37:28 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:05.306 22:37:28 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:21:05.306 ************************************ 00:21:05.306 START TEST nvmf_shutdown_tc2 00:21:05.306 ************************************ 00:21:05.306 22:37:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1123 -- # nvmf_shutdown_tc2 00:21:05.306 22:37:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@99 -- # starttarget 00:21:05.306 22:37:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@15 -- # nvmftestinit 00:21:05.306 22:37:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:05.306 22:37:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:05.306 22:37:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:05.306 22:37:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:05.306 22:37:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:05.306 22:37:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:05.306 22:37:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:05.306 22:37:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:05.306 22:37:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:05.306 22:37:28 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:05.306 22:37:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@285 -- # xtrace_disable 00:21:05.306 22:37:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:05.306 22:37:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:05.306 22:37:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # pci_devs=() 00:21:05.306 22:37:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:05.306 22:37:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:05.306 22:37:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:05.306 22:37:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:05.306 22:37:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:05.306 22:37:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # net_devs=() 00:21:05.306 22:37:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:05.306 22:37:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # e810=() 00:21:05.306 22:37:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # local -ga e810 00:21:05.306 22:37:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # x722=() 00:21:05.306 22:37:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # local -ga x722 00:21:05.306 22:37:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # mlx=() 00:21:05.306 22:37:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # local -ga mlx 00:21:05.306 22:37:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:05.306 22:37:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:05.306 22:37:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:05.306 22:37:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:05.306 22:37:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:05.306 22:37:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:05.306 22:37:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:05.306 22:37:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:05.306 22:37:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:05.306 22:37:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:05.306 22:37:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:05.306 22:37:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:05.306 22:37:28 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:05.306 22:37:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:05.306 22:37:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:05.306 22:37:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:05.306 22:37:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:05.306 22:37:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:05.306 22:37:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:21:05.306 Found 0000:86:00.0 (0x8086 - 0x159b) 00:21:05.306 22:37:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:05.306 22:37:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:05.306 22:37:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:05.306 22:37:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:05.306 22:37:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:05.306 22:37:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:05.306 22:37:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:21:05.306 Found 0000:86:00.1 (0x8086 - 0x159b) 00:21:05.306 22:37:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:05.306 22:37:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:05.306 22:37:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:05.306 22:37:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:05.306 22:37:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:05.306 22:37:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:05.306 22:37:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:05.306 22:37:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:05.306 22:37:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:05.306 22:37:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:05.306 22:37:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:05.306 22:37:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:05.306 22:37:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:05.306 22:37:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:05.306 22:37:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:05.306 22:37:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 
00:21:05.306 Found net devices under 0000:86:00.0: cvl_0_0 00:21:05.306 22:37:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:05.306 22:37:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:05.306 22:37:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:05.306 22:37:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:05.306 22:37:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:05.306 22:37:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:05.306 22:37:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:05.306 22:37:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:05.306 22:37:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:21:05.306 Found net devices under 0000:86:00.1: cvl_0_1 00:21:05.306 22:37:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:05.306 22:37:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:05.306 22:37:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # is_hw=yes 00:21:05.306 22:37:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:05.306 22:37:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:05.306 22:37:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:05.306 22:37:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:05.306 22:37:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:05.306 22:37:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:05.306 22:37:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:05.306 22:37:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:05.306 22:37:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:05.306 22:37:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:05.306 22:37:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:05.306 22:37:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:05.306 22:37:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:05.306 22:37:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:05.306 22:37:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:05.306 22:37:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:05.306 22:37:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 
dev cvl_0_1 00:21:05.306 22:37:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:05.306 22:37:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:05.306 22:37:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:05.306 22:37:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:05.306 22:37:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:05.306 22:37:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:05.306 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:05.306 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.169 ms 00:21:05.306 00:21:05.306 --- 10.0.0.2 ping statistics --- 00:21:05.306 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:05.306 rtt min/avg/max/mdev = 0.169/0.169/0.169/0.000 ms 00:21:05.306 22:37:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:05.306 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:05.306 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.267 ms 00:21:05.306 00:21:05.306 --- 10.0.0.1 ping statistics --- 00:21:05.306 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:05.306 rtt min/avg/max/mdev = 0.267/0.267/0.267/0.000 ms 00:21:05.306 22:37:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:05.306 22:37:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # return 0 00:21:05.307 22:37:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:05.307 22:37:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:05.307 22:37:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:05.307 22:37:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:05.307 22:37:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:05.307 22:37:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:05.307 22:37:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:05.307 22:37:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:21:05.307 22:37:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:05.307 22:37:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:05.307 22:37:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:05.307 22:37:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@481 -- # nvmfpid=64786 00:21:05.307 22:37:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # waitforlisten 64786 00:21:05.307 22:37:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 
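nvmftestinit has now built the usual two-port loop topology: the first e810 port (cvl_0_0, the target side) is moved into a private network namespace and the second (cvl_0_1) stays in the default namespace as the initiator side. The commands, collected in order from the trace above:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # target NIC into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator address
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                 # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target -> initiator

Both pings succeed, so the nvmf target is then started inside the namespace (the doubled 'ip netns exec cvl_0_0_ns_spdk' prefix on the nvmf_tgt command is benign: re-entering the namespace the process is already in is a no-op).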
00:21:05.307 22:37:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@829 -- # '[' -z 64786 ']' 00:21:05.307 22:37:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:05.307 22:37:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:05.307 22:37:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:05.307 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:05.307 22:37:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:05.307 22:37:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:05.307 [2024-07-15 22:37:29.184168] Starting SPDK v24.09-pre git sha1 f8598a71f / DPDK 24.03.0 initialization... 00:21:05.307 [2024-07-15 22:37:29.184210] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:05.307 EAL: No free 2048 kB hugepages reported on node 1 00:21:05.307 [2024-07-15 22:37:29.242117] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:05.566 [2024-07-15 22:37:29.322580] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:05.566 [2024-07-15 22:37:29.322614] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:05.566 [2024-07-15 22:37:29.322622] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:05.566 [2024-07-15 22:37:29.322627] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:05.566 [2024-07-15 22:37:29.322633] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
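The target flags decode as follows: -i 0 selects shared-memory instance 0 (hence the 'spdk_trace -s nvmf -i 0' hint in the notice), -e 0xFFFF enables every tracepoint group, and -m 0x1E is the reactor core mask with bits 1-4 set, matching the four 'Reactor started on core N' lines that follow. A quick sanity check of the mask:

# bits 1..4 set -> cores 1,2,3,4
printf '0x%X\n' "$((1 << 1 | 1 << 2 | 1 << 3 | 1 << 4))"   # prints 0x1E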
00:21:05.566 [2024-07-15 22:37:29.322750] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:05.566 [2024-07-15 22:37:29.322840] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:21:05.566 [2024-07-15 22:37:29.322946] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:05.566 [2024-07-15 22:37:29.322947] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:21:06.134 22:37:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:06.135 22:37:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@862 -- # return 0 00:21:06.135 22:37:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:06.135 22:37:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:06.135 22:37:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:06.135 22:37:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:06.135 22:37:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:06.135 22:37:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:06.135 22:37:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:06.135 [2024-07-15 22:37:30.048123] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:06.135 22:37:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:06.135 22:37:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:21:06.135 22:37:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:21:06.135 22:37:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:06.135 22:37:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:06.135 22:37:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:06.135 22:37:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:06.135 22:37:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:21:06.135 22:37:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:06.135 22:37:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:21:06.135 22:37:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:06.135 22:37:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:21:06.135 22:37:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:06.135 22:37:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:21:06.135 22:37:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:06.135 22:37:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:21:06.135 22:37:30 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:06.135 22:37:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:21:06.135 22:37:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:06.135 22:37:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:21:06.135 22:37:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:06.135 22:37:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:21:06.135 22:37:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:06.135 22:37:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:21:06.135 22:37:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:06.135 22:37:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:21:06.135 22:37:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@35 -- # rpc_cmd 00:21:06.135 22:37:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:06.394 22:37:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:06.394 Malloc1 00:21:06.394 [2024-07-15 22:37:30.144184] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:06.394 Malloc2 00:21:06.394 Malloc3 00:21:06.394 Malloc4 00:21:06.394 Malloc5 00:21:06.394 Malloc6 00:21:06.653 Malloc7 00:21:06.653 Malloc8 00:21:06.653 Malloc9 00:21:06.653 Malloc10 00:21:06.653 22:37:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:06.653 22:37:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:21:06.653 22:37:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:06.653 22:37:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:06.653 22:37:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # perfpid=65065 00:21:06.653 22:37:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # waitforlisten 65065 /var/tmp/bdevperf.sock 00:21:06.653 22:37:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@829 -- # '[' -z 65065 ']' 00:21:06.653 22:37:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:06.653 22:37:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:21:06.653 22:37:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:06.653 22:37:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:21:06.653 22:37:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:06.653 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
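The create_subsystems loop above (target/shutdown.sh@27-28) appends one RPC batch per subsystem to the rpcs.txt it just removed, and rpc_cmd at @35 replays the whole file in a single session; the Malloc1..Malloc10 lines and the port-4420 listener notice are that batch executing. A reconstructed sketch of the batch's shape (the RPC names are standard SPDK RPCs, but the exact sizes and arguments shutdown.sh uses are not visible in this trace and are assumptions here):

for i in {1..10}; do
cat <<EOF
bdev_malloc_create -b Malloc$i 64 512
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
EOF
done >> rpcs.txt
rpc_cmd < rpcs.txt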
00:21:06.653 22:37:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # config=() 00:21:06.653 22:37:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:06.653 22:37:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # local subsystem config 00:21:06.653 22:37:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:06.653 22:37:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:06.653 22:37:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:06.653 { 00:21:06.653 "params": { 00:21:06.653 "name": "Nvme$subsystem", 00:21:06.653 "trtype": "$TEST_TRANSPORT", 00:21:06.653 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:06.653 "adrfam": "ipv4", 00:21:06.653 "trsvcid": "$NVMF_PORT", 00:21:06.653 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:06.653 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:06.653 "hdgst": ${hdgst:-false}, 00:21:06.653 "ddgst": ${ddgst:-false} 00:21:06.653 }, 00:21:06.653 "method": "bdev_nvme_attach_controller" 00:21:06.653 } 00:21:06.653 EOF 00:21:06.653 )") 00:21:06.653 22:37:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:21:06.653 22:37:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:06.653 22:37:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:06.653 { 00:21:06.653 "params": { 00:21:06.653 "name": "Nvme$subsystem", 00:21:06.653 "trtype": "$TEST_TRANSPORT", 00:21:06.653 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:06.653 "adrfam": "ipv4", 00:21:06.653 "trsvcid": "$NVMF_PORT", 00:21:06.653 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:06.653 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:06.653 "hdgst": ${hdgst:-false}, 00:21:06.653 "ddgst": ${ddgst:-false} 00:21:06.653 }, 00:21:06.653 "method": "bdev_nvme_attach_controller" 00:21:06.653 } 00:21:06.653 EOF 00:21:06.653 )") 00:21:06.653 22:37:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:21:06.653 22:37:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:06.653 22:37:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:06.653 { 00:21:06.653 "params": { 00:21:06.653 "name": "Nvme$subsystem", 00:21:06.653 "trtype": "$TEST_TRANSPORT", 00:21:06.653 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:06.653 "adrfam": "ipv4", 00:21:06.653 "trsvcid": "$NVMF_PORT", 00:21:06.653 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:06.653 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:06.653 "hdgst": ${hdgst:-false}, 00:21:06.653 "ddgst": ${ddgst:-false} 00:21:06.653 }, 00:21:06.653 "method": "bdev_nvme_attach_controller" 00:21:06.653 } 00:21:06.653 EOF 00:21:06.653 )") 00:21:06.653 22:37:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:21:06.653 22:37:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:06.653 22:37:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:06.653 { 00:21:06.653 "params": { 00:21:06.653 "name": "Nvme$subsystem", 00:21:06.653 "trtype": "$TEST_TRANSPORT", 00:21:06.653 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:06.653 "adrfam": "ipv4", 00:21:06.653 "trsvcid": "$NVMF_PORT", 
00:21:06.653 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:06.653 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:06.653 "hdgst": ${hdgst:-false}, 00:21:06.653 "ddgst": ${ddgst:-false} 00:21:06.653 }, 00:21:06.653 "method": "bdev_nvme_attach_controller" 00:21:06.653 } 00:21:06.653 EOF 00:21:06.653 )") 00:21:06.653 22:37:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:21:06.653 22:37:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:06.653 22:37:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:06.653 { 00:21:06.653 "params": { 00:21:06.653 "name": "Nvme$subsystem", 00:21:06.653 "trtype": "$TEST_TRANSPORT", 00:21:06.653 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:06.653 "adrfam": "ipv4", 00:21:06.653 "trsvcid": "$NVMF_PORT", 00:21:06.653 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:06.653 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:06.653 "hdgst": ${hdgst:-false}, 00:21:06.653 "ddgst": ${ddgst:-false} 00:21:06.653 }, 00:21:06.653 "method": "bdev_nvme_attach_controller" 00:21:06.653 } 00:21:06.653 EOF 00:21:06.653 )") 00:21:06.653 22:37:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:21:06.653 22:37:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:06.653 22:37:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:06.653 { 00:21:06.653 "params": { 00:21:06.653 "name": "Nvme$subsystem", 00:21:06.653 "trtype": "$TEST_TRANSPORT", 00:21:06.653 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:06.653 "adrfam": "ipv4", 00:21:06.653 "trsvcid": "$NVMF_PORT", 00:21:06.653 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:06.653 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:06.653 "hdgst": ${hdgst:-false}, 00:21:06.653 "ddgst": ${ddgst:-false} 00:21:06.653 }, 00:21:06.653 "method": "bdev_nvme_attach_controller" 00:21:06.653 } 00:21:06.653 EOF 00:21:06.653 )") 00:21:06.653 22:37:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:21:06.653 [2024-07-15 22:37:30.610525] Starting SPDK v24.09-pre git sha1 f8598a71f / DPDK 24.03.0 initialization... 
00:21:06.653 [2024-07-15 22:37:30.610573] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65065 ] 00:21:06.654 22:37:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:06.654 22:37:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:06.654 { 00:21:06.654 "params": { 00:21:06.654 "name": "Nvme$subsystem", 00:21:06.654 "trtype": "$TEST_TRANSPORT", 00:21:06.654 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:06.654 "adrfam": "ipv4", 00:21:06.654 "trsvcid": "$NVMF_PORT", 00:21:06.654 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:06.654 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:06.654 "hdgst": ${hdgst:-false}, 00:21:06.654 "ddgst": ${ddgst:-false} 00:21:06.654 }, 00:21:06.654 "method": "bdev_nvme_attach_controller" 00:21:06.654 } 00:21:06.654 EOF 00:21:06.654 )") 00:21:06.654 22:37:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:21:06.654 22:37:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:06.654 22:37:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:06.654 { 00:21:06.654 "params": { 00:21:06.654 "name": "Nvme$subsystem", 00:21:06.654 "trtype": "$TEST_TRANSPORT", 00:21:06.654 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:06.654 "adrfam": "ipv4", 00:21:06.654 "trsvcid": "$NVMF_PORT", 00:21:06.654 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:06.654 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:06.654 "hdgst": ${hdgst:-false}, 00:21:06.654 "ddgst": ${ddgst:-false} 00:21:06.654 }, 00:21:06.654 "method": "bdev_nvme_attach_controller" 00:21:06.654 } 00:21:06.654 EOF 00:21:06.654 )") 00:21:06.654 22:37:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:21:06.913 22:37:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:06.913 22:37:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:06.913 { 00:21:06.913 "params": { 00:21:06.913 "name": "Nvme$subsystem", 00:21:06.913 "trtype": "$TEST_TRANSPORT", 00:21:06.913 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:06.913 "adrfam": "ipv4", 00:21:06.913 "trsvcid": "$NVMF_PORT", 00:21:06.913 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:06.913 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:06.913 "hdgst": ${hdgst:-false}, 00:21:06.913 "ddgst": ${ddgst:-false} 00:21:06.913 }, 00:21:06.913 "method": "bdev_nvme_attach_controller" 00:21:06.913 } 00:21:06.913 EOF 00:21:06.913 )") 00:21:06.913 22:37:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:21:06.913 22:37:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:06.913 22:37:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:06.913 { 00:21:06.913 "params": { 00:21:06.913 "name": "Nvme$subsystem", 00:21:06.913 "trtype": "$TEST_TRANSPORT", 00:21:06.913 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:06.913 "adrfam": "ipv4", 00:21:06.913 "trsvcid": "$NVMF_PORT", 00:21:06.913 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:06.913 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:06.913 "hdgst": 
${hdgst:-false}, 00:21:06.913 "ddgst": ${ddgst:-false} 00:21:06.913 }, 00:21:06.913 "method": "bdev_nvme_attach_controller" 00:21:06.913 } 00:21:06.913 EOF 00:21:06.913 )") 00:21:06.913 22:37:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:21:06.913 EAL: No free 2048 kB hugepages reported on node 1 00:21:06.913 22:37:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@556 -- # jq . 00:21:06.913 22:37:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@557 -- # IFS=, 00:21:06.913 22:37:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:21:06.913 "params": { 00:21:06.913 "name": "Nvme1", 00:21:06.913 "trtype": "tcp", 00:21:06.913 "traddr": "10.0.0.2", 00:21:06.913 "adrfam": "ipv4", 00:21:06.913 "trsvcid": "4420", 00:21:06.913 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:06.913 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:06.913 "hdgst": false, 00:21:06.913 "ddgst": false 00:21:06.913 }, 00:21:06.913 "method": "bdev_nvme_attach_controller" 00:21:06.913 },{ 00:21:06.913 "params": { 00:21:06.913 "name": "Nvme2", 00:21:06.913 "trtype": "tcp", 00:21:06.913 "traddr": "10.0.0.2", 00:21:06.913 "adrfam": "ipv4", 00:21:06.913 "trsvcid": "4420", 00:21:06.913 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:06.913 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:21:06.913 "hdgst": false, 00:21:06.913 "ddgst": false 00:21:06.913 }, 00:21:06.913 "method": "bdev_nvme_attach_controller" 00:21:06.913 },{ 00:21:06.913 "params": { 00:21:06.913 "name": "Nvme3", 00:21:06.913 "trtype": "tcp", 00:21:06.913 "traddr": "10.0.0.2", 00:21:06.913 "adrfam": "ipv4", 00:21:06.913 "trsvcid": "4420", 00:21:06.913 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:21:06.913 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:21:06.913 "hdgst": false, 00:21:06.913 "ddgst": false 00:21:06.913 }, 00:21:06.913 "method": "bdev_nvme_attach_controller" 00:21:06.913 },{ 00:21:06.913 "params": { 00:21:06.913 "name": "Nvme4", 00:21:06.913 "trtype": "tcp", 00:21:06.913 "traddr": "10.0.0.2", 00:21:06.913 "adrfam": "ipv4", 00:21:06.913 "trsvcid": "4420", 00:21:06.913 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:21:06.913 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:21:06.913 "hdgst": false, 00:21:06.913 "ddgst": false 00:21:06.913 }, 00:21:06.913 "method": "bdev_nvme_attach_controller" 00:21:06.913 },{ 00:21:06.913 "params": { 00:21:06.913 "name": "Nvme5", 00:21:06.913 "trtype": "tcp", 00:21:06.913 "traddr": "10.0.0.2", 00:21:06.913 "adrfam": "ipv4", 00:21:06.913 "trsvcid": "4420", 00:21:06.913 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:21:06.913 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:21:06.913 "hdgst": false, 00:21:06.913 "ddgst": false 00:21:06.913 }, 00:21:06.913 "method": "bdev_nvme_attach_controller" 00:21:06.913 },{ 00:21:06.913 "params": { 00:21:06.913 "name": "Nvme6", 00:21:06.913 "trtype": "tcp", 00:21:06.913 "traddr": "10.0.0.2", 00:21:06.913 "adrfam": "ipv4", 00:21:06.913 "trsvcid": "4420", 00:21:06.913 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:21:06.913 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:21:06.913 "hdgst": false, 00:21:06.913 "ddgst": false 00:21:06.913 }, 00:21:06.913 "method": "bdev_nvme_attach_controller" 00:21:06.913 },{ 00:21:06.913 "params": { 00:21:06.913 "name": "Nvme7", 00:21:06.913 "trtype": "tcp", 00:21:06.913 "traddr": "10.0.0.2", 00:21:06.913 "adrfam": "ipv4", 00:21:06.913 "trsvcid": "4420", 00:21:06.913 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:21:06.913 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:21:06.913 "hdgst": false, 00:21:06.913 
"ddgst": false 00:21:06.913 }, 00:21:06.913 "method": "bdev_nvme_attach_controller" 00:21:06.913 },{ 00:21:06.913 "params": { 00:21:06.913 "name": "Nvme8", 00:21:06.913 "trtype": "tcp", 00:21:06.913 "traddr": "10.0.0.2", 00:21:06.913 "adrfam": "ipv4", 00:21:06.913 "trsvcid": "4420", 00:21:06.913 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:21:06.913 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:21:06.913 "hdgst": false, 00:21:06.913 "ddgst": false 00:21:06.913 }, 00:21:06.913 "method": "bdev_nvme_attach_controller" 00:21:06.913 },{ 00:21:06.913 "params": { 00:21:06.913 "name": "Nvme9", 00:21:06.913 "trtype": "tcp", 00:21:06.913 "traddr": "10.0.0.2", 00:21:06.913 "adrfam": "ipv4", 00:21:06.913 "trsvcid": "4420", 00:21:06.913 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:21:06.913 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:21:06.913 "hdgst": false, 00:21:06.913 "ddgst": false 00:21:06.913 }, 00:21:06.913 "method": "bdev_nvme_attach_controller" 00:21:06.913 },{ 00:21:06.913 "params": { 00:21:06.913 "name": "Nvme10", 00:21:06.913 "trtype": "tcp", 00:21:06.913 "traddr": "10.0.0.2", 00:21:06.913 "adrfam": "ipv4", 00:21:06.913 "trsvcid": "4420", 00:21:06.913 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:21:06.913 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:21:06.913 "hdgst": false, 00:21:06.913 "ddgst": false 00:21:06.913 }, 00:21:06.913 "method": "bdev_nvme_attach_controller" 00:21:06.913 }' 00:21:06.913 [2024-07-15 22:37:30.666982] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:06.913 [2024-07-15 22:37:30.740393] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:08.811 Running I/O for 10 seconds... 00:21:09.376 22:37:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:09.376 22:37:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@862 -- # return 0 00:21:09.376 22:37:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:21:09.376 22:37:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:09.376 22:37:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:09.376 22:37:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:09.376 22:37:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@107 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:21:09.376 22:37:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:21:09.376 22:37:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:21:09.376 22:37:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@57 -- # local ret=1 00:21:09.376 22:37:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local i 00:21:09.376 22:37:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:21:09.376 22:37:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:21:09.376 22:37:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:21:09.376 22:37:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:21:09.376 22:37:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 
00:21:09.376 22:37:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:09.376 22:37:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:09.376 22:37:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=200 00:21:09.376 22:37:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 200 -ge 100 ']' 00:21:09.376 22:37:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # ret=0 00:21:09.376 22:37:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # break 00:21:09.376 22:37:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@69 -- # return 0 00:21:09.376 22:37:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@110 -- # killprocess 65065 00:21:09.376 22:37:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@948 -- # '[' -z 65065 ']' 00:21:09.376 22:37:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # kill -0 65065 00:21:09.376 22:37:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # uname 00:21:09.376 22:37:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:09.376 22:37:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 65065 00:21:09.376 22:37:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:09.376 22:37:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:09.376 22:37:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 65065' 00:21:09.376 killing process with pid 65065 00:21:09.376 22:37:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@967 -- # kill 65065 00:21:09.376 22:37:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # wait 65065 00:21:09.634 Received shutdown signal, test time was about 0.899262 seconds 00:21:09.634 00:21:09.634 Latency(us) 00:21:09.634 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:09.634 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:09.634 Verification LBA range: start 0x0 length 0x400 00:21:09.634 Nvme1n1 : 0.88 297.39 18.59 0.00 0.00 212252.41 4359.57 199685.34 00:21:09.634 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:09.634 Verification LBA range: start 0x0 length 0x400 00:21:09.634 Nvme2n1 : 0.89 288.54 18.03 0.00 0.00 215468.74 20059.71 214274.23 00:21:09.634 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:09.634 Verification LBA range: start 0x0 length 0x400 00:21:09.634 Nvme3n1 : 0.87 292.64 18.29 0.00 0.00 207538.53 16070.57 213362.42 00:21:09.634 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:09.634 Verification LBA range: start 0x0 length 0x400 00:21:09.634 Nvme4n1 : 0.88 290.73 18.17 0.00 0.00 205824.67 16184.54 213362.42 00:21:09.634 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:09.634 Verification LBA range: start 0x0 length 0x400 00:21:09.634 Nvme5n1 : 0.89 287.58 17.97 0.00 0.00 204362.57 18236.10 211538.81 00:21:09.634 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:09.634 Verification 
LBA range: start 0x0 length 0x400 00:21:09.634 Nvme6n1 : 0.90 285.96 17.87 0.00 0.00 201474.23 32141.13 216097.84 00:21:09.634 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:09.634 Verification LBA range: start 0x0 length 0x400 00:21:09.635 Nvme7n1 : 0.90 284.89 17.81 0.00 0.00 198513.64 14132.98 218833.25 00:21:09.635 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:09.635 Verification LBA range: start 0x0 length 0x400 00:21:09.635 Nvme8n1 : 0.90 285.71 17.86 0.00 0.00 193882.16 24276.81 212450.62 00:21:09.635 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:09.635 Verification LBA range: start 0x0 length 0x400 00:21:09.635 Nvme9n1 : 0.86 222.71 13.92 0.00 0.00 242414.64 18008.15 222480.47 00:21:09.635 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:09.635 Verification LBA range: start 0x0 length 0x400 00:21:09.635 Nvme10n1 : 0.87 221.16 13.82 0.00 0.00 239061.26 18919.96 242540.19 00:21:09.635 =================================================================================================================== 00:21:09.635 Total : 2757.32 172.33 0.00 0.00 210574.38 4359.57 242540.19 00:21:09.635 22:37:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@113 -- # sleep 1 00:21:11.010 22:37:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # kill -0 64786 00:21:11.010 22:37:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@116 -- # stoptarget 00:21:11.010 22:37:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:21:11.010 22:37:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:21:11.010 22:37:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:11.010 22:37:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@45 -- # nvmftestfini 00:21:11.010 22:37:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:11.010 22:37:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # sync 00:21:11.010 22:37:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:11.010 22:37:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@120 -- # set +e 00:21:11.010 22:37:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:11.010 22:37:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:11.010 rmmod nvme_tcp 00:21:11.010 rmmod nvme_fabrics 00:21:11.010 rmmod nvme_keyring 00:21:11.010 22:37:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:11.010 22:37:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set -e 00:21:11.010 22:37:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # return 0 00:21:11.010 22:37:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@489 -- # '[' -n 64786 ']' 00:21:11.010 22:37:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@490 -- # killprocess 64786 00:21:11.010 22:37:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@948 -- # '[' -z 64786 ']' 00:21:11.010 22:37:34 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # kill -0 64786 00:21:11.010 22:37:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # uname 00:21:11.010 22:37:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:11.010 22:37:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 64786 00:21:11.010 22:37:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:21:11.010 22:37:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:21:11.010 22:37:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 64786' 00:21:11.010 killing process with pid 64786 00:21:11.010 22:37:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@967 -- # kill 64786 00:21:11.010 22:37:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # wait 64786 00:21:11.269 22:37:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:11.270 22:37:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:11.270 22:37:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:11.270 22:37:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:11.270 22:37:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:11.270 22:37:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:11.270 22:37:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:11.270 22:37:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:13.203 22:37:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:13.203 00:21:13.203 real 0m8.267s 00:21:13.203 user 0m25.818s 00:21:13.203 sys 0m1.311s 00:21:13.203 22:37:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:13.203 22:37:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:13.203 ************************************ 00:21:13.203 END TEST nvmf_shutdown_tc2 00:21:13.203 ************************************ 00:21:13.203 22:37:37 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1142 -- # return 0 00:21:13.203 22:37:37 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@149 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:21:13.203 22:37:37 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:21:13.203 22:37:37 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:13.203 22:37:37 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:21:13.462 ************************************ 00:21:13.462 START TEST nvmf_shutdown_tc3 00:21:13.462 ************************************ 00:21:13.462 22:37:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1123 -- # nvmf_shutdown_tc3 00:21:13.462 22:37:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@121 -- # starttarget 00:21:13.462 22:37:37 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@15 -- # nvmftestinit 00:21:13.462 22:37:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:13.462 22:37:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:13.462 22:37:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:13.462 22:37:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:13.462 22:37:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:13.462 22:37:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:13.462 22:37:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:13.462 22:37:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:13.462 22:37:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:13.462 22:37:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:13.462 22:37:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@285 -- # xtrace_disable 00:21:13.462 22:37:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:13.462 22:37:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:13.462 22:37:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # pci_devs=() 00:21:13.462 22:37:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:13.462 22:37:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:13.462 22:37:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:13.462 22:37:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:13.462 22:37:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:13.462 22:37:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # net_devs=() 00:21:13.462 22:37:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:13.462 22:37:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # e810=() 00:21:13.462 22:37:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # local -ga e810 00:21:13.462 22:37:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # x722=() 00:21:13.462 22:37:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # local -ga x722 00:21:13.462 22:37:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # mlx=() 00:21:13.462 22:37:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # local -ga mlx 00:21:13.462 22:37:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:13.462 22:37:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:13.462 22:37:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:13.462 22:37:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:13.462 22:37:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:13.462 22:37:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:13.462 22:37:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:13.462 22:37:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:13.462 22:37:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:13.462 22:37:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:13.462 22:37:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:13.462 22:37:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:13.462 22:37:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:13.462 22:37:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:13.462 22:37:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:13.462 22:37:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:13.462 22:37:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:13.463 22:37:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:13.463 22:37:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:21:13.463 Found 0000:86:00.0 (0x8086 - 0x159b) 00:21:13.463 22:37:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:13.463 22:37:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:13.463 22:37:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:13.463 22:37:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:13.463 22:37:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:13.463 22:37:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:13.463 22:37:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:21:13.463 Found 0000:86:00.1 (0x8086 - 0x159b) 00:21:13.463 22:37:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:13.463 22:37:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:13.463 22:37:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:13.463 22:37:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:13.463 22:37:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:13.463 22:37:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:13.463 22:37:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:13.463 22:37:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:13.463 22:37:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:13.463 22:37:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:13.463 22:37:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:13.463 22:37:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:13.463 22:37:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:13.463 22:37:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:13.463 22:37:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:13.463 22:37:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:21:13.463 Found net devices under 0000:86:00.0: cvl_0_0 00:21:13.463 22:37:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:13.463 22:37:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:13.463 22:37:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:13.463 22:37:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:13.463 22:37:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:13.463 22:37:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:13.463 22:37:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:13.463 22:37:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:13.463 22:37:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:21:13.463 Found net devices under 0000:86:00.1: cvl_0_1 00:21:13.463 22:37:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:13.463 22:37:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:13.463 22:37:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # is_hw=yes 00:21:13.463 22:37:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:13.463 22:37:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:13.463 22:37:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:13.463 22:37:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:13.463 22:37:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:13.463 22:37:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:13.463 22:37:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:13.463 22:37:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@236 -- # 
NVMF_TARGET_INTERFACE=cvl_0_0 00:21:13.463 22:37:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:13.463 22:37:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:13.463 22:37:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:13.463 22:37:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:13.463 22:37:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:13.463 22:37:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:13.463 22:37:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:13.463 22:37:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:13.463 22:37:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:13.463 22:37:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:13.463 22:37:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:13.463 22:37:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:13.463 22:37:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:13.723 22:37:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:13.723 22:37:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:13.723 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:13.723 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.204 ms 00:21:13.723 00:21:13.723 --- 10.0.0.2 ping statistics --- 00:21:13.723 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:13.723 rtt min/avg/max/mdev = 0.204/0.204/0.204/0.000 ms 00:21:13.723 22:37:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:13.723 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:13.723 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.255 ms 00:21:13.723 00:21:13.723 --- 10.0.0.1 ping statistics --- 00:21:13.723 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:13.723 rtt min/avg/max/mdev = 0.255/0.255/0.255/0.000 ms 00:21:13.723 22:37:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:13.723 22:37:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # return 0 00:21:13.723 22:37:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:13.723 22:37:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:13.723 22:37:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:13.723 22:37:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:13.723 22:37:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:13.723 22:37:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:13.723 22:37:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:13.723 22:37:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:21:13.723 22:37:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:13.723 22:37:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:13.723 22:37:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:13.723 22:37:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@481 -- # nvmfpid=66326 00:21:13.723 22:37:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # waitforlisten 66326 00:21:13.723 22:37:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:21:13.723 22:37:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@829 -- # '[' -z 66326 ']' 00:21:13.723 22:37:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:13.723 22:37:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:13.723 22:37:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:13.723 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:13.723 22:37:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:13.723 22:37:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:13.723 [2024-07-15 22:37:37.559528] Starting SPDK v24.09-pre git sha1 f8598a71f / DPDK 24.03.0 initialization... 
00:21:13.723 [2024-07-15 22:37:37.559576] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:13.723 EAL: No free 2048 kB hugepages reported on node 1 00:21:13.723 [2024-07-15 22:37:37.616786] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:13.982 [2024-07-15 22:37:37.697410] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:13.982 [2024-07-15 22:37:37.697442] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:13.982 [2024-07-15 22:37:37.697449] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:13.982 [2024-07-15 22:37:37.697455] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:13.982 [2024-07-15 22:37:37.697460] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:13.982 [2024-07-15 22:37:37.697545] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:13.982 [2024-07-15 22:37:37.697638] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:21:13.982 [2024-07-15 22:37:37.697744] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:13.982 [2024-07-15 22:37:37.697745] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:21:14.550 22:37:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:14.550 22:37:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@862 -- # return 0 00:21:14.550 22:37:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:14.550 22:37:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:14.550 22:37:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:14.550 22:37:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:14.550 22:37:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:14.550 22:37:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:14.550 22:37:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:14.551 [2024-07-15 22:37:38.405043] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:14.551 22:37:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:14.551 22:37:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:21:14.551 22:37:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:21:14.551 22:37:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:14.551 22:37:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:14.551 22:37:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:14.551 22:37:38 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:14.551 22:37:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:21:14.551 22:37:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:14.551 22:37:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:21:14.551 22:37:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:14.551 22:37:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:21:14.551 22:37:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:14.551 22:37:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:21:14.551 22:37:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:14.551 22:37:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:21:14.551 22:37:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:14.551 22:37:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:21:14.551 22:37:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:14.551 22:37:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:21:14.551 22:37:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:14.551 22:37:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:21:14.551 22:37:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:14.551 22:37:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:21:14.551 22:37:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:14.551 22:37:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:21:14.551 22:37:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@35 -- # rpc_cmd 00:21:14.551 22:37:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:14.551 22:37:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:14.551 Malloc1 00:21:14.551 [2024-07-15 22:37:38.500815] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:14.808 Malloc2 00:21:14.808 Malloc3 00:21:14.808 Malloc4 00:21:14.808 Malloc5 00:21:14.808 Malloc6 00:21:14.808 Malloc7 00:21:15.089 Malloc8 00:21:15.089 Malloc9 00:21:15.089 Malloc10 00:21:15.089 22:37:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:15.089 22:37:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:21:15.089 22:37:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:15.089 22:37:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:15.089 22:37:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # perfpid=66611 00:21:15.090 22:37:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # waitforlisten 66611 
/var/tmp/bdevperf.sock 00:21:15.090 22:37:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@829 -- # '[' -z 66611 ']' 00:21:15.090 22:37:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:15.090 22:37:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:21:15.090 22:37:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:15.090 22:37:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:21:15.090 22:37:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:15.090 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:15.090 22:37:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # config=() 00:21:15.090 22:37:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:15.090 22:37:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # local subsystem config 00:21:15.090 22:37:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:15.090 22:37:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:15.090 22:37:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:15.090 { 00:21:15.090 "params": { 00:21:15.090 "name": "Nvme$subsystem", 00:21:15.090 "trtype": "$TEST_TRANSPORT", 00:21:15.090 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:15.090 "adrfam": "ipv4", 00:21:15.090 "trsvcid": "$NVMF_PORT", 00:21:15.090 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:15.090 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:15.090 "hdgst": ${hdgst:-false}, 00:21:15.090 "ddgst": ${ddgst:-false} 00:21:15.090 }, 00:21:15.090 "method": "bdev_nvme_attach_controller" 00:21:15.090 } 00:21:15.090 EOF 00:21:15.090 )") 00:21:15.090 22:37:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:21:15.090 22:37:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:15.090 22:37:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:15.090 { 00:21:15.090 "params": { 00:21:15.090 "name": "Nvme$subsystem", 00:21:15.090 "trtype": "$TEST_TRANSPORT", 00:21:15.090 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:15.090 "adrfam": "ipv4", 00:21:15.090 "trsvcid": "$NVMF_PORT", 00:21:15.090 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:15.090 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:15.090 "hdgst": ${hdgst:-false}, 00:21:15.090 "ddgst": ${ddgst:-false} 00:21:15.090 }, 00:21:15.090 "method": "bdev_nvme_attach_controller" 00:21:15.090 } 00:21:15.090 EOF 00:21:15.090 )") 00:21:15.090 22:37:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:21:15.090 22:37:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:15.090 22:37:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:15.090 { 00:21:15.090 "params": { 
00:21:15.090 "name": "Nvme$subsystem", 00:21:15.090 "trtype": "$TEST_TRANSPORT", 00:21:15.090 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:15.090 "adrfam": "ipv4", 00:21:15.090 "trsvcid": "$NVMF_PORT", 00:21:15.090 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:15.090 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:15.090 "hdgst": ${hdgst:-false}, 00:21:15.090 "ddgst": ${ddgst:-false} 00:21:15.090 }, 00:21:15.090 "method": "bdev_nvme_attach_controller" 00:21:15.090 } 00:21:15.090 EOF 00:21:15.090 )") 00:21:15.090 22:37:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:21:15.090 22:37:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:15.090 22:37:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:15.090 { 00:21:15.090 "params": { 00:21:15.090 "name": "Nvme$subsystem", 00:21:15.090 "trtype": "$TEST_TRANSPORT", 00:21:15.090 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:15.090 "adrfam": "ipv4", 00:21:15.090 "trsvcid": "$NVMF_PORT", 00:21:15.090 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:15.090 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:15.090 "hdgst": ${hdgst:-false}, 00:21:15.090 "ddgst": ${ddgst:-false} 00:21:15.090 }, 00:21:15.090 "method": "bdev_nvme_attach_controller" 00:21:15.090 } 00:21:15.090 EOF 00:21:15.090 )") 00:21:15.090 22:37:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:21:15.090 22:37:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:15.090 22:37:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:15.090 { 00:21:15.090 "params": { 00:21:15.090 "name": "Nvme$subsystem", 00:21:15.090 "trtype": "$TEST_TRANSPORT", 00:21:15.090 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:15.090 "adrfam": "ipv4", 00:21:15.090 "trsvcid": "$NVMF_PORT", 00:21:15.090 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:15.090 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:15.090 "hdgst": ${hdgst:-false}, 00:21:15.090 "ddgst": ${ddgst:-false} 00:21:15.090 }, 00:21:15.090 "method": "bdev_nvme_attach_controller" 00:21:15.090 } 00:21:15.090 EOF 00:21:15.090 )") 00:21:15.090 22:37:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:21:15.090 22:37:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:15.090 22:37:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:15.090 { 00:21:15.090 "params": { 00:21:15.090 "name": "Nvme$subsystem", 00:21:15.090 "trtype": "$TEST_TRANSPORT", 00:21:15.090 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:15.090 "adrfam": "ipv4", 00:21:15.090 "trsvcid": "$NVMF_PORT", 00:21:15.090 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:15.090 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:15.090 "hdgst": ${hdgst:-false}, 00:21:15.090 "ddgst": ${ddgst:-false} 00:21:15.090 }, 00:21:15.090 "method": "bdev_nvme_attach_controller" 00:21:15.090 } 00:21:15.090 EOF 00:21:15.090 )") 00:21:15.090 22:37:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:21:15.090 22:37:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:15.090 22:37:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:15.090 { 00:21:15.090 "params": { 00:21:15.090 "name": 
"Nvme$subsystem", 00:21:15.090 "trtype": "$TEST_TRANSPORT", 00:21:15.090 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:15.090 "adrfam": "ipv4", 00:21:15.090 "trsvcid": "$NVMF_PORT", 00:21:15.090 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:15.090 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:15.090 "hdgst": ${hdgst:-false}, 00:21:15.090 "ddgst": ${ddgst:-false} 00:21:15.090 }, 00:21:15.090 "method": "bdev_nvme_attach_controller" 00:21:15.090 } 00:21:15.090 EOF 00:21:15.090 )") 00:21:15.090 22:37:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:21:15.090 [2024-07-15 22:37:38.972178] Starting SPDK v24.09-pre git sha1 f8598a71f / DPDK 24.03.0 initialization... 00:21:15.090 [2024-07-15 22:37:38.972234] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66611 ] 00:21:15.090 22:37:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:15.090 22:37:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:15.090 { 00:21:15.090 "params": { 00:21:15.090 "name": "Nvme$subsystem", 00:21:15.090 "trtype": "$TEST_TRANSPORT", 00:21:15.090 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:15.090 "adrfam": "ipv4", 00:21:15.090 "trsvcid": "$NVMF_PORT", 00:21:15.090 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:15.090 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:15.090 "hdgst": ${hdgst:-false}, 00:21:15.090 "ddgst": ${ddgst:-false} 00:21:15.090 }, 00:21:15.090 "method": "bdev_nvme_attach_controller" 00:21:15.090 } 00:21:15.090 EOF 00:21:15.090 )") 00:21:15.090 22:37:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:21:15.090 22:37:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:15.090 22:37:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:15.090 { 00:21:15.090 "params": { 00:21:15.090 "name": "Nvme$subsystem", 00:21:15.090 "trtype": "$TEST_TRANSPORT", 00:21:15.090 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:15.090 "adrfam": "ipv4", 00:21:15.090 "trsvcid": "$NVMF_PORT", 00:21:15.090 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:15.090 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:15.090 "hdgst": ${hdgst:-false}, 00:21:15.090 "ddgst": ${ddgst:-false} 00:21:15.090 }, 00:21:15.090 "method": "bdev_nvme_attach_controller" 00:21:15.090 } 00:21:15.090 EOF 00:21:15.090 )") 00:21:15.090 22:37:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:21:15.090 22:37:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:15.090 22:37:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:15.090 { 00:21:15.090 "params": { 00:21:15.090 "name": "Nvme$subsystem", 00:21:15.090 "trtype": "$TEST_TRANSPORT", 00:21:15.090 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:15.090 "adrfam": "ipv4", 00:21:15.090 "trsvcid": "$NVMF_PORT", 00:21:15.090 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:15.090 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:15.090 "hdgst": ${hdgst:-false}, 00:21:15.090 "ddgst": ${ddgst:-false} 00:21:15.090 }, 00:21:15.090 "method": "bdev_nvme_attach_controller" 00:21:15.090 } 00:21:15.090 EOF 00:21:15.090 )") 
00:21:15.090 22:37:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:21:15.090 22:37:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@556 -- # jq . 00:21:15.090 EAL: No free 2048 kB hugepages reported on node 1 00:21:15.091 22:37:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@557 -- # IFS=, 00:21:15.091 22:37:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:21:15.091 "params": { 00:21:15.091 "name": "Nvme1", 00:21:15.091 "trtype": "tcp", 00:21:15.091 "traddr": "10.0.0.2", 00:21:15.091 "adrfam": "ipv4", 00:21:15.091 "trsvcid": "4420", 00:21:15.091 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:15.091 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:15.091 "hdgst": false, 00:21:15.091 "ddgst": false 00:21:15.091 }, 00:21:15.091 "method": "bdev_nvme_attach_controller" 00:21:15.091 },{ 00:21:15.091 "params": { 00:21:15.091 "name": "Nvme2", 00:21:15.091 "trtype": "tcp", 00:21:15.091 "traddr": "10.0.0.2", 00:21:15.091 "adrfam": "ipv4", 00:21:15.091 "trsvcid": "4420", 00:21:15.091 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:15.091 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:21:15.091 "hdgst": false, 00:21:15.091 "ddgst": false 00:21:15.091 }, 00:21:15.091 "method": "bdev_nvme_attach_controller" 00:21:15.091 },{ 00:21:15.091 "params": { 00:21:15.091 "name": "Nvme3", 00:21:15.091 "trtype": "tcp", 00:21:15.091 "traddr": "10.0.0.2", 00:21:15.091 "adrfam": "ipv4", 00:21:15.091 "trsvcid": "4420", 00:21:15.091 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:21:15.091 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:21:15.091 "hdgst": false, 00:21:15.091 "ddgst": false 00:21:15.091 }, 00:21:15.091 "method": "bdev_nvme_attach_controller" 00:21:15.091 },{ 00:21:15.091 "params": { 00:21:15.091 "name": "Nvme4", 00:21:15.091 "trtype": "tcp", 00:21:15.091 "traddr": "10.0.0.2", 00:21:15.091 "adrfam": "ipv4", 00:21:15.091 "trsvcid": "4420", 00:21:15.091 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:21:15.091 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:21:15.091 "hdgst": false, 00:21:15.091 "ddgst": false 00:21:15.091 }, 00:21:15.091 "method": "bdev_nvme_attach_controller" 00:21:15.091 },{ 00:21:15.091 "params": { 00:21:15.091 "name": "Nvme5", 00:21:15.091 "trtype": "tcp", 00:21:15.091 "traddr": "10.0.0.2", 00:21:15.091 "adrfam": "ipv4", 00:21:15.091 "trsvcid": "4420", 00:21:15.091 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:21:15.091 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:21:15.091 "hdgst": false, 00:21:15.091 "ddgst": false 00:21:15.091 }, 00:21:15.091 "method": "bdev_nvme_attach_controller" 00:21:15.091 },{ 00:21:15.091 "params": { 00:21:15.091 "name": "Nvme6", 00:21:15.091 "trtype": "tcp", 00:21:15.091 "traddr": "10.0.0.2", 00:21:15.091 "adrfam": "ipv4", 00:21:15.091 "trsvcid": "4420", 00:21:15.091 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:21:15.091 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:21:15.091 "hdgst": false, 00:21:15.091 "ddgst": false 00:21:15.091 }, 00:21:15.091 "method": "bdev_nvme_attach_controller" 00:21:15.091 },{ 00:21:15.091 "params": { 00:21:15.091 "name": "Nvme7", 00:21:15.091 "trtype": "tcp", 00:21:15.091 "traddr": "10.0.0.2", 00:21:15.091 "adrfam": "ipv4", 00:21:15.091 "trsvcid": "4420", 00:21:15.091 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:21:15.091 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:21:15.091 "hdgst": false, 00:21:15.091 "ddgst": false 00:21:15.091 }, 00:21:15.091 "method": "bdev_nvme_attach_controller" 00:21:15.091 },{ 00:21:15.091 "params": { 00:21:15.091 "name": "Nvme8", 00:21:15.091 
"trtype": "tcp", 00:21:15.091 "traddr": "10.0.0.2", 00:21:15.091 "adrfam": "ipv4", 00:21:15.091 "trsvcid": "4420", 00:21:15.091 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:21:15.091 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:21:15.091 "hdgst": false, 00:21:15.091 "ddgst": false 00:21:15.091 }, 00:21:15.091 "method": "bdev_nvme_attach_controller" 00:21:15.091 },{ 00:21:15.091 "params": { 00:21:15.091 "name": "Nvme9", 00:21:15.091 "trtype": "tcp", 00:21:15.091 "traddr": "10.0.0.2", 00:21:15.091 "adrfam": "ipv4", 00:21:15.091 "trsvcid": "4420", 00:21:15.091 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:21:15.091 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:21:15.091 "hdgst": false, 00:21:15.091 "ddgst": false 00:21:15.091 }, 00:21:15.091 "method": "bdev_nvme_attach_controller" 00:21:15.091 },{ 00:21:15.091 "params": { 00:21:15.091 "name": "Nvme10", 00:21:15.091 "trtype": "tcp", 00:21:15.091 "traddr": "10.0.0.2", 00:21:15.091 "adrfam": "ipv4", 00:21:15.091 "trsvcid": "4420", 00:21:15.091 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:21:15.091 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:21:15.091 "hdgst": false, 00:21:15.091 "ddgst": false 00:21:15.091 }, 00:21:15.091 "method": "bdev_nvme_attach_controller" 00:21:15.091 }' 00:21:15.091 [2024-07-15 22:37:39.028698] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:15.359 [2024-07-15 22:37:39.102523] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:16.735 Running I/O for 10 seconds... 00:21:16.735 22:37:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:16.735 22:37:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@862 -- # return 0 00:21:16.735 22:37:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:21:16.735 22:37:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:16.735 22:37:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:16.735 22:37:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:16.735 22:37:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@130 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:16.735 22:37:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@132 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:21:16.735 22:37:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:21:16.735 22:37:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:21:16.735 22:37:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@57 -- # local ret=1 00:21:16.736 22:37:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local i 00:21:16.736 22:37:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:21:16.736 22:37:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:21:16.736 22:37:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:21:16.736 22:37:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:21:16.736 22:37:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:21:16.736 22:37:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:16.736 22:37:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:16.736 22:37:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=3 00:21:16.736 22:37:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:21:16.736 22:37:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:21:16.994 22:37:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:21:16.994 22:37:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:21:16.994 22:37:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:21:16.994 22:37:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:21:16.994 22:37:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:16.994 22:37:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:17.253 22:37:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:17.253 22:37:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=67 00:21:17.253 22:37:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:21:17.253 22:37:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:21:17.527 22:37:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:21:17.527 22:37:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:21:17.527 22:37:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:21:17.527 22:37:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:21:17.527 22:37:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:17.527 22:37:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:17.527 22:37:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:17.527 22:37:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=137 00:21:17.527 22:37:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 137 -ge 100 ']' 00:21:17.527 22:37:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # ret=0 00:21:17.527 22:37:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # break 00:21:17.527 22:37:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@69 -- # return 0 00:21:17.527 22:37:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@135 -- # killprocess 66326 00:21:17.527 22:37:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@948 -- # '[' -z 66326 ']' 00:21:17.527 22:37:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@952 -- # kill -0 66326 00:21:17.527 22:37:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@953 -- # uname 
00:21:17.527 22:37:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:17.527 22:37:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 66326 00:21:17.527 22:37:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:21:17.527 22:37:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:21:17.527 22:37:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 66326' killing process with pid 66326 00:21:17.527 22:37:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@967 -- # kill 66326 00:21:17.527 22:37:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@972 -- # wait 66326 00:21:17.527 [2024-07-15 22:37:41.332675] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x52cad0 is same with the state(5) to be set
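The trace from "Running I/O for 10 seconds..." down to "wait 66326" is target/shutdown.sh waiting for bdevperf to accumulate at least 100 reads on Nvme1n1 (read_io_count climbs 3, 67, 137 at 0.25 s intervals) and then killing the target app. A condensed sketch of that logic, paraphrased from the xtrace: rpc_cmd is the harness's wrapper around scripts/rpc.py, and the exact function bodies in target/shutdown.sh and common/autotest_common.sh differ in detail.

# Poll a bdev's read-op count over the bdevperf RPC socket until it crosses
# a threshold, giving up after 10 attempts (2.5 s total).
waitforio() {
    local sock=$1 bdev=$2
    local ret=1 i read_io_count
    for ((i = 10; i != 0; i--)); do
        read_io_count=$(rpc_cmd -s "$sock" bdev_get_iostat -b "$bdev" |
            jq -r '.bdevs[0].num_read_ops')
        if [ "$read_io_count" -ge 100 ]; then
            ret=0    # enough reads observed: the target is serving I/O
            break
        fi
        sleep 0.25
    done
    return $ret
}

# Terminate a test process: assert it exists, SIGTERM it, reap its status.
killprocess() {
    local pid=$1
    kill -0 "$pid"
    echo "killing process with pid $pid"
    kill "$pid" && wait "$pid"
}

In the run above this amounts to `waitforio /var/tmp/bdevperf.sock Nvme1n1` followed by `killprocess 66326`, whose teardown of the nvmf target triggers the recv-state errors that follow.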
[2024-07-15 22:37:41.334302] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x52f3a0 is same with the state(5) to be set
[2024-07-15 22:37:41.335236] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x52cf70 is same with the state(5) to be set
[2024-07-15 22:37:41.337588] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x52d8d0 is same with the state(5) to be set
[2024-07-15 22:37:41.338702] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x52dd70 is same with the state(5) to be set
[2024-07-15 22:37:41.340528] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x52e6d0 is same with the state(5) to be set
[2024-07-15 22:37:41.341728] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x52ea40 is same with the state(5) to be set
tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x52ea40 is same with the state(5) to be set 00:21:17.531 [2024-07-15 22:37:41.341897] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x52ea40 is same with the state(5) to be set 00:21:17.531 [2024-07-15 22:37:41.341903] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x52ea40 is same with the state(5) to be set 00:21:17.531 [2024-07-15 22:37:41.341908] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x52ea40 is same with the state(5) to be set 00:21:17.531 [2024-07-15 22:37:41.341914] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x52ea40 is same with the state(5) to be set 00:21:17.531 [2024-07-15 22:37:41.341919] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x52ea40 is same with the state(5) to be set 00:21:17.531 [2024-07-15 22:37:41.341925] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x52ea40 is same with the state(5) to be set 00:21:17.531 [2024-07-15 22:37:41.341931] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x52ea40 is same with the state(5) to be set 00:21:17.531 [2024-07-15 22:37:41.341937] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x52ea40 is same with the state(5) to be set 00:21:17.531 [2024-07-15 22:37:41.341942] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x52ea40 is same with the state(5) to be set 00:21:17.531 [2024-07-15 22:37:41.341948] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x52ea40 is same with the state(5) to be set 00:21:17.532 [2024-07-15 22:37:41.341953] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x52ea40 is same with the state(5) to be set 00:21:17.532 [2024-07-15 22:37:41.341959] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x52ea40 is same with the state(5) to be set 00:21:17.532 [2024-07-15 22:37:41.341964] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x52ea40 is same with the state(5) to be set 00:21:17.532 [2024-07-15 22:37:41.341971] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x52ea40 is same with the state(5) to be set 00:21:17.532 [2024-07-15 22:37:41.341978] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x52ea40 is same with the state(5) to be set 00:21:17.532 [2024-07-15 22:37:41.341984] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x52ea40 is same with the state(5) to be set 00:21:17.532 [2024-07-15 22:37:41.341989] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x52ea40 is same with the state(5) to be set 00:21:17.532 [2024-07-15 22:37:41.341995] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x52ea40 is same with the state(5) to be set 00:21:17.532 [2024-07-15 22:37:41.342001] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x52ea40 is same with the state(5) to be set 00:21:17.532 [2024-07-15 22:37:41.342006] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x52ea40 is same with the state(5) to be set 00:21:17.532 [2024-07-15 22:37:41.342014] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x52ea40 is same with the state(5) to be set 
00:21:17.532 [2024-07-15 22:37:41.342020] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x52ea40 is same with the state(5) to be set 00:21:17.532 [2024-07-15 22:37:41.342026] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x52ea40 is same with the state(5) to be set 00:21:17.532 [2024-07-15 22:37:41.342032] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x52ea40 is same with the state(5) to be set 00:21:17.532 [2024-07-15 22:37:41.342038] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x52ea40 is same with the state(5) to be set 00:21:17.532 [2024-07-15 22:37:41.342045] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x52ea40 is same with the state(5) to be set 00:21:17.532 [2024-07-15 22:37:41.342050] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x52ea40 is same with the state(5) to be set 00:21:17.532 [2024-07-15 22:37:41.342056] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x52ea40 is same with the state(5) to be set 00:21:17.532 [2024-07-15 22:37:41.342062] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x52ea40 is same with the state(5) to be set 00:21:17.532 [2024-07-15 22:37:41.342068] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x52ea40 is same with the state(5) to be set 00:21:17.532 [2024-07-15 22:37:41.342073] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x52ea40 is same with the state(5) to be set 00:21:17.532 [2024-07-15 22:37:41.342079] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x52ea40 is same with the state(5) to be set 00:21:17.532 [2024-07-15 22:37:41.342085] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x52ea40 is same with the state(5) to be set 00:21:17.532 [2024-07-15 22:37:41.342091] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x52ea40 is same with the state(5) to be set 00:21:17.532 [2024-07-15 22:37:41.342096] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x52ea40 is same with the state(5) to be set 00:21:17.532 [2024-07-15 22:37:41.342102] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x52ea40 is same with the state(5) to be set 00:21:17.532 [2024-07-15 22:37:41.367074] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:17.532 [2024-07-15 22:37:41.367129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.532 [2024-07-15 22:37:41.367139] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:17.532 [2024-07-15 22:37:41.367147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.532 [2024-07-15 22:37:41.367161] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:17.532 [2024-07-15 22:37:41.367168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.532 [2024-07-15 22:37:41.367176] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:17.532 [2024-07-15 22:37:41.367183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.532 [2024-07-15 22:37:41.367190] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fa3190 is same with the state(5) to be set 00:21:17.532 [2024-07-15 22:37:41.367232] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:17.532 [2024-07-15 22:37:41.367240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.532 [2024-07-15 22:37:41.367248] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:17.532 [2024-07-15 22:37:41.367255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.532 [2024-07-15 22:37:41.367262] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:17.532 [2024-07-15 22:37:41.367270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.532 [2024-07-15 22:37:41.367277] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:17.532 [2024-07-15 22:37:41.367285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.532 [2024-07-15 22:37:41.367291] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x214b660 is same with the state(5) to be set 00:21:17.532 [2024-07-15 22:37:41.367318] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:17.532 [2024-07-15 22:37:41.367326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.532 [2024-07-15 22:37:41.367334] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:17.532 [2024-07-15 22:37:41.367341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.532 [2024-07-15 22:37:41.367349] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:17.532 [2024-07-15 22:37:41.367356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.532 [2024-07-15 22:37:41.367363] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:17.532 [2024-07-15 22:37:41.367369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.532 [2024-07-15 22:37:41.367375] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fc7d60 is same with the 
state(5) to be set 00:21:17.532 [2024-07-15 22:37:41.367403] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:17.532 [2024-07-15 22:37:41.367411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.532 [2024-07-15 22:37:41.367421] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:17.532 [2024-07-15 22:37:41.367428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.532 [2024-07-15 22:37:41.367435] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:17.532 [2024-07-15 22:37:41.367441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.532 [2024-07-15 22:37:41.367448] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:17.532 [2024-07-15 22:37:41.367455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.532 [2024-07-15 22:37:41.367461] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fbd1d0 is same with the state(5) to be set 00:21:17.532 [2024-07-15 22:37:41.367484] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:17.532 [2024-07-15 22:37:41.367492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.532 [2024-07-15 22:37:41.367500] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:17.532 [2024-07-15 22:37:41.367506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.532 [2024-07-15 22:37:41.367513] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:17.532 [2024-07-15 22:37:41.367519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.532 [2024-07-15 22:37:41.367527] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:17.532 [2024-07-15 22:37:41.367534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.532 [2024-07-15 22:37:41.367541] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fc4b30 is same with the state(5) to be set 00:21:17.532 [2024-07-15 22:37:41.367565] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:17.532 [2024-07-15 22:37:41.367572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.532 [2024-07-15 22:37:41.367580] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:17.532 [2024-07-15 22:37:41.367586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.532 [2024-07-15 22:37:41.367593] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:17.533 [2024-07-15 22:37:41.367600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.533 [2024-07-15 22:37:41.367607] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:17.533 [2024-07-15 22:37:41.367614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.533 [2024-07-15 22:37:41.367620] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1acf340 is same with the state(5) to be set 00:21:17.533 [2024-07-15 22:37:41.367644] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:17.533 [2024-07-15 22:37:41.367653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.533 [2024-07-15 22:37:41.367661] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:17.533 [2024-07-15 22:37:41.367667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.533 [2024-07-15 22:37:41.367675] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:17.533 [2024-07-15 22:37:41.367681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.533 [2024-07-15 22:37:41.367688] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:17.533 [2024-07-15 22:37:41.367695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.533 [2024-07-15 22:37:41.367701] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2155050 is same with the state(5) to be set 00:21:17.533 [2024-07-15 22:37:41.367723] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:17.533 [2024-07-15 22:37:41.367731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.533 [2024-07-15 22:37:41.367739] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:17.533 [2024-07-15 22:37:41.367746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.533 [2024-07-15 22:37:41.367753] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 
cdw10:00000000 cdw11:00000000 00:21:17.533 [2024-07-15 22:37:41.367759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.533 [2024-07-15 22:37:41.367767] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:17.533 [2024-07-15 22:37:41.367773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.533 [2024-07-15 22:37:41.367780] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x214c8d0 is same with the state(5) to be set 00:21:17.533 [2024-07-15 22:37:41.367803] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:17.533 [2024-07-15 22:37:41.367811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.533 [2024-07-15 22:37:41.367818] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:17.533 [2024-07-15 22:37:41.367825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.533 [2024-07-15 22:37:41.367832] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:17.533 [2024-07-15 22:37:41.367838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.533 [2024-07-15 22:37:41.367846] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:17.533 [2024-07-15 22:37:41.367852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.533 [2024-07-15 22:37:41.367860] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f80c70 is same with the state(5) to be set 00:21:17.533 [2024-07-15 22:37:41.367884] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:17.533 [2024-07-15 22:37:41.367891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.533 [2024-07-15 22:37:41.367898] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:17.533 [2024-07-15 22:37:41.367905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.533 [2024-07-15 22:37:41.367912] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:17.533 [2024-07-15 22:37:41.367919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.533 [2024-07-15 22:37:41.367926] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:17.533 [2024-07-15 22:37:41.367932] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.533 [2024-07-15 22:37:41.367938] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2135890 is same with the state(5) to be set 00:21:17.533 [2024-07-15 22:37:41.382824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.533 [2024-07-15 22:37:41.382869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.533 [2024-07-15 22:37:41.382888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.533 [2024-07-15 22:37:41.382895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.533 [2024-07-15 22:37:41.382904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.533 [2024-07-15 22:37:41.382911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.533 [2024-07-15 22:37:41.382920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.533 [2024-07-15 22:37:41.382926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.533 [2024-07-15 22:37:41.382934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.533 [2024-07-15 22:37:41.382941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.533 [2024-07-15 22:37:41.382950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.533 [2024-07-15 22:37:41.382956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.533 [2024-07-15 22:37:41.382964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.533 [2024-07-15 22:37:41.382970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.533 [2024-07-15 22:37:41.382979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.533 [2024-07-15 22:37:41.382985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.533 [2024-07-15 22:37:41.382998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.533 [2024-07-15 22:37:41.383005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.533 [2024-07-15 22:37:41.383014] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.533 [2024-07-15 22:37:41.383020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.533 [2024-07-15 22:37:41.383028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.533 [2024-07-15 22:37:41.383034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.533 [2024-07-15 22:37:41.383043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.533 [2024-07-15 22:37:41.383049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.533 [2024-07-15 22:37:41.383057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.533 [2024-07-15 22:37:41.383064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.533 [2024-07-15 22:37:41.383072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.533 [2024-07-15 22:37:41.383079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.533 [2024-07-15 22:37:41.383087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.533 [2024-07-15 22:37:41.383094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.533 [2024-07-15 22:37:41.383102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.533 [2024-07-15 22:37:41.383108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.533 [2024-07-15 22:37:41.383116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.533 [2024-07-15 22:37:41.383123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.533 [2024-07-15 22:37:41.383131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.533 [2024-07-15 22:37:41.383137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.533 [2024-07-15 22:37:41.383145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.533 [2024-07-15 22:37:41.383152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.533 [2024-07-15 22:37:41.383161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.533 [2024-07-15 22:37:41.383167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.533 [2024-07-15 22:37:41.383175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.534 [2024-07-15 22:37:41.383183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.534 [2024-07-15 22:37:41.383192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.534 [2024-07-15 22:37:41.383198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.534 [2024-07-15 22:37:41.383206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.534 [2024-07-15 22:37:41.383212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.534 [2024-07-15 22:37:41.383220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.534 [2024-07-15 22:37:41.383233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.534 [2024-07-15 22:37:41.383242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.534 [2024-07-15 22:37:41.383248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.534 [2024-07-15 22:37:41.383256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.534 [2024-07-15 22:37:41.383262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.534 [2024-07-15 22:37:41.383271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.534 [2024-07-15 22:37:41.383277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.534 [2024-07-15 22:37:41.383285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.534 [2024-07-15 22:37:41.383293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.534 [2024-07-15 22:37:41.383302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.534 [2024-07-15 22:37:41.383308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.534 [2024-07-15 22:37:41.383316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 
nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.534 [2024-07-15 22:37:41.383322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.534 [2024-07-15 22:37:41.383330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.534 [2024-07-15 22:37:41.383337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.534 [2024-07-15 22:37:41.383345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.534 [2024-07-15 22:37:41.383352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.534 [2024-07-15 22:37:41.383360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.534 [2024-07-15 22:37:41.383366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.534 [2024-07-15 22:37:41.383376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.534 [2024-07-15 22:37:41.383383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.534 [2024-07-15 22:37:41.383391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.534 [2024-07-15 22:37:41.383398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.534 [2024-07-15 22:37:41.383406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.534 [2024-07-15 22:37:41.383413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.534 [2024-07-15 22:37:41.383421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.534 [2024-07-15 22:37:41.383427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.534 [2024-07-15 22:37:41.383435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.534 [2024-07-15 22:37:41.383442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.534 [2024-07-15 22:37:41.383450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.534 [2024-07-15 22:37:41.383457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.534 [2024-07-15 22:37:41.383465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 
lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.534 [2024-07-15 22:37:41.383472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.534 [2024-07-15 22:37:41.383480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.534 [2024-07-15 22:37:41.383486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.534 [2024-07-15 22:37:41.383494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.534 [2024-07-15 22:37:41.383501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.534 [2024-07-15 22:37:41.383509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.534 [2024-07-15 22:37:41.383515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.534 [2024-07-15 22:37:41.383523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.534 [2024-07-15 22:37:41.383529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.534 [2024-07-15 22:37:41.383537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.534 [2024-07-15 22:37:41.383544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.534 [2024-07-15 22:37:41.383552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.534 [2024-07-15 22:37:41.383560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.534 [2024-07-15 22:37:41.383568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.534 [2024-07-15 22:37:41.383574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.534 [2024-07-15 22:37:41.383583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.534 [2024-07-15 22:37:41.383589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.534 [2024-07-15 22:37:41.383598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.534 [2024-07-15 22:37:41.383605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.534 [2024-07-15 22:37:41.383613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.534 [2024-07-15 22:37:41.383619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.534 [2024-07-15 22:37:41.383628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.534 [2024-07-15 22:37:41.383634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.534 [2024-07-15 22:37:41.383642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.534 [2024-07-15 22:37:41.383649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.534 [2024-07-15 22:37:41.383657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.534 [2024-07-15 22:37:41.383664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.534 [2024-07-15 22:37:41.383671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.534 [2024-07-15 22:37:41.383678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.534 [2024-07-15 22:37:41.383686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.534 [2024-07-15 22:37:41.383692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.534 [2024-07-15 22:37:41.383700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.534 [2024-07-15 22:37:41.383707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.534 [2024-07-15 22:37:41.383715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.534 [2024-07-15 22:37:41.383721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.534 [2024-07-15 22:37:41.383729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.534 [2024-07-15 22:37:41.383736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.534 [2024-07-15 22:37:41.383745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.534 [2024-07-15 22:37:41.383751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.534 [2024-07-15 22:37:41.383760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.534 [2024-07-15 22:37:41.383766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.534 [2024-07-15 22:37:41.383774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.534 [2024-07-15 22:37:41.383780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.534 [2024-07-15 22:37:41.383788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.534 [2024-07-15 22:37:41.383794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.534 [2024-07-15 22:37:41.383802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.535 [2024-07-15 22:37:41.383809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.535 [2024-07-15 22:37:41.383817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.535 [2024-07-15 22:37:41.383823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.535 [2024-07-15 22:37:41.383831] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ca7a0 is same with the state(5) to be set 00:21:17.535 [2024-07-15 22:37:41.383892] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x20ca7a0 was disconnected and freed. reset controller. 
00:21:17.535 [2024-07-15 22:37:41.383931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:17.535 [2024-07-15 22:37:41.383940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... same WRITE sqid:1 / ABORTED - SQ DELETION pairs for cid:1 through cid:39, nsid:1 lba:24704 through lba:29568 (step 128) len:128, 2024-07-15 22:37:41.383950 through 22:37:41.384530 ...]
00:21:17.536 [2024-07-15
22:37:41.384539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.536 [2024-07-15 22:37:41.384545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.536 [2024-07-15 22:37:41.384553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.536 [2024-07-15 22:37:41.384559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.536 [2024-07-15 22:37:41.384568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.536 [2024-07-15 22:37:41.384574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.536 [2024-07-15 22:37:41.384583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.536 [2024-07-15 22:37:41.384591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.536 [2024-07-15 22:37:41.384600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.536 [2024-07-15 22:37:41.384606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.536 [2024-07-15 22:37:41.384614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.536 [2024-07-15 22:37:41.384621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.536 [2024-07-15 22:37:41.384629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.536 [2024-07-15 22:37:41.384635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.536 [2024-07-15 22:37:41.384643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.536 [2024-07-15 22:37:41.384651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.536 [2024-07-15 22:37:41.384659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.536 [2024-07-15 22:37:41.384666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.536 [2024-07-15 22:37:41.384674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.536 [2024-07-15 22:37:41.384681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.536 [2024-07-15 
22:37:41.384690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.536 [2024-07-15 22:37:41.384696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.536 [2024-07-15 22:37:41.384704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.536 [2024-07-15 22:37:41.384711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.536 [2024-07-15 22:37:41.384719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.536 [2024-07-15 22:37:41.384725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.536 [2024-07-15 22:37:41.384734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.536 [2024-07-15 22:37:41.384740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.536 [2024-07-15 22:37:41.384748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.536 [2024-07-15 22:37:41.384754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.536 [2024-07-15 22:37:41.384762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.536 [2024-07-15 22:37:41.384770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.536 [2024-07-15 22:37:41.384779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.536 [2024-07-15 22:37:41.384785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.536 [2024-07-15 22:37:41.384793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.536 [2024-07-15 22:37:41.384799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.536 [2024-07-15 22:37:41.384808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.536 [2024-07-15 22:37:41.384814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.536 [2024-07-15 22:37:41.384822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.536 [2024-07-15 22:37:41.384829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.536 [2024-07-15 
22:37:41.384837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.536 [2024-07-15 22:37:41.384844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.536 [2024-07-15 22:37:41.384851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.536 [2024-07-15 22:37:41.384858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.536 [2024-07-15 22:37:41.384866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.536 [2024-07-15 22:37:41.384872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.536 [2024-07-15 22:37:41.384881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.536 [2024-07-15 22:37:41.384887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.536 [2024-07-15 22:37:41.384894] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cbc30 is same with the state(5) to be set 00:21:17.536 [2024-07-15 22:37:41.385809] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x20cbc30 was disconnected and freed. reset controller. 00:21:17.536 [2024-07-15 22:37:41.385861] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fa3190 (9): Bad file descriptor 00:21:17.536 [2024-07-15 22:37:41.385884] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x214b660 (9): Bad file descriptor 00:21:17.536 [2024-07-15 22:37:41.385900] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fc7d60 (9): Bad file descriptor 00:21:17.536 [2024-07-15 22:37:41.385911] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fbd1d0 (9): Bad file descriptor 00:21:17.536 [2024-07-15 22:37:41.385925] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fc4b30 (9): Bad file descriptor 00:21:17.536 [2024-07-15 22:37:41.385939] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1acf340 (9): Bad file descriptor 00:21:17.536 [2024-07-15 22:37:41.385957] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2155050 (9): Bad file descriptor 00:21:17.536 [2024-07-15 22:37:41.385971] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x214c8d0 (9): Bad file descriptor 00:21:17.536 [2024-07-15 22:37:41.385985] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f80c70 (9): Bad file descriptor 00:21:17.536 [2024-07-15 22:37:41.385998] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2135890 (9): Bad file descriptor 00:21:17.536 [2024-07-15 22:37:41.386145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.536 [2024-07-15 
[... 62 further identical WRITE / ABORTED - SQ DELETION pairs elided: cid:1-62, lba:32896-40704, len:128 ...]
00:21:17.538 [2024-07-15 22:37:41.397444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:40832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:17.538 [2024-07-15 22:37:41.397451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:17.538 [2024-07-15 22:37:41.397525] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2048900 was disconnected and freed. reset controller.
00:21:17.538 [2024-07-15 22:37:41.397715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:17.538 [2024-07-15 22:37:41.397728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
*NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.538 [2024-07-15 22:37:41.397821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.538 [2024-07-15 22:37:41.397829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.538 [2024-07-15 22:37:41.397836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.538 [2024-07-15 22:37:41.397845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.538 [2024-07-15 22:37:41.397852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.538 [2024-07-15 22:37:41.397863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.538 [2024-07-15 22:37:41.397870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.538 [2024-07-15 22:37:41.397878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.538 [2024-07-15 22:37:41.397885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.538 [2024-07-15 22:37:41.397893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.538 [2024-07-15 22:37:41.397900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.538 [2024-07-15 22:37:41.397908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.538 [2024-07-15 22:37:41.397914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.538 [2024-07-15 22:37:41.397922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.538 [2024-07-15 22:37:41.397928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.538 [2024-07-15 22:37:41.397936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.538 [2024-07-15 22:37:41.397943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.538 [2024-07-15 22:37:41.397951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.538 [2024-07-15 22:37:41.397957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.539 [2024-07-15 22:37:41.397966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 
nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.539 [2024-07-15 22:37:41.397972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.539 [2024-07-15 22:37:41.397979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.539 [2024-07-15 22:37:41.397986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.539 [2024-07-15 22:37:41.397994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.539 [2024-07-15 22:37:41.398001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.539 [2024-07-15 22:37:41.398009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.539 [2024-07-15 22:37:41.398016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.539 [2024-07-15 22:37:41.398024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.539 [2024-07-15 22:37:41.398030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.539 [2024-07-15 22:37:41.398038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.539 [2024-07-15 22:37:41.398047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.539 [2024-07-15 22:37:41.398055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.539 [2024-07-15 22:37:41.398061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.539 [2024-07-15 22:37:41.398069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.539 [2024-07-15 22:37:41.398075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.539 [2024-07-15 22:37:41.398084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.539 [2024-07-15 22:37:41.398090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.539 [2024-07-15 22:37:41.398098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.539 [2024-07-15 22:37:41.398104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.539 [2024-07-15 22:37:41.398112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.539 [2024-07-15 22:37:41.398119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.539 [2024-07-15 22:37:41.398127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.539 [2024-07-15 22:37:41.398134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.539 [2024-07-15 22:37:41.398142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.539 [2024-07-15 22:37:41.398149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.539 [2024-07-15 22:37:41.398157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.539 [2024-07-15 22:37:41.398164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.539 [2024-07-15 22:37:41.398172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.539 [2024-07-15 22:37:41.398179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.539 [2024-07-15 22:37:41.398187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.539 [2024-07-15 22:37:41.398193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.539 [2024-07-15 22:37:41.398201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.539 [2024-07-15 22:37:41.398208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.539 [2024-07-15 22:37:41.398216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.539 [2024-07-15 22:37:41.398223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.539 [2024-07-15 22:37:41.398238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.539 [2024-07-15 22:37:41.398244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.539 [2024-07-15 22:37:41.398252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.539 [2024-07-15 22:37:41.398258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.539 [2024-07-15 22:37:41.398266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:21:17.539 [2024-07-15 22:37:41.398273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.539 [2024-07-15 22:37:41.398281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.539 [2024-07-15 22:37:41.398288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.539 [2024-07-15 22:37:41.398296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.539 [2024-07-15 22:37:41.398302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.539 [2024-07-15 22:37:41.398310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.539 [2024-07-15 22:37:41.398316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.539 [2024-07-15 22:37:41.398325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.539 [2024-07-15 22:37:41.398331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.539 [2024-07-15 22:37:41.398339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.539 [2024-07-15 22:37:41.398345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.539 [2024-07-15 22:37:41.398353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.539 [2024-07-15 22:37:41.398360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.539 [2024-07-15 22:37:41.398368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.539 [2024-07-15 22:37:41.398374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.539 [2024-07-15 22:37:41.398383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.539 [2024-07-15 22:37:41.398389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.539 [2024-07-15 22:37:41.398398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.539 [2024-07-15 22:37:41.398405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.539 [2024-07-15 22:37:41.398413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:17.539 [2024-07-15 22:37:41.398420 - 22:37:41.398671] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: READ sqid:1 cid:40-56 nsid:1 lba:29696-31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 -> ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 [17 identical command/completion pairs condensed]
00:21:17.540 [2024-07-15 22:37:41.398736] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1f7c440 was disconnected and freed. reset controller.
00:21:17.540 [2024-07-15 22:37:41.400699 - 22:37:41.400762] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. [notice repeated 4 times]
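Editor's note: the four "Unable to perform failover, already in progress." notices come from concurrent disconnected-qpair callbacks each requesting a failover while one reset is already underway; the function name bdev_nvme_failover_ctrlr_unsafe and the message imply the requests collapse into the single ongoing reset behind an in-progress flag. A minimal C sketch of that guard pattern follows; the struct and function names are illustrative only and do not mirror SPDK's actual bdev_nvme internals.

```c
#include <stdbool.h>
#include <stdio.h>
#include <pthread.h>

/* Illustrative controller state; field names are hypothetical. */
struct ctrlr_state {
	pthread_mutex_t lock;
	bool resetting; /* a failover/reset is already underway */
};

/* Returns false (and logs) when a failover is already in progress, so
 * concurrent disconnect events coalesce into the one ongoing reset. */
static bool
failover_ctrlr(struct ctrlr_state *ctrlr)
{
	bool started = false;

	pthread_mutex_lock(&ctrlr->lock);
	if (ctrlr->resetting) {
		fprintf(stderr, "Unable to perform failover, already in progress.\n");
	} else {
		ctrlr->resetting = true;
		started = true; /* caller proceeds to disconnect + reconnect */
	}
	pthread_mutex_unlock(&ctrlr->lock);
	return started;
}
```

Each disconnected qpair triggers one call; only the first caller actually starts the reset, which matches one "reset controller" notice followed by several "already in progress" notices in the log above.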
00:21:17.540 [2024-07-15 22:37:41.404490] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller
00:21:17.540 [2024-07-15 22:37:41.404586 - 22:37:41.405887] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: READ sqid:1 cid:0-63 nsid:1 lba:24576-32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 -> ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 [64 identical command/completion pairs condensed]
00:21:17.541 [2024-07-15 22:37:41.407243 - 22:37:41.408531] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: READ sqid:1 cid:4-63 nsid:1 lba:25088-32640 len:128 and WRITE sqid:1 cid:0-3 nsid:1 lba:32768-33152 len:128, SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 -> ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 [64 command/completion pairs condensed]
00:21:17.543 [2024-07-15 22:37:41.410190 - 22:37:41.411250] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: READ sqid:1 cid:0-51 nsid:1 lba:24576-31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 -> ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 [52 command/completion pairs condensed]
00:21:17.544 [2024-07-15 22:37:41.411261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.544 [2024-07-15 22:37:41.411271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.544 [2024-07-15 22:37:41.411283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.544 [2024-07-15 22:37:41.411291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.544 [2024-07-15 22:37:41.411302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.544 [2024-07-15 22:37:41.411311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.544 [2024-07-15 22:37:41.411322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.544 [2024-07-15 22:37:41.411330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.544 [2024-07-15 22:37:41.411341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.544 [2024-07-15 22:37:41.411350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.544 [2024-07-15 22:37:41.411360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.544 [2024-07-15 22:37:41.411369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.544 [2024-07-15 22:37:41.411380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.544 [2024-07-15 22:37:41.411389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.544 [2024-07-15 22:37:41.411400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.544 [2024-07-15 22:37:41.411408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.544 [2024-07-15 22:37:41.411420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.544 [2024-07-15 22:37:41.411428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.544 [2024-07-15 22:37:41.411439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.544 [2024-07-15 22:37:41.411448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.544 [2024-07-15 
22:37:41.411459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.544 [2024-07-15 22:37:41.411468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.544 [2024-07-15 22:37:41.411479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.544 [2024-07-15 22:37:41.411487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.544 [2024-07-15 22:37:41.412565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.544 [2024-07-15 22:37:41.412578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.545 [2024-07-15 22:37:41.412591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.545 [2024-07-15 22:37:41.412598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.545 [2024-07-15 22:37:41.412607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.545 [2024-07-15 22:37:41.412614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.545 [2024-07-15 22:37:41.412622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.545 [2024-07-15 22:37:41.412628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.545 [2024-07-15 22:37:41.412637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.545 [2024-07-15 22:37:41.412644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.545 [2024-07-15 22:37:41.412652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.545 [2024-07-15 22:37:41.412658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.545 [2024-07-15 22:37:41.412666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.545 [2024-07-15 22:37:41.412676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.545 [2024-07-15 22:37:41.412685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.545 [2024-07-15 22:37:41.412691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.545 [2024-07-15 22:37:41.412699] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.545 [2024-07-15 22:37:41.412705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.545 [2024-07-15 22:37:41.412713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.545 [2024-07-15 22:37:41.412720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.545 [2024-07-15 22:37:41.412728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.545 [2024-07-15 22:37:41.412734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.545 [2024-07-15 22:37:41.412742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.545 [2024-07-15 22:37:41.412748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.545 [2024-07-15 22:37:41.412756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.545 [2024-07-15 22:37:41.412762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.545 [2024-07-15 22:37:41.412771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.545 [2024-07-15 22:37:41.412779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.545 [2024-07-15 22:37:41.412787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.545 [2024-07-15 22:37:41.412793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.545 [2024-07-15 22:37:41.412801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.545 [2024-07-15 22:37:41.412808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.545 [2024-07-15 22:37:41.412816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.545 [2024-07-15 22:37:41.412822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.545 [2024-07-15 22:37:41.412830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.545 [2024-07-15 22:37:41.412837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.545 [2024-07-15 22:37:41.412845] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.545 [2024-07-15 22:37:41.412851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.545 [2024-07-15 22:37:41.412859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.545 [2024-07-15 22:37:41.412866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.545 [2024-07-15 22:37:41.412874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.545 [2024-07-15 22:37:41.412880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.545 [2024-07-15 22:37:41.412889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.545 [2024-07-15 22:37:41.412895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.545 [2024-07-15 22:37:41.412903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.545 [2024-07-15 22:37:41.412912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.545 [2024-07-15 22:37:41.412921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.545 [2024-07-15 22:37:41.412927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.545 [2024-07-15 22:37:41.412936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.545 [2024-07-15 22:37:41.412943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.545 [2024-07-15 22:37:41.412951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.545 [2024-07-15 22:37:41.412957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.545 [2024-07-15 22:37:41.412970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.545 [2024-07-15 22:37:41.412977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.545 [2024-07-15 22:37:41.412986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.545 [2024-07-15 22:37:41.412992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.545 [2024-07-15 22:37:41.413001] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.545 [2024-07-15 22:37:41.413007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.545 [2024-07-15 22:37:41.413016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.545 [2024-07-15 22:37:41.413023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.545 [2024-07-15 22:37:41.413031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.545 [2024-07-15 22:37:41.413037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.545 [2024-07-15 22:37:41.413046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.545 [2024-07-15 22:37:41.413052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.545 [2024-07-15 22:37:41.413060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.545 [2024-07-15 22:37:41.413067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.545 [2024-07-15 22:37:41.413076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.545 [2024-07-15 22:37:41.413082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.545 [2024-07-15 22:37:41.413091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.545 [2024-07-15 22:37:41.413097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.545 [2024-07-15 22:37:41.413105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.545 [2024-07-15 22:37:41.413112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.545 [2024-07-15 22:37:41.413120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.545 [2024-07-15 22:37:41.413127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.545 [2024-07-15 22:37:41.413135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.545 [2024-07-15 22:37:41.413141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.545 [2024-07-15 22:37:41.413149] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.545 [2024-07-15 22:37:41.413158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.545 [2024-07-15 22:37:41.413166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.545 [2024-07-15 22:37:41.413172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.545 [2024-07-15 22:37:41.413180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.545 [2024-07-15 22:37:41.413187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.545 [2024-07-15 22:37:41.413194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.545 [2024-07-15 22:37:41.413201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.545 [2024-07-15 22:37:41.413209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.546 [2024-07-15 22:37:41.413215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.546 [2024-07-15 22:37:41.413228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.546 [2024-07-15 22:37:41.413234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.546 [2024-07-15 22:37:41.413242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.546 [2024-07-15 22:37:41.413248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.546 [2024-07-15 22:37:41.413256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.546 [2024-07-15 22:37:41.413262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.546 [2024-07-15 22:37:41.413272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.546 [2024-07-15 22:37:41.413278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.546 [2024-07-15 22:37:41.413286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.546 [2024-07-15 22:37:41.413293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.546 [2024-07-15 22:37:41.413301] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.546 [2024-07-15 22:37:41.413307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.546 [2024-07-15 22:37:41.413315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.546 [2024-07-15 22:37:41.413321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.546 [2024-07-15 22:37:41.413329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.546 [2024-07-15 22:37:41.413335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.546 [2024-07-15 22:37:41.413345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.546 [2024-07-15 22:37:41.413351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.546 [2024-07-15 22:37:41.413359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.546 [2024-07-15 22:37:41.413365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.546 [2024-07-15 22:37:41.413373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.546 [2024-07-15 22:37:41.413379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.546 [2024-07-15 22:37:41.413387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.546 [2024-07-15 22:37:41.413393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.546 [2024-07-15 22:37:41.413401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.546 [2024-07-15 22:37:41.413408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.546 [2024-07-15 22:37:41.413415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.546 [2024-07-15 22:37:41.413422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.546 [2024-07-15 22:37:41.413430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.546 [2024-07-15 22:37:41.413437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.546 [2024-07-15 22:37:41.413445] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.546 [2024-07-15 22:37:41.413451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.546 [2024-07-15 22:37:41.413458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.546 [2024-07-15 22:37:41.413465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.546 [2024-07-15 22:37:41.413472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.546 [2024-07-15 22:37:41.413479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.546 [2024-07-15 22:37:41.413487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.546 [2024-07-15 22:37:41.413493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.546 [2024-07-15 22:37:41.413500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.546 [2024-07-15 22:37:41.413507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.546 [2024-07-15 22:37:41.413514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.546 [2024-07-15 22:37:41.413522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.546 [2024-07-15 22:37:41.414782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.546 [2024-07-15 22:37:41.414796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.546 [2024-07-15 22:37:41.414807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.546 [2024-07-15 22:37:41.414814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.546 [2024-07-15 22:37:41.414822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.546 [2024-07-15 22:37:41.414830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.546 [2024-07-15 22:37:41.414838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.546 [2024-07-15 22:37:41.414846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.546 [2024-07-15 22:37:41.414854] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.546 [2024-07-15 22:37:41.414861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.546 [2024-07-15 22:37:41.414870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.546 [2024-07-15 22:37:41.414876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.546 [2024-07-15 22:37:41.414885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.546 [2024-07-15 22:37:41.414892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.546 [2024-07-15 22:37:41.414900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.546 [2024-07-15 22:37:41.414907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.546 [2024-07-15 22:37:41.414914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.546 [2024-07-15 22:37:41.414921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.546 [2024-07-15 22:37:41.414929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.546 [2024-07-15 22:37:41.414936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.546 [2024-07-15 22:37:41.414945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.546 [2024-07-15 22:37:41.414951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.546 [2024-07-15 22:37:41.414959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.546 [2024-07-15 22:37:41.414966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.546 [2024-07-15 22:37:41.414976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.546 [2024-07-15 22:37:41.414984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.546 [2024-07-15 22:37:41.414993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.546 [2024-07-15 22:37:41.414999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.546 [2024-07-15 22:37:41.415007] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.546 [2024-07-15 22:37:41.415013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.546 [2024-07-15 22:37:41.415022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.546 [2024-07-15 22:37:41.415028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.546 [2024-07-15 22:37:41.415036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.546 [2024-07-15 22:37:41.415043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.546 [2024-07-15 22:37:41.415052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.546 [2024-07-15 22:37:41.415058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.546 [2024-07-15 22:37:41.415067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.546 [2024-07-15 22:37:41.425862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.546 [2024-07-15 22:37:41.425883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.546 [2024-07-15 22:37:41.425892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.546 [2024-07-15 22:37:41.425903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.546 [2024-07-15 22:37:41.425911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.547 [2024-07-15 22:37:41.425922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.547 [2024-07-15 22:37:41.425930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.547 [2024-07-15 22:37:41.425940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.547 [2024-07-15 22:37:41.425949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.547 [2024-07-15 22:37:41.425962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.547 [2024-07-15 22:37:41.425971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.547 [2024-07-15 22:37:41.425981] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.547 [2024-07-15 22:37:41.425994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.547 [2024-07-15 22:37:41.426005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.547 [2024-07-15 22:37:41.426014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.547 [2024-07-15 22:37:41.426025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.547 [2024-07-15 22:37:41.426034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.547 [2024-07-15 22:37:41.426044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.547 [2024-07-15 22:37:41.426053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.547 [2024-07-15 22:37:41.426063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.547 [2024-07-15 22:37:41.426072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.547 [2024-07-15 22:37:41.426082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.547 [2024-07-15 22:37:41.426091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.547 [2024-07-15 22:37:41.426101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.547 [2024-07-15 22:37:41.426110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.547 [2024-07-15 22:37:41.426121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.547 [2024-07-15 22:37:41.426130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.547 [2024-07-15 22:37:41.426140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.547 [2024-07-15 22:37:41.426149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.547 [2024-07-15 22:37:41.426160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.547 [2024-07-15 22:37:41.426169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.547 [2024-07-15 22:37:41.426179] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.547 [2024-07-15 22:37:41.426188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.547 [2024-07-15 22:37:41.426198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.547 [2024-07-15 22:37:41.426206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.547 [2024-07-15 22:37:41.426217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.547 [2024-07-15 22:37:41.426231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.547 [2024-07-15 22:37:41.426243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.547 [2024-07-15 22:37:41.426251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.547 [2024-07-15 22:37:41.426262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.547 [2024-07-15 22:37:41.426272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.547 [2024-07-15 22:37:41.426283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.547 [2024-07-15 22:37:41.426291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.547 [2024-07-15 22:37:41.426302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.547 [2024-07-15 22:37:41.426311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.547 [2024-07-15 22:37:41.426322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.547 [2024-07-15 22:37:41.426332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.547 [2024-07-15 22:37:41.426342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.547 [2024-07-15 22:37:41.426351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.547 [2024-07-15 22:37:41.426362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.547 [2024-07-15 22:37:41.426370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.547 [2024-07-15 22:37:41.426381] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.547 [2024-07-15 22:37:41.426390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.547 [2024-07-15 22:37:41.426400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.547 [2024-07-15 22:37:41.426408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.547 [2024-07-15 22:37:41.426418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.547 [2024-07-15 22:37:41.426429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.547 [2024-07-15 22:37:41.426439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.547 [2024-07-15 22:37:41.426447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.547 [2024-07-15 22:37:41.426458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.547 [2024-07-15 22:37:41.426467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.547 [2024-07-15 22:37:41.426478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.547 [2024-07-15 22:37:41.426486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.547 [2024-07-15 22:37:41.426499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.547 [2024-07-15 22:37:41.426509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.547 [2024-07-15 22:37:41.426520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.547 [2024-07-15 22:37:41.426529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.547 [2024-07-15 22:37:41.426540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.547 [2024-07-15 22:37:41.426549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.547 [2024-07-15 22:37:41.426559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.547 [2024-07-15 22:37:41.426568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.547 [2024-07-15 22:37:41.426579] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:17.547 [2024-07-15 22:37:41.426587-426764] nvme_qpair.c: READ sqid:1 cid:54-63 nsid:1, lba 31488-32640 in steps of 128, len:128 each - every command printed by nvme_io_qpair_print_command and every completion reported as ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:17.548 [2024-07-15 22:37:41.428028-429302] nvme_qpair.c: READ sqid:1 cid:0-63 nsid:1, lba 24576-32640 in steps of 128, len:128 each - every command printed by nvme_io_qpair_print_command and every completion reported as ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:17.549 [2024-07-15 22:37:41.430853] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller
00:21:17.549 [2024-07-15 22:37:41.430880] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller
00:21:17.549 [2024-07-15 22:37:41.430893] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller
00:21:17.549 [2024-07-15 22:37:41.430904] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:17.549 [2024-07-15 22:37:41.430915] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller
00:21:17.549 [2024-07-15 22:37:41.431270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:17.549 [2024-07-15 22:37:41.431288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2155050 with addr=10.0.0.2, port=4420
00:21:17.549 [2024-07-15 22:37:41.431299] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2155050 is same with the state(5) to be set
00:21:17.549 [2024-07-15 22:37:41.431331-431376] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. (4 times)
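The two abort bursts summarized above are consistent with the shutdown test tearing the target down: when a TCP I/O qpair's submission queue is deleted, every read still in flight completes as ABORTED - SQ DELETION, and subsequent reconnect attempts die in connect() with errno 111 (ECONNREFUSED) because nothing is listening on 10.0.0.2:4420 any more. A quick one-off check from the initiator side (illustrative sketch, not part of the test scripts):

    # Probe the NVMe-oF TCP port; a refused connection is exactly what the
    # SPDK log prints as "connect() failed, errno = 111".
    if nc -z -w 1 10.0.0.2 4420; then
        echo "listener still present on 10.0.0.2:4420"
    else
        echo "no listener on 10.0.0.2:4420 (connect() fails with ECONNREFUSED)"
    fi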
00:21:17.549 [2024-07-15 22:37:41.431396] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2155050 (9): Bad file descriptor
00:21:17.549 [2024-07-15 22:37:41.431511] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller
00:21:17.549 [2024-07-15 22:37:41.431526] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller
00:21:17.549 [2024-07-15 22:37:41.431538] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller
00:21:17.549 task offset: 24576 on job bdev=Nvme9n1 fails
00:21:17.549
00:21:17.549 Latency(us)
00:21:17.549 Device Information : runtime(s)  IOPS    MiB/s  Fail/s  TO/s  Average    min       max
00:21:17.549 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:17.549 Job: Nvme1n1 ended in about 0.94 seconds with error
00:21:17.549 Verification LBA range: start 0x0 length 0x400
00:21:17.549 Nvme1n1  : 0.94  203.53  12.72  67.84  0.00  233527.43  15956.59  214274.23
00:21:17.549 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:17.549 Job: Nvme2n1 ended in about 0.95 seconds with error
00:21:17.549 Verification LBA range: start 0x0 length 0x400
00:21:17.549 Nvme2n1  : 0.95  207.20  12.95  67.66  0.00  226770.34  19603.81  200597.15
00:21:17.549 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:17.549 Job: Nvme3n1 ended in about 0.94 seconds with error
00:21:17.549 Verification LBA range: start 0x0 length 0x400
00:21:17.549 Nvme3n1  : 0.94  272.85  17.05  68.21  0.00  179512.90  15614.66  214274.23
00:21:17.549 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:17.549 Job: Nvme4n1 ended in about 0.95 seconds with error
00:21:17.549 Verification LBA range: start 0x0 length 0x400
00:21:17.549 Nvme4n1  : 0.95  202.34  12.65  67.45  0.00  223164.10  15842.62  215186.03
00:21:17.549 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:17.549 Job: Nvme5n1 ended in about 0.95 seconds with error
00:21:17.549 Verification LBA range: start 0x0 length 0x400
00:21:17.549 Nvme5n1  : 0.95  201.92  12.62  67.31  0.00  219702.32  16412.49  213362.42
00:21:17.549 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:17.549 Job: Nvme6n1 ended in about 0.94 seconds with error
00:21:17.549 Verification LBA range: start 0x0 length 0x400
00:21:17.549 Nvme6n1  : 0.94  208.60  13.04  68.12  0.00  209667.42   5442.34  220656.86
00:21:17.549 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:17.550 Job: Nvme7n1 ended in about 0.96 seconds with error
00:21:17.550 Verification LBA range: start 0x0 length 0x400
00:21:17.550 Nvme7n1  : 0.96  199.14  12.45  66.38  0.00  215122.37  18805.98  230686.72
00:21:17.550 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:17.550 Job: Nvme8n1 ended in about 0.97 seconds with error
00:21:17.550 Verification LBA range: start 0x0 length 0x400
00:21:17.550 Nvme8n1  : 0.97  198.62  12.41  66.21  0.00  211834.55  14360.93  217009.64
00:21:17.550 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:17.550 Job: Nvme9n1 ended in about 0.94 seconds with error
00:21:17.550 Verification LBA range: start 0x0 length 0x400
00:21:17.550 Nvme9n1  : 0.94  205.09  12.82  68.36  0.00  200372.31  33964.74  223392.28
00:21:17.550 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:17.550 Job: Nvme10n1 ended in about 0.94 seconds with error
00:21:17.550 Verification LBA range: start 0x0 length 0x400
00:21:17.550 Nvme10n1 : 0.94  204.89  12.81  68.30  0.00  196723.98  19261.89  240716.58
00:21:17.550 ===================================================================================================================
00:21:17.550 Total    : 0.94 2104.20 131.51 675.83  0.00  210878.57   5442.34  240716.58
00:21:17.550 [2024-07-15 22:37:41.459024] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:21:17.550 [2024-07-15 22:37:41.459066] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller
00:21:17.550 [2024-07-15 22:37:41.459402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:17.550 [2024-07-15 22:37:41.459422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fc7d60 with addr=10.0.0.2, port=4420
00:21:17.550 [2024-07-15 22:37:41.459433] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fc7d60 is same with the state(5) to be set
00:21:17.550 [2024-07-15 22:37:41.459578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:17.550 [2024-07-15 22:37:41.459590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2135890 with addr=10.0.0.2, port=4420
00:21:17.550 [2024-07-15 22:37:41.459598] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2135890 is same with the state(5) to be set
00:21:17.550 [2024-07-15 22:37:41.459813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:17.550 [2024-07-15 22:37:41.459825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fc4b30 with addr=10.0.0.2, port=4420
00:21:17.550 [2024-07-15 22:37:41.459832] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fc4b30 is same with the state(5) to be set
00:21:17.550 [2024-07-15 22:37:41.460030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:17.550 [2024-07-15 22:37:41.460041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f80c70 with addr=10.0.0.2, port=4420
00:21:17.550 [2024-07-15 22:37:41.460054] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f80c70 is same with the state(5) to be set
00:21:17.550 [2024-07-15 22:37:41.460238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:17.550 [2024-07-15 22:37:41.460250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x214c8d0 with addr=10.0.0.2, port=4420
00:21:17.550 [2024-07-15 22:37:41.460257] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x214c8d0 is same with the state(5) to be set
00:21:17.550 [2024-07-15 22:37:41.461874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:17.550 [2024-07-15 22:37:41.461897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fbd1d0 with addr=10.0.0.2, port=4420
00:21:17.550 [2024-07-15 22:37:41.461905] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fbd1d0 is same with the state(5) to be set
00:21:17.550 [2024-07-15 22:37:41.462172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:17.550 [2024-07-15 22:37:41.462184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1acf340 with addr=10.0.0.2, port=4420
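Up to rounding, the Total row in the bdevperf summary above is the column-wise sum of the ten per-device rows (min and max are the extremes across devices). This can be checked mechanically; a sketch, assuming the bare per-device rows have been copied into a hypothetical results.txt:

    # Fields in a row "Nvme1n1 : 0.94 203.53 12.72 67.84 ...":
    # $3=runtime(s), $4=IOPS, $5=MiB/s, $6=Fail/s
    awk '/^Nvme[0-9]+n1 / { iops += $4; mibs += $5; fails += $6 }
         END { printf "IOPS=%.2f MiB/s=%.2f Fail/s=%.2f\n", iops, mibs, fails }' results.txt
    # -> IOPS=2104.18 MiB/s=131.52 Fail/s=675.84, matching the reported
    #    2104.20 / 131.51 / 675.83 within per-row rounding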
00:21:17.550 [2024-07-15 22:37:41.462191] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1acf340 is same with the state(5) to be set
00:21:17.550 [2024-07-15 22:37:41.462461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:17.550 [2024-07-15 22:37:41.462473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fa3190 with addr=10.0.0.2, port=4420
00:21:17.550 [2024-07-15 22:37:41.462481] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fa3190 is same with the state(5) to be set
00:21:17.550 [2024-07-15 22:37:41.462627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:17.550 [2024-07-15 22:37:41.462640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x214b660 with addr=10.0.0.2, port=4420
00:21:17.550 [2024-07-15 22:37:41.462647] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x214b660 is same with the state(5) to be set
00:21:17.550 [2024-07-15 22:37:41.462660-462699] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fc7d60, 0x2135890, 0x1fc4b30, 0x1f80c70 and 0x214c8d0 (9): Bad file descriptor
00:21:17.550 [2024-07-15 22:37:41.462708] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state
00:21:17.550 [2024-07-15 22:37:41.462715] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed
00:21:17.550 [2024-07-15 22:37:41.462724] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state.
00:21:17.550 [2024-07-15 22:37:41.462761-462818] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. (6 times)
00:21:17.550 [2024-07-15 22:37:41.462889] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:17.550 [2024-07-15 22:37:41.462901-462927] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fbd1d0, 0x1acf340, 0x1fa3190 and 0x214b660 (9): Bad file descriptor
00:21:17.550 [2024-07-15 22:37:41.462934-463041] nvme_ctrlr.c: *ERROR*: the same error-state / reinitialization-failed / failed-state triplet as for cnode3 above, repeated for nqn.2016-06.io.spdk:cnode9, cnode10, cnode6, cnode1 and cnode2
00:21:17.550 [2024-07-15 22:37:41.463103-463129] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. (5 times)
00:21:17.550 [2024-07-15 22:37:41.463136-463218] nvme_ctrlr.c: *ERROR*: the same triplet repeated for nqn.2016-06.io.spdk:cnode4, cnode5, cnode7 and cnode8
00:21:17.550 [2024-07-15 22:37:41.463247-463268] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. (4 times)
00:21:17.809 22:37:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # nvmfpid= 00:21:17.809 22:37:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@139 -- # sleep 1 00:21:19.186 22:37:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # kill -9 66611 00:21:19.186 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 142: kill: (66611) - No such process 00:21:19.186 22:37:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # true 00:21:19.186 22:37:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@144 -- # stoptarget 00:21:19.186 22:37:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:21:19.186 22:37:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:21:19.186 22:37:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:19.186 22:37:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@45 -- # nvmftestfini 00:21:19.186 22:37:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:19.186 22:37:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # sync 00:21:19.186 22:37:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:19.186 22:37:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@120 -- # set +e 00:21:19.186 22:37:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:19.186 22:37:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:19.186 rmmod nvme_tcp 00:21:19.187 rmmod nvme_fabrics 00:21:19.187 rmmod nvme_keyring 00:21:19.187 22:37:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:19.187 22:37:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set -e 00:21:19.187 22:37:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # return 0 00:21:19.187 22:37:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:21:19.187 22:37:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:19.187 22:37:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:19.187 22:37:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:19.187 22:37:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:19.187 22:37:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:19.187 22:37:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:19.187 22:37:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:19.187 22:37:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:21.093 22:37:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:21.093 00:21:21.093 real 0m7.721s 00:21:21.093 user 0m18.771s 00:21:21.093 sys 0m1.287s 00:21:21.093 22:37:44 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:21.093 22:37:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:21.093 ************************************ 00:21:21.093 END TEST nvmf_shutdown_tc3 00:21:21.093 ************************************ 00:21:21.093 22:37:44 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1142 -- # return 0 00:21:21.093 22:37:44 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@151 -- # trap - SIGINT SIGTERM EXIT 00:21:21.093 00:21:21.093 real 0m30.928s 00:21:21.093 user 1m18.690s 00:21:21.093 sys 0m7.941s 00:21:21.093 22:37:44 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:21.093 22:37:44 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:21:21.093 ************************************ 00:21:21.093 END TEST nvmf_shutdown 00:21:21.093 ************************************ 00:21:21.093 22:37:44 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:21:21.093 22:37:44 nvmf_tcp -- nvmf/nvmf.sh@86 -- # timing_exit target 00:21:21.093 22:37:44 nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:21.093 22:37:44 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:21.093 22:37:45 nvmf_tcp -- nvmf/nvmf.sh@88 -- # timing_enter host 00:21:21.093 22:37:45 nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:21.093 22:37:45 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:21.093 22:37:45 nvmf_tcp -- nvmf/nvmf.sh@90 -- # [[ 0 -eq 0 ]] 00:21:21.093 22:37:45 nvmf_tcp -- nvmf/nvmf.sh@91 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:21:21.093 22:37:45 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:21:21.093 22:37:45 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:21.093 22:37:45 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:21.093 ************************************ 00:21:21.093 START TEST nvmf_multicontroller 00:21:21.093 ************************************ 00:21:21.093 22:37:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:21:21.353 * Looking for test storage... 
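Two details of the shutdown teardown above are deliberate: kill -9 66611 targets a bdevperf process that has usually already exited (hence "No such process" followed by + true), and nvme-tcp is removed inside a bounded retry loop because the module can stay busy for a moment after the qpairs close. The same tolerant-cleanup pattern in isolation (illustrative sketch, not the suite's own helper; $pid stands for whatever process the caller tracked):

    # Neither a missing pid nor a briefly busy module should abort cleanup.
    kill -9 "$pid" 2>/dev/null || true
    for i in {1..20}; do
        modprobe -v -r nvme-tcp && break
        sleep 0.5
    done
    modprobe -v -r nvme-fabrics || true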
00:21:21.353 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:21.353 22:37:45 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:21.353 22:37:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:21:21.353 22:37:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:21.353 22:37:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:21.353 22:37:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:21.353 22:37:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:21.353 22:37:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:21.353 22:37:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:21.353 22:37:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:21.353 22:37:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:21.353 22:37:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:21.353 22:37:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:21.353 22:37:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:21.353 22:37:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:21:21.353 22:37:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:21.353 22:37:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:21.353 22:37:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:21.353 22:37:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:21.353 22:37:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:21.353 22:37:45 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:21.353 22:37:45 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:21.353 22:37:45 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:21.353 22:37:45 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:21.353 22:37:45 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:21.353 22:37:45 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:21.353 22:37:45 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:21:21.353 22:37:45 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:21.353 22:37:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@47 -- # : 0 00:21:21.353 22:37:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:21.353 22:37:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:21.353 22:37:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:21.353 22:37:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:21.353 22:37:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:21.353 22:37:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:21.353 22:37:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:21.353 22:37:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:21.353 22:37:45 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:21.353 22:37:45 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:21.353 22:37:45 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:21:21.353 22:37:45 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:21:21.353 22:37:45 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:21.353 22:37:45 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:21:21.353 22:37:45 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:21:21.353 22:37:45 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:21.353 22:37:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:21.353 22:37:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:21.353 22:37:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:21.353 22:37:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:21.353 22:37:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:21.353 22:37:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:21.353 22:37:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:21.353 22:37:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:21.353 22:37:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:21.353 22:37:45 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@285 -- # xtrace_disable 00:21:21.353 22:37:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:26.624 22:37:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:26.624 22:37:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@291 -- # pci_devs=() 00:21:26.624 22:37:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:26.624 22:37:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:26.624 22:37:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:26.624 22:37:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:26.624 22:37:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:26.624 22:37:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@295 -- # net_devs=() 00:21:26.624 22:37:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:26.624 22:37:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@296 -- # e810=() 00:21:26.624 22:37:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@296 -- # local -ga e810 00:21:26.624 22:37:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@297 -- # x722=() 00:21:26.624 22:37:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@297 -- # local -ga x722 00:21:26.624 22:37:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@298 -- # mlx=() 00:21:26.624 22:37:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@298 -- # local -ga mlx 00:21:26.624 22:37:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:26.624 22:37:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:26.624 22:37:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:26.624 22:37:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:26.624 22:37:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:26.624 22:37:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:26.624 22:37:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:26.624 22:37:50 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:26.624 22:37:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:26.624 22:37:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:26.624 22:37:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:26.624 22:37:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:26.624 22:37:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:26.624 22:37:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:26.624 22:37:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:26.624 22:37:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:26.624 22:37:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:26.624 22:37:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:26.624 22:37:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:21:26.624 Found 0000:86:00.0 (0x8086 - 0x159b) 00:21:26.624 22:37:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:26.624 22:37:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:26.624 22:37:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:26.624 22:37:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:26.624 22:37:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:26.624 22:37:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:26.624 22:37:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:21:26.624 Found 0000:86:00.1 (0x8086 - 0x159b) 00:21:26.624 22:37:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:26.624 22:37:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:26.624 22:37:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:26.624 22:37:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:26.624 22:37:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:26.624 22:37:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:26.624 22:37:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:26.624 22:37:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:26.624 22:37:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:26.624 22:37:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:26.624 22:37:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:26.624 22:37:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:26.624 22:37:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:26.624 22:37:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@394 -- # 
(( 1 == 0 )) 00:21:26.624 22:37:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:26.624 22:37:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:21:26.624 Found net devices under 0000:86:00.0: cvl_0_0 00:21:26.624 22:37:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:26.624 22:37:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:26.624 22:37:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:26.624 22:37:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:26.624 22:37:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:26.624 22:37:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:26.624 22:37:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:26.624 22:37:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:26.624 22:37:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:21:26.624 Found net devices under 0000:86:00.1: cvl_0_1 00:21:26.624 22:37:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:26.624 22:37:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:26.624 22:37:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # is_hw=yes 00:21:26.624 22:37:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:26.624 22:37:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:26.624 22:37:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:26.624 22:37:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:26.624 22:37:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:26.624 22:37:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:26.624 22:37:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:26.624 22:37:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:26.624 22:37:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:26.624 22:37:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:26.624 22:37:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:26.624 22:37:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:26.624 22:37:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:26.624 22:37:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:26.624 22:37:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:26.624 22:37:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:26.624 22:37:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:26.624 22:37:50 
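The records above and immediately below come from nvmf_tcp_init in nvmf/common.sh: both e810 net devices are flushed, cvl_0_0 is moved into a fresh network namespace to serve as the target side, cvl_0_1 stays in the root namespace as the initiator, the 10.0.0.0/24 addresses are split across them, and connectivity is proven with one ping in each direction. A consolidated sketch of that sequence, using the interface names, addresses, and iptables rule taken from the surrounding records (illustrative only, not the harness's own function):

#!/usr/bin/env bash
# Sketch: isolate one port of a two-port NIC in a network namespace for
# NVMe/TCP testing, as the nvmf_tcp_init records around this point do.
set -e
TARGET_IF=cvl_0_0       # moved into the namespace; carries the target address
INITIATOR_IF=cvl_0_1    # stays in the root namespace; carries the initiator address
NS=cvl_0_0_ns_spdk

ip -4 addr flush dev "$TARGET_IF"
ip -4 addr flush dev "$INITIATOR_IF"
ip netns add "$NS"
ip link set "$TARGET_IF" netns "$NS"             # the port vanishes from the root ns
ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
ip link set "$INITIATOR_IF" up
ip netns exec "$NS" ip link set "$TARGET_IF" up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                               # root ns -> target namespace
ip netns exec "$NS" ping -c 1 10.0.0.1           # target namespace -> root ns

Forcing the two ports into separate namespaces makes the kernel route traffic over the physical link instead of short-circuiting it through loopback, which is why the harness insists on finding two net devices before it picks a target interface.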
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:26.624 22:37:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:26.624 22:37:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:26.624 22:37:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:26.624 22:37:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:26.624 22:37:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:26.624 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:26.624 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.177 ms 00:21:26.624 00:21:26.624 --- 10.0.0.2 ping statistics --- 00:21:26.624 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:26.624 rtt min/avg/max/mdev = 0.177/0.177/0.177/0.000 ms 00:21:26.624 22:37:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:26.624 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:26.624 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.257 ms 00:21:26.624 00:21:26.624 --- 10.0.0.1 ping statistics --- 00:21:26.624 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:26.624 rtt min/avg/max/mdev = 0.257/0.257/0.257/0.000 ms 00:21:26.624 22:37:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:26.624 22:37:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@422 -- # return 0 00:21:26.624 22:37:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:26.624 22:37:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:26.624 22:37:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:26.624 22:37:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:26.624 22:37:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:26.624 22:37:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:26.624 22:37:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:26.625 22:37:50 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:21:26.625 22:37:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:26.625 22:37:50 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:26.625 22:37:50 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:26.625 22:37:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:21:26.625 22:37:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@481 -- # nvmfpid=70670 00:21:26.625 22:37:50 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@482 -- # waitforlisten 70670 00:21:26.625 22:37:50 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@829 -- # '[' -z 70670 ']' 00:21:26.625 22:37:50 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:26.625 22:37:50 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@834 
-- # local max_retries=100 00:21:26.625 22:37:50 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:26.625 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:26.625 22:37:50 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:26.625 22:37:50 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:26.625 [2024-07-15 22:37:50.426575] Starting SPDK v24.09-pre git sha1 f8598a71f / DPDK 24.03.0 initialization... 00:21:26.625 [2024-07-15 22:37:50.426615] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:26.625 EAL: No free 2048 kB hugepages reported on node 1 00:21:26.625 [2024-07-15 22:37:50.482766] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:21:26.625 [2024-07-15 22:37:50.561221] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:26.625 [2024-07-15 22:37:50.561264] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:26.625 [2024-07-15 22:37:50.561270] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:26.625 [2024-07-15 22:37:50.561276] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:26.625 [2024-07-15 22:37:50.561281] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:26.625 [2024-07-15 22:37:50.561388] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:26.625 [2024-07-15 22:37:50.561451] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:21:26.625 [2024-07-15 22:37:50.561453] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:27.581 22:37:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:27.581 22:37:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@862 -- # return 0 00:21:27.581 22:37:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:27.581 22:37:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:27.581 22:37:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:27.581 22:37:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:27.581 22:37:51 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:27.581 22:37:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:27.581 22:37:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:27.581 [2024-07-15 22:37:51.297985] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:27.581 22:37:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:27.581 22:37:51 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:21:27.581 22:37:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:27.581 22:37:51 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:27.581 Malloc0 00:21:27.581 22:37:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:27.581 22:37:51 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:27.581 22:37:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:27.581 22:37:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:27.581 22:37:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:27.581 22:37:51 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:27.581 22:37:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:27.581 22:37:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:27.581 22:37:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:27.581 22:37:51 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:27.581 22:37:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:27.581 22:37:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:27.581 [2024-07-15 22:37:51.363520] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:27.581 22:37:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:27.581 22:37:51 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:21:27.581 22:37:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:27.581 22:37:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:27.582 [2024-07-15 22:37:51.371458] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:21:27.582 22:37:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:27.582 22:37:51 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:21:27.582 22:37:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:27.582 22:37:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:27.582 Malloc1 00:21:27.582 22:37:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:27.582 22:37:51 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:21:27.582 22:37:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:27.582 22:37:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:27.582 22:37:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:27.582 22:37:51 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:21:27.582 22:37:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 
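Once the target is up, the rpc_cmd records above provision it over /var/tmp/spdk.sock: a TCP transport, a malloc bdev, a subsystem carrying that bdev as a namespace, and listeners on two ports so the multicontroller test has two paths to one subsystem (the same pattern repeats for cnode2 in the records just below). A sketch of the equivalent direct scripts/rpc.py calls; rpc_cmd is a thin wrapper around this script, and RPC here is an assumed variable, not a harness name:

RPC="scripts/rpc.py"   # run from the SPDK repo root; adjust as needed
$RPC nvmf_create_transport -t tcp -o -u 8192       # transport options copied verbatim from the run above
$RPC bdev_malloc_create 64 512 -b Malloc0          # 64 MB RAM-backed bdev, 512 B blocks
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001   # -a: allow any host
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
# Two listeners on the same subsystem expose two network paths to one namespace:
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421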
00:21:27.582 22:37:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:27.582 22:37:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:27.582 22:37:51 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:21:27.582 22:37:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:27.582 22:37:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:27.582 22:37:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:27.582 22:37:51 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:21:27.582 22:37:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:27.582 22:37:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:27.582 22:37:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:27.582 22:37:51 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=70896 00:21:27.582 22:37:51 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:21:27.582 22:37:51 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:27.582 22:37:51 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 70896 /var/tmp/bdevperf.sock 00:21:27.582 22:37:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@829 -- # '[' -z 70896 ']' 00:21:27.582 22:37:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:27.582 22:37:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:27.582 22:37:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:27.582 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
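The bdevperf launch above uses -z, which starts the app with no bdev configuration and leaves it waiting on its own JSON-RPC socket (/var/tmp/bdevperf.sock); the harness then polls for that socket before attaching controllers and triggering the run. A sketch of the same pattern using the paths and flags shown in the log, with $SPDK standing in for the checkout root:

"$SPDK/build/examples/bdevperf" -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f &
# ...poll until /var/tmp/bdevperf.sock accepts RPCs (the harness's waitforlisten)...
"$SPDK/scripts/rpc.py" -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
    -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
    -i 10.0.0.2 -c 60000    # -i/-c pin the host address and port, as in the records below
"$SPDK/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bdevperf.sock perform_tests

Pinning the host side with -i/-c is what lets the test below collide deliberately: re-attaching a controller named NVMe0 over the same network path is expected to fail with -114 (including with multipath set to disable or failover), while attaching the same subsystem over port 4421 succeeds as a second path.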
00:21:27.582 22:37:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:27.582 22:37:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:28.519 22:37:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:28.519 22:37:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@862 -- # return 0 00:21:28.519 22:37:52 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:21:28.519 22:37:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:28.519 22:37:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:28.779 NVMe0n1 00:21:28.779 22:37:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:28.779 22:37:52 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:28.779 22:37:52 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:21:28.779 22:37:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:28.779 22:37:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:28.779 22:37:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:28.779 1 00:21:28.779 22:37:52 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:21:28.779 22:37:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:21:28.779 22:37:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:21:28.779 22:37:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:21:28.779 22:37:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:28.779 22:37:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:21:28.779 22:37:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:28.779 22:37:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:21:28.779 22:37:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:28.779 22:37:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:28.779 request: 00:21:28.779 { 00:21:28.779 "name": "NVMe0", 00:21:28.780 "trtype": "tcp", 00:21:28.780 "traddr": "10.0.0.2", 00:21:28.780 "adrfam": "ipv4", 00:21:28.780 "trsvcid": "4420", 00:21:28.780 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:28.780 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:21:28.780 "hostaddr": "10.0.0.2", 00:21:28.780 "hostsvcid": "60000", 00:21:28.780 "prchk_reftag": false, 
00:21:28.780 "prchk_guard": false, 00:21:28.780 "hdgst": false, 00:21:28.780 "ddgst": false, 00:21:28.780 "method": "bdev_nvme_attach_controller", 00:21:28.780 "req_id": 1 00:21:28.780 } 00:21:28.780 Got JSON-RPC error response 00:21:28.780 response: 00:21:28.780 { 00:21:28.780 "code": -114, 00:21:28.780 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:21:28.780 } 00:21:28.780 22:37:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:21:28.780 22:37:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:21:28.780 22:37:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:28.780 22:37:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:28.780 22:37:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:28.780 22:37:52 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:21:28.780 22:37:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:21:28.780 22:37:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:21:28.780 22:37:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:21:28.780 22:37:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:28.780 22:37:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:21:28.780 22:37:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:28.780 22:37:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:21:28.780 22:37:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:28.780 22:37:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:28.780 request: 00:21:28.780 { 00:21:28.780 "name": "NVMe0", 00:21:28.780 "trtype": "tcp", 00:21:28.780 "traddr": "10.0.0.2", 00:21:28.780 "adrfam": "ipv4", 00:21:28.780 "trsvcid": "4420", 00:21:28.780 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:28.780 "hostaddr": "10.0.0.2", 00:21:28.780 "hostsvcid": "60000", 00:21:28.780 "prchk_reftag": false, 00:21:28.780 "prchk_guard": false, 00:21:28.780 "hdgst": false, 00:21:28.780 "ddgst": false, 00:21:28.780 "method": "bdev_nvme_attach_controller", 00:21:28.780 "req_id": 1 00:21:28.780 } 00:21:28.780 Got JSON-RPC error response 00:21:28.780 response: 00:21:28.780 { 00:21:28.780 "code": -114, 00:21:28.780 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:21:28.780 } 00:21:28.780 22:37:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:21:28.780 22:37:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:21:28.780 22:37:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:28.780 22:37:52 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:28.780 22:37:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:28.780 22:37:52 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:21:28.780 22:37:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:21:28.780 22:37:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:21:28.780 22:37:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:21:28.780 22:37:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:28.780 22:37:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:21:28.780 22:37:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:28.780 22:37:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:21:28.780 22:37:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:28.780 22:37:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:28.780 request: 00:21:28.780 { 00:21:28.780 "name": "NVMe0", 00:21:28.780 "trtype": "tcp", 00:21:28.780 "traddr": "10.0.0.2", 00:21:28.780 "adrfam": "ipv4", 00:21:28.780 "trsvcid": "4420", 00:21:28.780 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:28.780 "hostaddr": "10.0.0.2", 00:21:28.780 "hostsvcid": "60000", 00:21:28.780 "prchk_reftag": false, 00:21:28.780 "prchk_guard": false, 00:21:28.780 "hdgst": false, 00:21:28.780 "ddgst": false, 00:21:28.780 "multipath": "disable", 00:21:28.780 "method": "bdev_nvme_attach_controller", 00:21:28.780 "req_id": 1 00:21:28.780 } 00:21:28.780 Got JSON-RPC error response 00:21:28.780 response: 00:21:28.780 { 00:21:28.780 "code": -114, 00:21:28.780 "message": "A controller named NVMe0 already exists and multipath is disabled\n" 00:21:28.780 } 00:21:28.780 22:37:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:21:28.780 22:37:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:21:28.780 22:37:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:28.780 22:37:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:28.780 22:37:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:28.780 22:37:52 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:21:28.780 22:37:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:21:28.780 22:37:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 
-s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:21:28.780 22:37:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:21:28.780 22:37:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:28.780 22:37:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:21:28.780 22:37:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:28.780 22:37:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:21:28.780 22:37:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:28.780 22:37:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:28.780 request: 00:21:28.780 { 00:21:28.780 "name": "NVMe0", 00:21:28.780 "trtype": "tcp", 00:21:28.780 "traddr": "10.0.0.2", 00:21:28.780 "adrfam": "ipv4", 00:21:28.780 "trsvcid": "4420", 00:21:28.780 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:28.780 "hostaddr": "10.0.0.2", 00:21:28.780 "hostsvcid": "60000", 00:21:28.780 "prchk_reftag": false, 00:21:28.780 "prchk_guard": false, 00:21:28.780 "hdgst": false, 00:21:28.780 "ddgst": false, 00:21:28.780 "multipath": "failover", 00:21:28.780 "method": "bdev_nvme_attach_controller", 00:21:28.780 "req_id": 1 00:21:28.780 } 00:21:28.780 Got JSON-RPC error response 00:21:28.780 response: 00:21:28.780 { 00:21:28.780 "code": -114, 00:21:28.780 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:21:28.780 } 00:21:28.780 22:37:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:21:28.780 22:37:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:21:28.780 22:37:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:28.780 22:37:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:28.780 22:37:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:28.780 22:37:52 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:28.780 22:37:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:28.780 22:37:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:29.040 00:21:29.040 22:37:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:29.040 22:37:52 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:29.040 22:37:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:29.040 22:37:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:29.040 22:37:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:29.040 22:37:52 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f 
ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:21:29.040 22:37:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:29.040 22:37:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:29.040 00:21:29.040 22:37:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:29.040 22:37:52 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:29.040 22:37:52 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:21:29.040 22:37:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:29.040 22:37:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:29.040 22:37:52 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:29.040 22:37:53 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:21:29.040 22:37:53 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:30.416 0 00:21:30.416 22:37:54 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:21:30.416 22:37:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:30.416 22:37:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:30.416 22:37:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:30.416 22:37:54 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@100 -- # killprocess 70896 00:21:30.416 22:37:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@948 -- # '[' -z 70896 ']' 00:21:30.416 22:37:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # kill -0 70896 00:21:30.416 22:37:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # uname 00:21:30.416 22:37:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:30.417 22:37:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 70896 00:21:30.417 22:37:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:30.417 22:37:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:30.417 22:37:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@966 -- # echo 'killing process with pid 70896' 00:21:30.417 killing process with pid 70896 00:21:30.417 22:37:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@967 -- # kill 70896 00:21:30.417 22:37:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@972 -- # wait 70896 00:21:30.417 22:37:54 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:30.417 22:37:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:30.417 22:37:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:30.417 22:37:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:30.417 22:37:54 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem 
nqn.2016-06.io.spdk:cnode2 00:21:30.417 22:37:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:30.417 22:37:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:30.417 22:37:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:30.417 22:37:54 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 00:21:30.417 22:37:54 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@107 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:21:30.417 22:37:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1612 -- # read -r file 00:21:30.417 22:37:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:21:30.417 22:37:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # sort -u 00:21:30.676 22:37:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1613 -- # cat 00:21:30.676 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:21:30.677 [2024-07-15 22:37:51.472734] Starting SPDK v24.09-pre git sha1 f8598a71f / DPDK 24.03.0 initialization... 00:21:30.677 [2024-07-15 22:37:51.472784] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70896 ] 00:21:30.677 EAL: No free 2048 kB hugepages reported on node 1 00:21:30.677 [2024-07-15 22:37:51.526512] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:30.677 [2024-07-15 22:37:51.607409] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:30.677 [2024-07-15 22:37:52.981310] bdev.c:4613:bdev_name_add: *ERROR*: Bdev name 880fc549-b5a7-4e4c-b24f-6876d3181ef4 already exists 00:21:30.677 [2024-07-15 22:37:52.981340] bdev.c:7722:bdev_register: *ERROR*: Unable to add uuid:880fc549-b5a7-4e4c-b24f-6876d3181ef4 alias for bdev NVMe1n1 00:21:30.677 [2024-07-15 22:37:52.981348] bdev_nvme.c:4317:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:21:30.677 Running I/O for 1 seconds... 
00:21:30.677 00:21:30.677 Latency(us) 00:21:30.677 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:30.677 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:21:30.677 NVMe0n1 : 1.00 24307.04 94.95 0.00 0.00 5254.22 2991.86 10770.70 00:21:30.677 =================================================================================================================== 00:21:30.677 Total : 24307.04 94.95 0.00 0.00 5254.22 2991.86 10770.70 00:21:30.677 Received shutdown signal, test time was about 1.000000 seconds 00:21:30.677 00:21:30.677 Latency(us) 00:21:30.677 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:30.677 =================================================================================================================== 00:21:30.677 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:30.677 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:21:30.677 22:37:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1618 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:21:30.677 22:37:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1612 -- # read -r file 00:21:30.677 22:37:54 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@108 -- # nvmftestfini 00:21:30.677 22:37:54 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:30.677 22:37:54 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@117 -- # sync 00:21:30.677 22:37:54 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:30.677 22:37:54 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@120 -- # set +e 00:21:30.677 22:37:54 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:30.677 22:37:54 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:30.677 rmmod nvme_tcp 00:21:30.677 rmmod nvme_fabrics 00:21:30.677 rmmod nvme_keyring 00:21:30.677 22:37:54 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:30.677 22:37:54 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@124 -- # set -e 00:21:30.677 22:37:54 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@125 -- # return 0 00:21:30.677 22:37:54 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@489 -- # '[' -n 70670 ']' 00:21:30.677 22:37:54 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@490 -- # killprocess 70670 00:21:30.677 22:37:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@948 -- # '[' -z 70670 ']' 00:21:30.677 22:37:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # kill -0 70670 00:21:30.677 22:37:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # uname 00:21:30.677 22:37:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:30.677 22:37:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 70670 00:21:30.677 22:37:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:21:30.677 22:37:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:21:30.677 22:37:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@966 -- # echo 'killing process with pid 70670' 00:21:30.677 killing process with pid 70670 00:21:30.677 22:37:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@967 -- # kill 70670 00:21:30.677 22:37:54 nvmf_tcp.nvmf_multicontroller 
-- common/autotest_common.sh@972 -- # wait 70670 00:21:30.936 22:37:54 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:30.936 22:37:54 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:30.937 22:37:54 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:30.937 22:37:54 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:30.937 22:37:54 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:30.937 22:37:54 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:30.937 22:37:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:30.937 22:37:54 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:32.841 22:37:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:32.841 00:21:32.841 real 0m11.726s 00:21:32.841 user 0m17.128s 00:21:32.841 sys 0m4.631s 00:21:32.841 22:37:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:32.841 22:37:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:32.842 ************************************ 00:21:32.842 END TEST nvmf_multicontroller 00:21:32.842 ************************************ 00:21:33.102 22:37:56 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:21:33.102 22:37:56 nvmf_tcp -- nvmf/nvmf.sh@92 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:21:33.102 22:37:56 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:21:33.102 22:37:56 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:33.102 22:37:56 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:33.102 ************************************ 00:21:33.102 START TEST nvmf_aer 00:21:33.102 ************************************ 00:21:33.102 22:37:56 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:21:33.102 * Looking for test storage... 
00:21:33.102 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:33.102 22:37:56 nvmf_tcp.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:33.102 22:37:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:21:33.102 22:37:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:33.102 22:37:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:33.102 22:37:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:33.102 22:37:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:33.102 22:37:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:33.102 22:37:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:33.102 22:37:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:33.102 22:37:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:33.102 22:37:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:33.102 22:37:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:33.102 22:37:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:33.102 22:37:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:21:33.102 22:37:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:33.102 22:37:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:33.102 22:37:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:33.102 22:37:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:33.102 22:37:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:33.102 22:37:56 nvmf_tcp.nvmf_aer -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:33.102 22:37:56 nvmf_tcp.nvmf_aer -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:33.102 22:37:56 nvmf_tcp.nvmf_aer -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:33.102 22:37:56 nvmf_tcp.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:33.102 22:37:56 nvmf_tcp.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:33.102 22:37:56 nvmf_tcp.nvmf_aer -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:33.102 22:37:56 nvmf_tcp.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:21:33.102 22:37:56 nvmf_tcp.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:33.102 22:37:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@47 -- # : 0 00:21:33.102 22:37:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:33.102 22:37:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:33.102 22:37:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:33.102 22:37:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:33.102 22:37:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:33.102 22:37:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:33.102 22:37:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:33.102 22:37:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:33.102 22:37:56 nvmf_tcp.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:21:33.102 22:37:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:33.102 22:37:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:33.102 22:37:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:33.102 22:37:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:33.102 22:37:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:33.102 22:37:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:33.102 22:37:56 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:33.102 22:37:56 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:33.102 22:37:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:33.102 22:37:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:33.102 22:37:56 nvmf_tcp.nvmf_aer -- nvmf/common.sh@285 -- # xtrace_disable 00:21:33.102 22:37:56 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:38.431 22:38:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:38.431 22:38:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@291 -- # pci_devs=() 00:21:38.431 22:38:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:38.431 22:38:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@292 -- # 
pci_net_devs=() 00:21:38.431 22:38:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:38.431 22:38:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:38.431 22:38:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:38.431 22:38:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@295 -- # net_devs=() 00:21:38.431 22:38:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:38.431 22:38:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@296 -- # e810=() 00:21:38.431 22:38:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@296 -- # local -ga e810 00:21:38.431 22:38:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@297 -- # x722=() 00:21:38.431 22:38:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@297 -- # local -ga x722 00:21:38.431 22:38:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@298 -- # mlx=() 00:21:38.431 22:38:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@298 -- # local -ga mlx 00:21:38.432 22:38:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:38.432 22:38:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:38.432 22:38:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:38.432 22:38:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:38.432 22:38:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:38.432 22:38:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:38.432 22:38:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:38.432 22:38:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:38.432 22:38:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:38.432 22:38:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:38.432 22:38:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:38.432 22:38:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:38.432 22:38:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:38.432 22:38:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:38.432 22:38:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:38.432 22:38:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:38.432 22:38:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:38.432 22:38:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:38.432 22:38:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:21:38.432 Found 0000:86:00.0 (0x8086 - 0x159b) 00:21:38.432 22:38:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:38.432 22:38:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:38.432 22:38:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:38.432 22:38:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:38.432 22:38:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:38.432 22:38:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:38.432 22:38:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 
0x159b)' 00:21:38.432 Found 0000:86:00.1 (0x8086 - 0x159b) 00:21:38.432 22:38:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:38.432 22:38:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:38.432 22:38:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:38.432 22:38:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:38.432 22:38:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:38.432 22:38:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:38.432 22:38:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:38.432 22:38:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:38.432 22:38:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:38.432 22:38:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:38.432 22:38:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:38.432 22:38:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:38.432 22:38:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:38.432 22:38:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:38.432 22:38:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:38.432 22:38:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:21:38.432 Found net devices under 0000:86:00.0: cvl_0_0 00:21:38.432 22:38:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:38.432 22:38:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:38.432 22:38:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:38.432 22:38:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:38.432 22:38:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:38.432 22:38:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:38.432 22:38:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:38.432 22:38:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:38.432 22:38:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:21:38.432 Found net devices under 0000:86:00.1: cvl_0_1 00:21:38.432 22:38:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:38.432 22:38:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:38.432 22:38:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # is_hw=yes 00:21:38.432 22:38:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:38.432 22:38:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:38.432 22:38:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:38.432 22:38:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:38.432 22:38:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:38.432 22:38:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:38.432 22:38:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:38.432 22:38:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:38.432 
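The discovery records above walk every whitelisted PCI ID (here the two Intel E810 functions, 0x8086:0x159b), then resolve each function to its kernel net device through sysfs before choosing cvl_0_0 as the target interface and cvl_0_1 as the initiator. A reduced sketch of that sysfs mapping, with the bus addresses hard-coded from the log for illustration:

declare -a net_devs=()
for pci in 0000:86:00.0 0000:86:00.1; do            # the two e810 ports found above
    for path in "/sys/bus/pci/devices/$pci/net/"*; do
        [[ -e $path ]] || continue                  # function has no bound netdev: skip it
        net_devs+=("${path##*/}")                   # strip the directory, keep e.g. cvl_0_0
    done
done
(( ${#net_devs[@]} >= 2 )) || { echo 'need two ports' >&2; exit 1; }
echo "target=${net_devs[0]} initiator=${net_devs[1]}"   # matches cvl_0_0 / cvl_0_1 above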
22:38:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:38.432 22:38:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:38.432 22:38:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:38.432 22:38:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:38.432 22:38:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:38.432 22:38:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:38.432 22:38:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:38.432 22:38:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:38.432 22:38:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:38.432 22:38:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:38.432 22:38:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:38.432 22:38:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:38.432 22:38:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:38.432 22:38:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:38.432 22:38:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:38.432 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:38.432 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.151 ms 00:21:38.432 00:21:38.432 --- 10.0.0.2 ping statistics --- 00:21:38.432 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:38.432 rtt min/avg/max/mdev = 0.151/0.151/0.151/0.000 ms 00:21:38.432 22:38:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:38.432 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:38.432 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.140 ms 00:21:38.432 00:21:38.432 --- 10.0.0.1 ping statistics --- 00:21:38.432 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:38.432 rtt min/avg/max/mdev = 0.140/0.140/0.140/0.000 ms 00:21:38.432 22:38:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:38.432 22:38:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@422 -- # return 0 00:21:38.432 22:38:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:38.432 22:38:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:38.432 22:38:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:38.432 22:38:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:38.432 22:38:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:38.432 22:38:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:38.432 22:38:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:38.432 22:38:02 nvmf_tcp.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:21:38.432 22:38:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:38.432 22:38:02 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:38.432 22:38:02 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:38.432 22:38:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@481 -- # nvmfpid=74692 00:21:38.432 22:38:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:21:38.432 22:38:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@482 -- # waitforlisten 74692 00:21:38.432 22:38:02 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@829 -- # '[' -z 74692 ']' 00:21:38.432 22:38:02 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:38.432 22:38:02 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:38.432 22:38:02 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:38.432 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:38.432 22:38:02 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:38.432 22:38:02 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:38.432 [2024-07-15 22:38:02.346153] Starting SPDK v24.09-pre git sha1 f8598a71f / DPDK 24.03.0 initialization... 00:21:38.432 [2024-07-15 22:38:02.346194] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:38.432 EAL: No free 2048 kB hugepages reported on node 1 00:21:38.692 [2024-07-15 22:38:02.408156] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:38.692 [2024-07-15 22:38:02.482907] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:38.692 [2024-07-15 22:38:02.482950] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:21:38.692 [2024-07-15 22:38:02.482958] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:38.692 [2024-07-15 22:38:02.482964] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:38.692 [2024-07-15 22:38:02.482969] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:38.692 [2024-07-15 22:38:02.483015] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:38.692 [2024-07-15 22:38:02.483111] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:38.692 [2024-07-15 22:38:02.483197] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:21:38.692 [2024-07-15 22:38:02.483198] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:39.261 22:38:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:39.261 22:38:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@862 -- # return 0 00:21:39.262 22:38:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:39.262 22:38:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:39.262 22:38:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:39.262 22:38:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:39.262 22:38:03 nvmf_tcp.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:39.262 22:38:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:39.262 22:38:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:39.262 [2024-07-15 22:38:03.202154] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:39.262 22:38:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:39.262 22:38:03 nvmf_tcp.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:21:39.262 22:38:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:39.262 22:38:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:39.262 Malloc0 00:21:39.262 22:38:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:39.262 22:38:03 nvmf_tcp.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:21:39.262 22:38:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:39.262 22:38:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:39.520 22:38:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:39.520 22:38:03 nvmf_tcp.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:39.520 22:38:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:39.520 22:38:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:39.520 22:38:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:39.520 22:38:03 nvmf_tcp.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:39.520 22:38:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:39.520 22:38:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:39.520 [2024-07-15 22:38:03.254100] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target 
Listening on 10.0.0.2 port 4420 *** 00:21:39.520 22:38:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:39.520 22:38:03 nvmf_tcp.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:21:39.520 22:38:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:39.520 22:38:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:39.520 [ 00:21:39.520 { 00:21:39.520 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:21:39.520 "subtype": "Discovery", 00:21:39.520 "listen_addresses": [], 00:21:39.520 "allow_any_host": true, 00:21:39.520 "hosts": [] 00:21:39.520 }, 00:21:39.520 { 00:21:39.520 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:39.520 "subtype": "NVMe", 00:21:39.520 "listen_addresses": [ 00:21:39.520 { 00:21:39.520 "trtype": "TCP", 00:21:39.520 "adrfam": "IPv4", 00:21:39.520 "traddr": "10.0.0.2", 00:21:39.520 "trsvcid": "4420" 00:21:39.520 } 00:21:39.520 ], 00:21:39.520 "allow_any_host": true, 00:21:39.520 "hosts": [], 00:21:39.520 "serial_number": "SPDK00000000000001", 00:21:39.520 "model_number": "SPDK bdev Controller", 00:21:39.520 "max_namespaces": 2, 00:21:39.520 "min_cntlid": 1, 00:21:39.520 "max_cntlid": 65519, 00:21:39.520 "namespaces": [ 00:21:39.520 { 00:21:39.520 "nsid": 1, 00:21:39.520 "bdev_name": "Malloc0", 00:21:39.520 "name": "Malloc0", 00:21:39.520 "nguid": "CCBB47ACFF75481D868EA8FB5A49F8FD", 00:21:39.520 "uuid": "ccbb47ac-ff75-481d-868e-a8fb5a49f8fd" 00:21:39.520 } 00:21:39.520 ] 00:21:39.520 } 00:21:39.520 ] 00:21:39.520 22:38:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:39.520 22:38:03 nvmf_tcp.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:21:39.520 22:38:03 nvmf_tcp.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:21:39.520 22:38:03 nvmf_tcp.nvmf_aer -- host/aer.sh@33 -- # aerpid=74938 00:21:39.520 22:38:03 nvmf_tcp.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:21:39.520 22:38:03 nvmf_tcp.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:21:39.520 22:38:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1265 -- # local i=0 00:21:39.520 22:38:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:21:39.520 22:38:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 0 -lt 200 ']' 00:21:39.520 22:38:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1268 -- # i=1 00:21:39.520 22:38:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:21:39.520 EAL: No free 2048 kB hugepages reported on node 1 00:21:39.520 22:38:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:21:39.520 22:38:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 1 -lt 200 ']' 00:21:39.520 22:38:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1268 -- # i=2 00:21:39.520 22:38:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:21:39.520 22:38:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:21:39.520 22:38:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 2 -lt 200 ']' 00:21:39.520 22:38:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1268 -- # i=3 00:21:39.520 22:38:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:21:39.779 22:38:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:21:39.779 22:38:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1272 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:21:39.779 22:38:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1276 -- # return 0 00:21:39.779 22:38:03 nvmf_tcp.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:21:39.779 22:38:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:39.779 22:38:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:39.779 Malloc1 00:21:39.779 22:38:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:39.779 22:38:03 nvmf_tcp.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:21:39.779 22:38:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:39.779 22:38:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:39.779 22:38:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:39.779 22:38:03 nvmf_tcp.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:21:39.779 22:38:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:39.779 22:38:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:39.779 Asynchronous Event Request test 00:21:39.779 Attaching to 10.0.0.2 00:21:39.779 Attached to 10.0.0.2 00:21:39.779 Registering asynchronous event callbacks... 00:21:39.779 Starting namespace attribute notice tests for all controllers... 00:21:39.779 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:21:39.779 aer_cb - Changed Namespace 00:21:39.779 Cleaning up... 
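The waitforfile iterations traced above (autotest_common.sh@1265-@1276) are a bounded poll: the aer test binary touches /tmp/aer_touch_file once its event callbacks are registered, and the script sleeps in 0.1 s steps until the file exists or 200 tries (roughly 20 s) elapse. A sketch reconstructed from the trace, with the failure branch assumed since this passing run never takes it:

    waitforfile() {
        local i=0
        while [ ! -e "$1" ] && [ $i -lt 200 ]; do
            i=$((i + 1))
            sleep 0.1
        done
        [ -e "$1" ]   # assumed: non-zero status if the file never appeared
    }

Once the file appears, the script attaches Malloc1 as a second namespace (traced above), which triggers the namespace-attribute-changed AEN the binary logs; the subsystem dump that follows shows both namespaces attached to cnode1.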
00:21:39.779 [ 00:21:39.779 { 00:21:39.779 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:21:39.779 "subtype": "Discovery", 00:21:39.779 "listen_addresses": [], 00:21:39.779 "allow_any_host": true, 00:21:39.779 "hosts": [] 00:21:39.779 }, 00:21:39.779 { 00:21:39.779 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:39.779 "subtype": "NVMe", 00:21:39.779 "listen_addresses": [ 00:21:39.779 { 00:21:39.779 "trtype": "TCP", 00:21:39.779 "adrfam": "IPv4", 00:21:39.779 "traddr": "10.0.0.2", 00:21:39.779 "trsvcid": "4420" 00:21:39.779 } 00:21:39.779 ], 00:21:39.779 "allow_any_host": true, 00:21:39.779 "hosts": [], 00:21:39.779 "serial_number": "SPDK00000000000001", 00:21:39.779 "model_number": "SPDK bdev Controller", 00:21:39.779 "max_namespaces": 2, 00:21:39.779 "min_cntlid": 1, 00:21:39.779 "max_cntlid": 65519, 00:21:39.779 "namespaces": [ 00:21:39.779 { 00:21:39.779 "nsid": 1, 00:21:39.779 "bdev_name": "Malloc0", 00:21:39.779 "name": "Malloc0", 00:21:39.779 "nguid": "CCBB47ACFF75481D868EA8FB5A49F8FD", 00:21:39.779 "uuid": "ccbb47ac-ff75-481d-868e-a8fb5a49f8fd" 00:21:39.779 }, 00:21:39.779 { 00:21:39.779 "nsid": 2, 00:21:39.779 "bdev_name": "Malloc1", 00:21:39.779 "name": "Malloc1", 00:21:39.779 "nguid": "7520973B9217407D8F9B853D7DCFB8F4", 00:21:39.779 "uuid": "7520973b-9217-407d-8f9b-853d7dcfb8f4" 00:21:39.779 } 00:21:39.779 ] 00:21:39.779 } 00:21:39.779 ] 00:21:39.779 22:38:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:39.779 22:38:03 nvmf_tcp.nvmf_aer -- host/aer.sh@43 -- # wait 74938 00:21:39.779 22:38:03 nvmf_tcp.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:21:39.779 22:38:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:39.779 22:38:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:39.779 22:38:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:39.779 22:38:03 nvmf_tcp.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:21:39.779 22:38:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:39.779 22:38:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:39.779 22:38:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:39.779 22:38:03 nvmf_tcp.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:39.779 22:38:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:39.779 22:38:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:39.779 22:38:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:39.779 22:38:03 nvmf_tcp.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:21:39.779 22:38:03 nvmf_tcp.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:21:39.779 22:38:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:39.779 22:38:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@117 -- # sync 00:21:39.779 22:38:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:39.779 22:38:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@120 -- # set +e 00:21:39.779 22:38:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:39.779 22:38:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:39.779 rmmod nvme_tcp 00:21:40.038 rmmod nvme_fabrics 00:21:40.038 rmmod nvme_keyring 00:21:40.038 22:38:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:40.038 22:38:03 nvmf_tcp.nvmf_aer -- 
nvmf/common.sh@124 -- # set -e 00:21:40.038 22:38:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@125 -- # return 0 00:21:40.038 22:38:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@489 -- # '[' -n 74692 ']' 00:21:40.038 22:38:03 nvmf_tcp.nvmf_aer -- nvmf/common.sh@490 -- # killprocess 74692 00:21:40.038 22:38:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@948 -- # '[' -z 74692 ']' 00:21:40.038 22:38:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@952 -- # kill -0 74692 00:21:40.038 22:38:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@953 -- # uname 00:21:40.038 22:38:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:40.038 22:38:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 74692 00:21:40.038 22:38:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:40.038 22:38:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:40.038 22:38:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@966 -- # echo 'killing process with pid 74692' 00:21:40.038 killing process with pid 74692 00:21:40.038 22:38:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@967 -- # kill 74692 00:21:40.038 22:38:03 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@972 -- # wait 74692 00:21:40.298 22:38:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:40.298 22:38:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:40.298 22:38:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:40.298 22:38:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:40.298 22:38:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:40.298 22:38:04 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:40.298 22:38:04 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:40.298 22:38:04 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:42.203 22:38:06 nvmf_tcp.nvmf_aer -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:42.203 00:21:42.203 real 0m9.225s 00:21:42.203 user 0m7.580s 00:21:42.203 sys 0m4.397s 00:21:42.203 22:38:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:42.203 22:38:06 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:42.203 ************************************ 00:21:42.203 END TEST nvmf_aer 00:21:42.203 ************************************ 00:21:42.203 22:38:06 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:21:42.203 22:38:06 nvmf_tcp -- nvmf/nvmf.sh@93 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:21:42.203 22:38:06 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:21:42.203 22:38:06 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:42.203 22:38:06 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:42.203 ************************************ 00:21:42.203 START TEST nvmf_async_init 00:21:42.203 ************************************ 00:21:42.203 22:38:06 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:21:42.462 * Looking for test storage... 
00:21:42.462 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:42.462 22:38:06 nvmf_tcp.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:42.462 22:38:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:21:42.462 22:38:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:42.462 22:38:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:42.462 22:38:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:42.462 22:38:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:42.462 22:38:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:42.462 22:38:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:42.462 22:38:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:42.462 22:38:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:42.462 22:38:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:42.462 22:38:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:42.462 22:38:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:42.462 22:38:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:21:42.462 22:38:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:42.462 22:38:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:42.462 22:38:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:42.462 22:38:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:42.462 22:38:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:42.462 22:38:06 nvmf_tcp.nvmf_async_init -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:42.462 22:38:06 nvmf_tcp.nvmf_async_init -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:42.462 22:38:06 nvmf_tcp.nvmf_async_init -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:42.462 22:38:06 nvmf_tcp.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:42.463 22:38:06 nvmf_tcp.nvmf_async_init -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:42.463 22:38:06 nvmf_tcp.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:42.463 22:38:06 nvmf_tcp.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:21:42.463 22:38:06 nvmf_tcp.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:42.463 22:38:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@47 -- # : 0 00:21:42.463 22:38:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:42.463 22:38:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:42.463 22:38:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:42.463 22:38:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:42.463 22:38:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:42.463 22:38:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:42.463 22:38:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:42.463 22:38:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:42.463 22:38:06 nvmf_tcp.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:21:42.463 22:38:06 nvmf_tcp.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:21:42.463 22:38:06 nvmf_tcp.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:21:42.463 22:38:06 nvmf_tcp.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:21:42.463 22:38:06 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:21:42.463 22:38:06 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:21:42.463 22:38:06 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # nguid=85fe5c4a8f134f43aa0921cca497ee76 00:21:42.463 22:38:06 nvmf_tcp.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:21:42.463 22:38:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:42.463 22:38:06 
nvmf_tcp.nvmf_async_init -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:42.463 22:38:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:42.463 22:38:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:42.463 22:38:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:42.463 22:38:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:42.463 22:38:06 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:42.463 22:38:06 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:42.463 22:38:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:42.463 22:38:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:42.463 22:38:06 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@285 -- # xtrace_disable 00:21:42.463 22:38:06 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:47.739 22:38:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:47.739 22:38:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@291 -- # pci_devs=() 00:21:47.739 22:38:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:47.739 22:38:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:47.739 22:38:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:47.739 22:38:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:47.739 22:38:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:47.739 22:38:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@295 -- # net_devs=() 00:21:47.739 22:38:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:47.739 22:38:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@296 -- # e810=() 00:21:47.739 22:38:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@296 -- # local -ga e810 00:21:47.739 22:38:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@297 -- # x722=() 00:21:47.739 22:38:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@297 -- # local -ga x722 00:21:47.739 22:38:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@298 -- # mlx=() 00:21:47.739 22:38:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@298 -- # local -ga mlx 00:21:47.739 22:38:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:47.739 22:38:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:47.739 22:38:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:47.739 22:38:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:47.739 22:38:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:47.739 22:38:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:47.739 22:38:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:47.739 22:38:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:47.739 22:38:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:47.739 22:38:11 nvmf_tcp.nvmf_async_init -- 
nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:47.739 22:38:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:47.739 22:38:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:47.739 22:38:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:47.739 22:38:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:47.739 22:38:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:47.739 22:38:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:47.739 22:38:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:47.739 22:38:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:47.739 22:38:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:21:47.739 Found 0000:86:00.0 (0x8086 - 0x159b) 00:21:47.739 22:38:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:47.739 22:38:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:47.739 22:38:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:47.739 22:38:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:47.739 22:38:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:47.739 22:38:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:47.739 22:38:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:21:47.739 Found 0000:86:00.1 (0x8086 - 0x159b) 00:21:47.739 22:38:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:47.739 22:38:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:47.739 22:38:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:47.739 22:38:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:47.739 22:38:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:47.739 22:38:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:47.739 22:38:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:47.739 22:38:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:47.739 22:38:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:47.739 22:38:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:47.739 22:38:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:47.739 22:38:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:47.739 22:38:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:47.739 22:38:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:47.739 22:38:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:47.739 22:38:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:21:47.739 Found net devices under 0000:86:00.0: cvl_0_0 00:21:47.739 22:38:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 
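The probe being replayed here (nvmf/common.sh@382-@401, once per E810 function) maps each supported PCI device to its kernel netdev through sysfs. A condensed sketch with comments added; the per-candidate operstate ("up") check is simplified away:

    for pci in "${pci_devs[@]}"; do
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # netdevs bound to this PCI function
        pci_net_devs=("${pci_net_devs[@]##*/}")            # keep only the interface names
        echo "Found net devices under $pci: ${pci_net_devs[*]}"
        net_devs+=("${pci_net_devs[@]}")
    done

With both ports discovered, the (( 2 > 1 )) branch further down splits them into target (cvl_0_0) and initiator (cvl_0_1) interfaces, exactly as in the nvmf_aer run earlier.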
00:21:47.739 22:38:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:47.739 22:38:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:47.739 22:38:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:47.739 22:38:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:47.739 22:38:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:47.739 22:38:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:47.739 22:38:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:47.739 22:38:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:21:47.739 Found net devices under 0000:86:00.1: cvl_0_1 00:21:47.739 22:38:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:47.739 22:38:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:47.739 22:38:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # is_hw=yes 00:21:47.739 22:38:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:47.739 22:38:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:47.739 22:38:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:47.739 22:38:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:47.739 22:38:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:47.739 22:38:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:47.739 22:38:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:47.739 22:38:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:47.739 22:38:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:47.739 22:38:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:47.739 22:38:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:47.739 22:38:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:47.739 22:38:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:47.739 22:38:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:47.739 22:38:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:47.739 22:38:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:47.739 22:38:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:47.739 22:38:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:47.739 22:38:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:47.739 22:38:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:47.739 22:38:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:47.739 22:38:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j 
ACCEPT 00:21:47.739 22:38:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:47.739 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:47.739 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.179 ms 00:21:47.739 00:21:47.739 --- 10.0.0.2 ping statistics --- 00:21:47.739 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:47.739 rtt min/avg/max/mdev = 0.179/0.179/0.179/0.000 ms 00:21:47.739 22:38:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:47.739 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:47.739 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.261 ms 00:21:47.739 00:21:47.739 --- 10.0.0.1 ping statistics --- 00:21:47.739 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:47.739 rtt min/avg/max/mdev = 0.261/0.261/0.261/0.000 ms 00:21:47.739 22:38:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:47.739 22:38:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@422 -- # return 0 00:21:47.739 22:38:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:47.739 22:38:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:47.739 22:38:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:47.739 22:38:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:47.739 22:38:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:47.739 22:38:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:47.739 22:38:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:47.739 22:38:11 nvmf_tcp.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:21:47.739 22:38:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:47.739 22:38:11 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:47.739 22:38:11 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:47.739 22:38:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:21:47.739 22:38:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@481 -- # nvmfpid=78462 00:21:47.739 22:38:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@482 -- # waitforlisten 78462 00:21:47.739 22:38:11 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@829 -- # '[' -z 78462 ']' 00:21:47.739 22:38:11 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:47.739 22:38:11 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:47.740 22:38:11 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:47.740 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:47.740 22:38:11 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:47.740 22:38:11 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:47.740 [2024-07-15 22:38:11.666275] Starting SPDK v24.09-pre git sha1 f8598a71f / DPDK 24.03.0 initialization... 
00:21:47.740 [2024-07-15 22:38:11.666316] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:47.740 EAL: No free 2048 kB hugepages reported on node 1 00:21:47.997 [2024-07-15 22:38:11.722347] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:47.997 [2024-07-15 22:38:11.801064] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:47.997 [2024-07-15 22:38:11.801101] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:47.997 [2024-07-15 22:38:11.801109] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:47.997 [2024-07-15 22:38:11.801115] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:47.997 [2024-07-15 22:38:11.801124] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:47.997 [2024-07-15 22:38:11.801143] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:48.564 22:38:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:48.564 22:38:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@862 -- # return 0 00:21:48.564 22:38:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:48.564 22:38:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:48.564 22:38:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:48.564 22:38:12 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:48.564 22:38:12 nvmf_tcp.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:21:48.564 22:38:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:48.564 22:38:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:48.564 [2024-07-15 22:38:12.515799] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:48.564 22:38:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:48.564 22:38:12 nvmf_tcp.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:21:48.564 22:38:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:48.564 22:38:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:48.564 null0 00:21:48.564 22:38:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:48.564 22:38:12 nvmf_tcp.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:21:48.564 22:38:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:48.564 22:38:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:48.823 22:38:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:48.823 22:38:12 nvmf_tcp.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:21:48.823 22:38:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:48.823 22:38:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:48.823 22:38:12 
nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:48.823 22:38:12 nvmf_tcp.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 85fe5c4a8f134f43aa0921cca497ee76 00:21:48.823 22:38:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:48.823 22:38:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:48.823 22:38:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:48.823 22:38:12 nvmf_tcp.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:21:48.823 22:38:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:48.823 22:38:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:48.823 [2024-07-15 22:38:12.555982] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:48.823 22:38:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:48.823 22:38:12 nvmf_tcp.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:21:48.823 22:38:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:48.823 22:38:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:48.823 nvme0n1 00:21:48.823 22:38:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:48.823 22:38:12 nvmf_tcp.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:21:48.823 22:38:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:48.823 22:38:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:49.082 [ 00:21:49.082 { 00:21:49.082 "name": "nvme0n1", 00:21:49.082 "aliases": [ 00:21:49.082 "85fe5c4a-8f13-4f43-aa09-21cca497ee76" 00:21:49.082 ], 00:21:49.082 "product_name": "NVMe disk", 00:21:49.082 "block_size": 512, 00:21:49.082 "num_blocks": 2097152, 00:21:49.082 "uuid": "85fe5c4a-8f13-4f43-aa09-21cca497ee76", 00:21:49.082 "assigned_rate_limits": { 00:21:49.082 "rw_ios_per_sec": 0, 00:21:49.082 "rw_mbytes_per_sec": 0, 00:21:49.082 "r_mbytes_per_sec": 0, 00:21:49.082 "w_mbytes_per_sec": 0 00:21:49.082 }, 00:21:49.082 "claimed": false, 00:21:49.082 "zoned": false, 00:21:49.082 "supported_io_types": { 00:21:49.082 "read": true, 00:21:49.082 "write": true, 00:21:49.082 "unmap": false, 00:21:49.082 "flush": true, 00:21:49.082 "reset": true, 00:21:49.082 "nvme_admin": true, 00:21:49.082 "nvme_io": true, 00:21:49.082 "nvme_io_md": false, 00:21:49.082 "write_zeroes": true, 00:21:49.082 "zcopy": false, 00:21:49.082 "get_zone_info": false, 00:21:49.082 "zone_management": false, 00:21:49.082 "zone_append": false, 00:21:49.082 "compare": true, 00:21:49.082 "compare_and_write": true, 00:21:49.082 "abort": true, 00:21:49.082 "seek_hole": false, 00:21:49.082 "seek_data": false, 00:21:49.082 "copy": true, 00:21:49.082 "nvme_iov_md": false 00:21:49.082 }, 00:21:49.082 "memory_domains": [ 00:21:49.082 { 00:21:49.082 "dma_device_id": "system", 00:21:49.082 "dma_device_type": 1 00:21:49.082 } 00:21:49.082 ], 00:21:49.082 "driver_specific": { 00:21:49.082 "nvme": [ 00:21:49.082 { 00:21:49.082 "trid": { 00:21:49.082 "trtype": "TCP", 00:21:49.082 "adrfam": "IPv4", 00:21:49.082 "traddr": "10.0.0.2", 
00:21:49.082 "trsvcid": "4420", 00:21:49.082 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:21:49.082 }, 00:21:49.082 "ctrlr_data": { 00:21:49.082 "cntlid": 1, 00:21:49.082 "vendor_id": "0x8086", 00:21:49.082 "model_number": "SPDK bdev Controller", 00:21:49.082 "serial_number": "00000000000000000000", 00:21:49.082 "firmware_revision": "24.09", 00:21:49.082 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:49.082 "oacs": { 00:21:49.082 "security": 0, 00:21:49.082 "format": 0, 00:21:49.082 "firmware": 0, 00:21:49.082 "ns_manage": 0 00:21:49.082 }, 00:21:49.082 "multi_ctrlr": true, 00:21:49.082 "ana_reporting": false 00:21:49.082 }, 00:21:49.082 "vs": { 00:21:49.082 "nvme_version": "1.3" 00:21:49.082 }, 00:21:49.082 "ns_data": { 00:21:49.082 "id": 1, 00:21:49.082 "can_share": true 00:21:49.082 } 00:21:49.082 } 00:21:49.082 ], 00:21:49.082 "mp_policy": "active_passive" 00:21:49.082 } 00:21:49.082 } 00:21:49.082 ] 00:21:49.082 22:38:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:49.082 22:38:12 nvmf_tcp.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:21:49.082 22:38:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:49.082 22:38:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:49.082 [2024-07-15 22:38:12.812567] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:49.082 [2024-07-15 22:38:12.812620] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x855250 (9): Bad file descriptor 00:21:49.082 [2024-07-15 22:38:12.944303] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:21:49.082 22:38:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:49.082 22:38:12 nvmf_tcp.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:21:49.082 22:38:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:49.082 22:38:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:49.082 [ 00:21:49.082 { 00:21:49.082 "name": "nvme0n1", 00:21:49.082 "aliases": [ 00:21:49.082 "85fe5c4a-8f13-4f43-aa09-21cca497ee76" 00:21:49.082 ], 00:21:49.082 "product_name": "NVMe disk", 00:21:49.082 "block_size": 512, 00:21:49.082 "num_blocks": 2097152, 00:21:49.082 "uuid": "85fe5c4a-8f13-4f43-aa09-21cca497ee76", 00:21:49.082 "assigned_rate_limits": { 00:21:49.082 "rw_ios_per_sec": 0, 00:21:49.082 "rw_mbytes_per_sec": 0, 00:21:49.082 "r_mbytes_per_sec": 0, 00:21:49.082 "w_mbytes_per_sec": 0 00:21:49.082 }, 00:21:49.082 "claimed": false, 00:21:49.082 "zoned": false, 00:21:49.082 "supported_io_types": { 00:21:49.082 "read": true, 00:21:49.082 "write": true, 00:21:49.082 "unmap": false, 00:21:49.082 "flush": true, 00:21:49.082 "reset": true, 00:21:49.082 "nvme_admin": true, 00:21:49.082 "nvme_io": true, 00:21:49.082 "nvme_io_md": false, 00:21:49.082 "write_zeroes": true, 00:21:49.082 "zcopy": false, 00:21:49.082 "get_zone_info": false, 00:21:49.082 "zone_management": false, 00:21:49.082 "zone_append": false, 00:21:49.082 "compare": true, 00:21:49.082 "compare_and_write": true, 00:21:49.082 "abort": true, 00:21:49.082 "seek_hole": false, 00:21:49.082 "seek_data": false, 00:21:49.082 "copy": true, 00:21:49.082 "nvme_iov_md": false 00:21:49.082 }, 00:21:49.082 "memory_domains": [ 00:21:49.082 { 00:21:49.082 "dma_device_id": "system", 00:21:49.082 "dma_device_type": 1 
00:21:49.082 } 00:21:49.082 ], 00:21:49.082 "driver_specific": { 00:21:49.082 "nvme": [ 00:21:49.082 { 00:21:49.082 "trid": { 00:21:49.082 "trtype": "TCP", 00:21:49.082 "adrfam": "IPv4", 00:21:49.082 "traddr": "10.0.0.2", 00:21:49.082 "trsvcid": "4420", 00:21:49.082 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:21:49.082 }, 00:21:49.082 "ctrlr_data": { 00:21:49.082 "cntlid": 2, 00:21:49.082 "vendor_id": "0x8086", 00:21:49.082 "model_number": "SPDK bdev Controller", 00:21:49.082 "serial_number": "00000000000000000000", 00:21:49.082 "firmware_revision": "24.09", 00:21:49.082 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:49.082 "oacs": { 00:21:49.082 "security": 0, 00:21:49.082 "format": 0, 00:21:49.082 "firmware": 0, 00:21:49.082 "ns_manage": 0 00:21:49.082 }, 00:21:49.082 "multi_ctrlr": true, 00:21:49.082 "ana_reporting": false 00:21:49.082 }, 00:21:49.082 "vs": { 00:21:49.082 "nvme_version": "1.3" 00:21:49.082 }, 00:21:49.082 "ns_data": { 00:21:49.082 "id": 1, 00:21:49.082 "can_share": true 00:21:49.082 } 00:21:49.082 } 00:21:49.082 ], 00:21:49.082 "mp_policy": "active_passive" 00:21:49.082 } 00:21:49.082 } 00:21:49.082 ] 00:21:49.082 22:38:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:49.082 22:38:12 nvmf_tcp.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:49.082 22:38:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:49.082 22:38:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:49.082 22:38:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:49.082 22:38:12 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:21:49.082 22:38:12 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.ItYnNXc4oc 00:21:49.082 22:38:12 nvmf_tcp.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:21:49.082 22:38:12 nvmf_tcp.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.ItYnNXc4oc 00:21:49.082 22:38:12 nvmf_tcp.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:21:49.082 22:38:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:49.082 22:38:12 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:49.082 22:38:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:49.082 22:38:13 nvmf_tcp.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:21:49.082 22:38:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:49.082 22:38:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:49.082 [2024-07-15 22:38:13.005144] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:49.082 [2024-07-15 22:38:13.005244] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:21:49.082 22:38:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:49.082 22:38:13 nvmf_tcp.nvmf_async_init -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.ItYnNXc4oc 00:21:49.082 22:38:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 
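The TLS leg of async_init, traced around this point, exercises the then-experimental PSK path: write a pre-shared key file, drop allow-any-host in favour of an explicit host entry carrying the key, open a --secure-channel listener on a second port, and reattach the initiator through it. A condensed sketch of the calls visible in the trace (the key is the test's published sample, not a secret; rpc_cmd is the harness wrapper around SPDK's rpc.py):

    key_path=$(mktemp)                                   # /tmp/tmp.ItYnNXc4oc in this run
    echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: > "$key_path"
    chmod 0600 "$key_path"
    rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
        -t tcp -a 10.0.0.2 -s 4421 --secure-channel
    rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 \
        nqn.2016-06.io.spdk:host1 --psk "$key_path"
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 \
        -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk "$key_path"

The WARNING lines that follow flag both the listener's psk_path and the initiator's spdk_nvme_ctrlr_opts.psk as deprecated features to be removed in v24.09; the bdev dump after them shows the controller reattached on trsvcid 4421 with cntlid 3.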
00:21:49.082 22:38:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:49.082 [2024-07-15 22:38:13.013160] tcp.c:3693:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:21:49.082 22:38:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:49.082 22:38:13 nvmf_tcp.nvmf_async_init -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.ItYnNXc4oc 00:21:49.082 22:38:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:49.082 22:38:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:49.082 [2024-07-15 22:38:13.021195] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:49.082 [2024-07-15 22:38:13.021231] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:21:49.341 nvme0n1 00:21:49.341 22:38:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:49.341 22:38:13 nvmf_tcp.nvmf_async_init -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:21:49.341 22:38:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:49.341 22:38:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:49.341 [ 00:21:49.341 { 00:21:49.341 "name": "nvme0n1", 00:21:49.341 "aliases": [ 00:21:49.341 "85fe5c4a-8f13-4f43-aa09-21cca497ee76" 00:21:49.341 ], 00:21:49.341 "product_name": "NVMe disk", 00:21:49.341 "block_size": 512, 00:21:49.341 "num_blocks": 2097152, 00:21:49.341 "uuid": "85fe5c4a-8f13-4f43-aa09-21cca497ee76", 00:21:49.341 "assigned_rate_limits": { 00:21:49.341 "rw_ios_per_sec": 0, 00:21:49.341 "rw_mbytes_per_sec": 0, 00:21:49.341 "r_mbytes_per_sec": 0, 00:21:49.341 "w_mbytes_per_sec": 0 00:21:49.341 }, 00:21:49.341 "claimed": false, 00:21:49.341 "zoned": false, 00:21:49.341 "supported_io_types": { 00:21:49.341 "read": true, 00:21:49.341 "write": true, 00:21:49.341 "unmap": false, 00:21:49.341 "flush": true, 00:21:49.341 "reset": true, 00:21:49.341 "nvme_admin": true, 00:21:49.341 "nvme_io": true, 00:21:49.341 "nvme_io_md": false, 00:21:49.341 "write_zeroes": true, 00:21:49.341 "zcopy": false, 00:21:49.341 "get_zone_info": false, 00:21:49.341 "zone_management": false, 00:21:49.341 "zone_append": false, 00:21:49.341 "compare": true, 00:21:49.341 "compare_and_write": true, 00:21:49.341 "abort": true, 00:21:49.341 "seek_hole": false, 00:21:49.341 "seek_data": false, 00:21:49.341 "copy": true, 00:21:49.341 "nvme_iov_md": false 00:21:49.341 }, 00:21:49.341 "memory_domains": [ 00:21:49.341 { 00:21:49.341 "dma_device_id": "system", 00:21:49.341 "dma_device_type": 1 00:21:49.341 } 00:21:49.341 ], 00:21:49.341 "driver_specific": { 00:21:49.341 "nvme": [ 00:21:49.341 { 00:21:49.341 "trid": { 00:21:49.341 "trtype": "TCP", 00:21:49.341 "adrfam": "IPv4", 00:21:49.341 "traddr": "10.0.0.2", 00:21:49.341 "trsvcid": "4421", 00:21:49.341 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:21:49.341 }, 00:21:49.341 "ctrlr_data": { 00:21:49.341 "cntlid": 3, 00:21:49.341 "vendor_id": "0x8086", 00:21:49.341 "model_number": "SPDK bdev Controller", 00:21:49.341 "serial_number": "00000000000000000000", 00:21:49.341 "firmware_revision": "24.09", 00:21:49.341 "subnqn": "nqn.2016-06.io.spdk:cnode0", 
00:21:49.341 "oacs": { 00:21:49.341 "security": 0, 00:21:49.341 "format": 0, 00:21:49.341 "firmware": 0, 00:21:49.341 "ns_manage": 0 00:21:49.341 }, 00:21:49.341 "multi_ctrlr": true, 00:21:49.341 "ana_reporting": false 00:21:49.341 }, 00:21:49.341 "vs": { 00:21:49.341 "nvme_version": "1.3" 00:21:49.341 }, 00:21:49.341 "ns_data": { 00:21:49.341 "id": 1, 00:21:49.341 "can_share": true 00:21:49.341 } 00:21:49.341 } 00:21:49.341 ], 00:21:49.341 "mp_policy": "active_passive" 00:21:49.341 } 00:21:49.341 } 00:21:49.341 ] 00:21:49.341 22:38:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:49.341 22:38:13 nvmf_tcp.nvmf_async_init -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:49.341 22:38:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:49.341 22:38:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:49.341 22:38:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:49.341 22:38:13 nvmf_tcp.nvmf_async_init -- host/async_init.sh@75 -- # rm -f /tmp/tmp.ItYnNXc4oc 00:21:49.341 22:38:13 nvmf_tcp.nvmf_async_init -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:21:49.341 22:38:13 nvmf_tcp.nvmf_async_init -- host/async_init.sh@78 -- # nvmftestfini 00:21:49.341 22:38:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:49.341 22:38:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@117 -- # sync 00:21:49.341 22:38:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:49.341 22:38:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@120 -- # set +e 00:21:49.341 22:38:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:49.341 22:38:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:49.341 rmmod nvme_tcp 00:21:49.341 rmmod nvme_fabrics 00:21:49.341 rmmod nvme_keyring 00:21:49.341 22:38:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:49.342 22:38:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@124 -- # set -e 00:21:49.342 22:38:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@125 -- # return 0 00:21:49.342 22:38:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@489 -- # '[' -n 78462 ']' 00:21:49.342 22:38:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@490 -- # killprocess 78462 00:21:49.342 22:38:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@948 -- # '[' -z 78462 ']' 00:21:49.342 22:38:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@952 -- # kill -0 78462 00:21:49.342 22:38:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@953 -- # uname 00:21:49.342 22:38:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:49.342 22:38:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 78462 00:21:49.342 22:38:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:49.342 22:38:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:49.342 22:38:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@966 -- # echo 'killing process with pid 78462' 00:21:49.342 killing process with pid 78462 00:21:49.342 22:38:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@967 -- # kill 78462 00:21:49.342 [2024-07-15 22:38:13.231586] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in 
v24.09 hit 1 times
00:21:49.342 [2024-07-15 22:38:13.231610] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times
00:21:49.342 22:38:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@972 -- # wait 78462
00:21:49.601 22:38:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:21:49.601 22:38:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:21:49.601 22:38:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:21:49.601 22:38:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:21:49.601 22:38:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@278 -- # remove_spdk_ns
00:21:49.601 22:38:13 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:21:49.601 22:38:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:21:49.601 22:38:13 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:21:51.505 22:38:15 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:21:51.505
00:21:51.505 real 0m9.325s
00:21:51.505 user 0m3.482s
00:21:51.505 sys 0m4.390s
00:21:51.505 22:38:15 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1124 -- # xtrace_disable
00:21:51.505 22:38:15 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x
00:21:51.505 ************************************
00:21:51.505 END TEST nvmf_async_init
00:21:51.505 ************************************
00:21:51.764 22:38:15 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0
00:21:51.764 22:38:15 nvmf_tcp -- nvmf/nvmf.sh@94 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp
00:21:51.764 22:38:15 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']'
00:21:51.764 22:38:15 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable
00:21:51.764 22:38:15 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:21:51.764 ************************************
00:21:51.764 START TEST dma
00:21:51.764 ************************************
00:21:51.764 22:38:15 nvmf_tcp.dma -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp
00:21:51.764 * Looking for test storage...
00:21:51.764 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host
00:21:51.764 22:38:15 nvmf_tcp.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:21:51.764 22:38:15 nvmf_tcp.dma -- nvmf/common.sh@7 -- # uname -s
00:21:51.764 22:38:15 nvmf_tcp.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:21:51.764 22:38:15 nvmf_tcp.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:21:51.764 22:38:15 nvmf_tcp.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:21:51.764 22:38:15 nvmf_tcp.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:21:51.764 22:38:15 nvmf_tcp.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:21:51.764 22:38:15 nvmf_tcp.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:21:51.764 22:38:15 nvmf_tcp.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:21:51.764 22:38:15 nvmf_tcp.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:21:51.764 22:38:15 nvmf_tcp.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:21:51.764 22:38:15 nvmf_tcp.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:21:51.764 22:38:15 nvmf_tcp.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
00:21:51.764 22:38:15 nvmf_tcp.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562
00:21:51.764 22:38:15 nvmf_tcp.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:21:51.764 22:38:15 nvmf_tcp.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:21:51.764 22:38:15 nvmf_tcp.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:21:51.764 22:38:15 nvmf_tcp.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:21:51.764 22:38:15 nvmf_tcp.dma -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:21:51.764 22:38:15 nvmf_tcp.dma -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]]
00:21:51.764 22:38:15 nvmf_tcp.dma -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:21:51.764 22:38:15 nvmf_tcp.dma -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:21:51.764 22:38:15 nvmf_tcp.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:21:51.764 22:38:15 nvmf_tcp.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:21:51.764 22:38:15 nvmf_tcp.dma -- paths/export.sh@4 -- #
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:51.764 22:38:15 nvmf_tcp.dma -- paths/export.sh@5 -- # export PATH 00:21:51.764 22:38:15 nvmf_tcp.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:51.764 22:38:15 nvmf_tcp.dma -- nvmf/common.sh@47 -- # : 0 00:21:51.764 22:38:15 nvmf_tcp.dma -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:51.764 22:38:15 nvmf_tcp.dma -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:51.764 22:38:15 nvmf_tcp.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:51.764 22:38:15 nvmf_tcp.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:51.764 22:38:15 nvmf_tcp.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:51.764 22:38:15 nvmf_tcp.dma -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:51.764 22:38:15 nvmf_tcp.dma -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:51.764 22:38:15 nvmf_tcp.dma -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:51.764 22:38:15 nvmf_tcp.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:21:51.764 22:38:15 nvmf_tcp.dma -- host/dma.sh@13 -- # exit 0 00:21:51.764 00:21:51.764 real 0m0.106s 00:21:51.764 user 0m0.059s 00:21:51.764 sys 0m0.055s 00:21:51.764 22:38:15 nvmf_tcp.dma -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:51.764 22:38:15 nvmf_tcp.dma -- common/autotest_common.sh@10 -- # set +x 00:21:51.764 ************************************ 00:21:51.764 END TEST dma 00:21:51.764 ************************************ 00:21:51.764 22:38:15 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:21:51.764 22:38:15 nvmf_tcp -- nvmf/nvmf.sh@97 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:21:51.764 22:38:15 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:21:51.764 22:38:15 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:51.764 22:38:15 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:51.764 ************************************ 00:21:51.764 START TEST nvmf_identify 00:21:51.764 ************************************ 00:21:51.764 22:38:15 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:21:52.024 * Looking for test storage... 
00:21:52.024 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:52.024 22:38:15 nvmf_tcp.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:52.024 22:38:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:21:52.024 22:38:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:52.024 22:38:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:52.024 22:38:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:52.024 22:38:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:52.024 22:38:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:52.024 22:38:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:52.024 22:38:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:52.024 22:38:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:52.024 22:38:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:52.024 22:38:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:52.024 22:38:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:52.024 22:38:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:21:52.024 22:38:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:52.024 22:38:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:52.024 22:38:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:52.024 22:38:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:52.024 22:38:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:52.024 22:38:15 nvmf_tcp.nvmf_identify -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:52.024 22:38:15 nvmf_tcp.nvmf_identify -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:52.024 22:38:15 nvmf_tcp.nvmf_identify -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:52.024 22:38:15 nvmf_tcp.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:52.024 22:38:15 nvmf_tcp.nvmf_identify -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:52.024 22:38:15 nvmf_tcp.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:52.024 22:38:15 nvmf_tcp.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:21:52.024 22:38:15 nvmf_tcp.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:52.024 22:38:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@47 -- # : 0 00:21:52.024 22:38:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:52.024 22:38:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:52.024 22:38:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:52.024 22:38:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:52.024 22:38:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:52.024 22:38:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:52.024 22:38:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:52.024 22:38:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:52.024 22:38:15 nvmf_tcp.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:52.024 22:38:15 nvmf_tcp.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:52.024 22:38:15 nvmf_tcp.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:21:52.024 22:38:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:52.024 22:38:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:52.024 22:38:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:52.024 22:38:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:52.024 22:38:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:52.024 22:38:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:52.024 22:38:15 nvmf_tcp.nvmf_identify -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:52.024 22:38:15 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:52.024 22:38:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:52.024 22:38:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:52.024 22:38:15 nvmf_tcp.nvmf_identify -- nvmf/common.sh@285 -- # xtrace_disable 00:21:52.024 22:38:15 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:57.300 22:38:20 nvmf_tcp.nvmf_identify -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:57.300 22:38:20 nvmf_tcp.nvmf_identify -- nvmf/common.sh@291 -- # pci_devs=() 00:21:57.300 22:38:20 nvmf_tcp.nvmf_identify -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:57.300 22:38:20 nvmf_tcp.nvmf_identify -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:57.300 22:38:20 nvmf_tcp.nvmf_identify -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:57.300 22:38:20 nvmf_tcp.nvmf_identify -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:57.300 22:38:20 nvmf_tcp.nvmf_identify -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:57.300 22:38:20 nvmf_tcp.nvmf_identify -- nvmf/common.sh@295 -- # net_devs=() 00:21:57.300 22:38:20 nvmf_tcp.nvmf_identify -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:57.300 22:38:20 nvmf_tcp.nvmf_identify -- nvmf/common.sh@296 -- # e810=() 00:21:57.300 22:38:20 nvmf_tcp.nvmf_identify -- nvmf/common.sh@296 -- # local -ga e810 00:21:57.300 22:38:20 nvmf_tcp.nvmf_identify -- nvmf/common.sh@297 -- # x722=() 00:21:57.300 22:38:20 nvmf_tcp.nvmf_identify -- nvmf/common.sh@297 -- # local -ga x722 00:21:57.300 22:38:20 nvmf_tcp.nvmf_identify -- nvmf/common.sh@298 -- # mlx=() 00:21:57.300 22:38:20 nvmf_tcp.nvmf_identify -- nvmf/common.sh@298 -- # local -ga mlx 00:21:57.300 22:38:20 nvmf_tcp.nvmf_identify -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:57.300 22:38:20 nvmf_tcp.nvmf_identify -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:57.300 22:38:20 nvmf_tcp.nvmf_identify -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:57.300 22:38:20 nvmf_tcp.nvmf_identify -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:57.300 22:38:20 nvmf_tcp.nvmf_identify -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:57.300 22:38:20 nvmf_tcp.nvmf_identify -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:57.300 22:38:20 nvmf_tcp.nvmf_identify -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:57.300 22:38:20 nvmf_tcp.nvmf_identify -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:57.300 22:38:20 nvmf_tcp.nvmf_identify -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:57.300 22:38:20 nvmf_tcp.nvmf_identify -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:57.300 22:38:20 nvmf_tcp.nvmf_identify -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:57.300 22:38:20 nvmf_tcp.nvmf_identify -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:57.300 22:38:20 nvmf_tcp.nvmf_identify -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:57.300 22:38:20 nvmf_tcp.nvmf_identify -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:57.300 22:38:20 nvmf_tcp.nvmf_identify -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:57.300 22:38:20 nvmf_tcp.nvmf_identify -- 
nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:57.300 22:38:20 nvmf_tcp.nvmf_identify -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:57.300 22:38:20 nvmf_tcp.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:57.300 22:38:20 nvmf_tcp.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:21:57.300 Found 0000:86:00.0 (0x8086 - 0x159b) 00:21:57.300 22:38:20 nvmf_tcp.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:57.300 22:38:20 nvmf_tcp.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:57.300 22:38:20 nvmf_tcp.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:57.300 22:38:20 nvmf_tcp.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:57.300 22:38:20 nvmf_tcp.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:57.300 22:38:20 nvmf_tcp.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:57.300 22:38:20 nvmf_tcp.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:21:57.300 Found 0000:86:00.1 (0x8086 - 0x159b) 00:21:57.300 22:38:20 nvmf_tcp.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:57.300 22:38:20 nvmf_tcp.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:57.300 22:38:20 nvmf_tcp.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:57.300 22:38:20 nvmf_tcp.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:57.300 22:38:20 nvmf_tcp.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:57.300 22:38:20 nvmf_tcp.nvmf_identify -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:57.300 22:38:20 nvmf_tcp.nvmf_identify -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:57.301 22:38:20 nvmf_tcp.nvmf_identify -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:57.301 22:38:20 nvmf_tcp.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:57.301 22:38:20 nvmf_tcp.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:57.301 22:38:20 nvmf_tcp.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:57.301 22:38:20 nvmf_tcp.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:57.301 22:38:20 nvmf_tcp.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:57.301 22:38:20 nvmf_tcp.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:57.301 22:38:20 nvmf_tcp.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:57.301 22:38:20 nvmf_tcp.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:21:57.301 Found net devices under 0000:86:00.0: cvl_0_0 00:21:57.301 22:38:20 nvmf_tcp.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:57.301 22:38:20 nvmf_tcp.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:57.301 22:38:20 nvmf_tcp.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:57.301 22:38:20 nvmf_tcp.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:57.301 22:38:20 nvmf_tcp.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:57.301 22:38:20 nvmf_tcp.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:57.301 22:38:20 nvmf_tcp.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:57.301 22:38:20 nvmf_tcp.nvmf_identify -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:57.301 22:38:20 nvmf_tcp.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:21:57.301 Found net devices under 0000:86:00.1: cvl_0_1 00:21:57.301 22:38:20 nvmf_tcp.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:57.301 22:38:20 nvmf_tcp.nvmf_identify -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:57.301 22:38:20 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # is_hw=yes 00:21:57.301 22:38:20 nvmf_tcp.nvmf_identify -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:57.301 22:38:20 nvmf_tcp.nvmf_identify -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:57.301 22:38:20 nvmf_tcp.nvmf_identify -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:57.301 22:38:20 nvmf_tcp.nvmf_identify -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:57.301 22:38:20 nvmf_tcp.nvmf_identify -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:57.301 22:38:20 nvmf_tcp.nvmf_identify -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:57.301 22:38:20 nvmf_tcp.nvmf_identify -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:57.301 22:38:20 nvmf_tcp.nvmf_identify -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:57.301 22:38:20 nvmf_tcp.nvmf_identify -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:57.301 22:38:20 nvmf_tcp.nvmf_identify -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:57.301 22:38:20 nvmf_tcp.nvmf_identify -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:57.301 22:38:20 nvmf_tcp.nvmf_identify -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:57.301 22:38:20 nvmf_tcp.nvmf_identify -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:57.301 22:38:20 nvmf_tcp.nvmf_identify -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:57.301 22:38:20 nvmf_tcp.nvmf_identify -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:57.301 22:38:20 nvmf_tcp.nvmf_identify -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:57.301 22:38:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:57.301 22:38:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:57.301 22:38:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:57.301 22:38:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:57.301 22:38:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:57.301 22:38:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:57.301 22:38:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:57.301 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:57.301 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.245 ms 00:21:57.301 00:21:57.301 --- 10.0.0.2 ping statistics --- 00:21:57.301 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:57.301 rtt min/avg/max/mdev = 0.245/0.245/0.245/0.000 ms 00:21:57.301 22:38:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:57.301 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:57.301 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.102 ms
00:21:57.301
00:21:57.301 --- 10.0.0.1 ping statistics ---
00:21:57.301 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:21:57.301 rtt min/avg/max/mdev = 0.102/0.102/0.102/0.000 ms
00:21:57.301 22:38:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:21:57.301 22:38:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@422 -- # return 0
00:21:57.301 22:38:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:21:57.301 22:38:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:21:57.301 22:38:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:21:57.301 22:38:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:21:57.301 22:38:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:21:57.301 22:38:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:21:57.301 22:38:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:21:57.301 22:38:21 nvmf_tcp.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt
00:21:57.301 22:38:21 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@722 -- # xtrace_disable
00:21:57.301 22:38:21 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:21:57.301 22:38:21 nvmf_tcp.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=82267
00:21:57.301 22:38:21 nvmf_tcp.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:21:57.301 22:38:21 nvmf_tcp.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:21:57.301 22:38:21 nvmf_tcp.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 82267
00:21:57.301 22:38:21 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@829 -- # '[' -z 82267 ']'
00:21:57.301 22:38:21 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:21:57.301 22:38:21 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@834 -- # local max_retries=100
00:21:57.301 22:38:21 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:21:57.301 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:21:57.301 22:38:21 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@838 -- # xtrace_disable
00:21:57.301 22:38:21 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:21:57.301 [2024-07-15 22:38:21.224723] Starting SPDK v24.09-pre git sha1 f8598a71f / DPDK 24.03.0 initialization...
00:21:57.301 [2024-07-15 22:38:21.224766] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:21:57.560 EAL: No free 2048 kB hugepages reported on node 1
00:21:57.560 [2024-07-15 22:38:21.280843] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4
00:21:57.560 [2024-07-15 22:38:21.361174] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:21:57.560 [2024-07-15 22:38:21.361222] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:21:57.560 [2024-07-15 22:38:21.361233] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:21:57.560 [2024-07-15 22:38:21.361239] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:21:57.560 [2024-07-15 22:38:21.361244] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:21:57.560 [2024-07-15 22:38:21.361282] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:21:57.560 [2024-07-15 22:38:21.361381] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:21:57.560 [2024-07-15 22:38:21.361525] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3
00:21:57.560 [2024-07-15 22:38:21.361527] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:21:58.160 22:38:22 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:21:58.160 22:38:22 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@862 -- # return 0
00:21:58.160 22:38:22 nvmf_tcp.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:21:58.160 22:38:22 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable
00:21:58.160 22:38:22 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:21:58.160 [2024-07-15 22:38:22.045219] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:21:58.160 22:38:22 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:21:58.160 22:38:22 nvmf_tcp.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt
00:21:58.160 22:38:22 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@728 -- # xtrace_disable
00:21:58.160 22:38:22 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:21:58.160 22:38:22 nvmf_tcp.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:21:58.160 22:38:22 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable
00:21:58.160 22:38:22 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:21:58.160 Malloc0
00:21:58.160 22:38:22 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:21:58.160 22:38:22 nvmf_tcp.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:21:58.160 22:38:22 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable
00:21:58.160 22:38:22 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:21:58.160 22:38:22 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:21:58.160 22:38:22 nvmf_tcp.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
00:21:58.160 22:38:22 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable
00:21:58.160 22:38:22 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:21:58.422 22:38:22 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:21:58.422 22:38:22 nvmf_tcp.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:21:58.422 22:38:22 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559
-- # xtrace_disable
00:21:58.422 22:38:22 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:21:58.422 [2024-07-15 22:38:22.133386] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:21:58.422 22:38:22 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:21:58.422 22:38:22 nvmf_tcp.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:21:58.422 22:38:22 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable
00:21:58.422 22:38:22 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:21:58.422 22:38:22 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:21:58.422 22:38:22 nvmf_tcp.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems
00:21:58.422 22:38:22 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable
00:21:58.422 22:38:22 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:21:58.422 [
00:21:58.423 {
00:21:58.423 "nqn": "nqn.2014-08.org.nvmexpress.discovery",
00:21:58.423 "subtype": "Discovery",
00:21:58.423 "listen_addresses": [
00:21:58.423 {
00:21:58.423 "trtype": "TCP",
00:21:58.423 "adrfam": "IPv4",
00:21:58.423 "traddr": "10.0.0.2",
00:21:58.423 "trsvcid": "4420"
00:21:58.423 }
00:21:58.423 ],
00:21:58.423 "allow_any_host": true,
00:21:58.423 "hosts": []
00:21:58.423 },
00:21:58.423 {
00:21:58.423 "nqn": "nqn.2016-06.io.spdk:cnode1",
00:21:58.423 "subtype": "NVMe",
00:21:58.423 "listen_addresses": [
00:21:58.423 {
00:21:58.423 "trtype": "TCP",
00:21:58.423 "adrfam": "IPv4",
00:21:58.423 "traddr": "10.0.0.2",
00:21:58.423 "trsvcid": "4420"
00:21:58.423 }
00:21:58.423 ],
00:21:58.423 "allow_any_host": true,
00:21:58.423 "hosts": [],
00:21:58.423 "serial_number": "SPDK00000000000001",
00:21:58.423 "model_number": "SPDK bdev Controller",
00:21:58.423 "max_namespaces": 32,
00:21:58.423 "min_cntlid": 1,
00:21:58.423 "max_cntlid": 65519,
00:21:58.423 "namespaces": [
00:21:58.423 {
00:21:58.423 "nsid": 1,
00:21:58.423 "bdev_name": "Malloc0",
00:21:58.423 "name": "Malloc0",
00:21:58.423 "nguid": "ABCDEF0123456789ABCDEF0123456789",
00:21:58.423 "eui64": "ABCDEF0123456789",
00:21:58.423 "uuid": "c9a3cb6e-793f-4dbb-b154-dbaa07a394a3"
00:21:58.423 }
00:21:58.423 ]
00:21:58.423 }
00:21:58.423 ]
00:21:58.423 22:38:22 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:21:58.423 22:38:22 nvmf_tcp.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all
00:21:58.423 [2024-07-15 22:38:22.186242] Starting SPDK v24.09-pre git sha1 f8598a71f / DPDK 24.03.0 initialization...
00:21:58.423 [2024-07-15 22:38:22.186289] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82344 ]
00:21:58.423 EAL: No free 2048 kB hugepages reported on node 1
00:21:58.423 [2024-07-15 22:38:22.216796] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout)
00:21:58.423 [2024-07-15 22:38:22.216846] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2
00:21:58.423 [2024-07-15 22:38:22.216851] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420
00:21:58.423 [2024-07-15 22:38:22.216865] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null)
00:21:58.423 [2024-07-15 22:38:22.216870] sock.c: 337:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix
00:21:58.423 [2024-07-15 22:38:22.217203] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout)
00:21:58.423 [2024-07-15 22:38:22.217240] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1ac5ec0 0
00:21:58.423 [2024-07-15 22:38:22.231237] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1
00:21:58.423 [2024-07-15 22:38:22.231248] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1
00:21:58.423 [2024-07-15 22:38:22.231252] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0
00:21:58.423 [2024-07-15 22:38:22.231255] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0
00:21:58.423 [2024-07-15 22:38:22.231290] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:21:58.423 [2024-07-15 22:38:22.231298] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:21:58.423 [2024-07-15 22:38:22.231301] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1ac5ec0)
00:21:58.423 [2024-07-15 22:38:22.231314] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400
00:21:58.423 [2024-07-15 22:38:22.231329] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b48e40, cid 0, qid 0
00:21:58.423 [2024-07-15 22:38:22.239236] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:21:58.423 [2024-07-15 22:38:22.239245] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:21:58.423 [2024-07-15 22:38:22.239249] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:21:58.423 [2024-07-15 22:38:22.239253] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b48e40) on tqpair=0x1ac5ec0
00:21:58.423 [2024-07-15 22:38:22.239263] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001
00:21:58.423 [2024-07-15 22:38:22.239269] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout)
00:21:58.423 [2024-07-15 22:38:22.239274] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout)
00:21:58.423 [2024-07-15 22:38:22.239288] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:21:58.423 [2024-07-15 22:38:22.239291] nvme_tcp.c:
967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:58.423 [2024-07-15 22:38:22.239295] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1ac5ec0) 00:21:58.423 [2024-07-15 22:38:22.239302] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.423 [2024-07-15 22:38:22.239314] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b48e40, cid 0, qid 0 00:21:58.423 [2024-07-15 22:38:22.239493] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:58.423 [2024-07-15 22:38:22.239500] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:58.423 [2024-07-15 22:38:22.239503] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:58.423 [2024-07-15 22:38:22.239506] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b48e40) on tqpair=0x1ac5ec0 00:21:58.423 [2024-07-15 22:38:22.239510] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:21:58.423 [2024-07-15 22:38:22.239518] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:21:58.423 [2024-07-15 22:38:22.239525] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:58.423 [2024-07-15 22:38:22.239528] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:58.423 [2024-07-15 22:38:22.239531] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1ac5ec0) 00:21:58.423 [2024-07-15 22:38:22.239538] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.423 [2024-07-15 22:38:22.239548] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b48e40, cid 0, qid 0 00:21:58.423 [2024-07-15 22:38:22.239627] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:58.423 [2024-07-15 22:38:22.239633] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:58.423 [2024-07-15 22:38:22.239636] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:58.423 [2024-07-15 22:38:22.239639] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b48e40) on tqpair=0x1ac5ec0 00:21:58.423 [2024-07-15 22:38:22.239643] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:21:58.423 [2024-07-15 22:38:22.239650] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:21:58.423 [2024-07-15 22:38:22.239656] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:58.423 [2024-07-15 22:38:22.239660] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:58.423 [2024-07-15 22:38:22.239666] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1ac5ec0) 00:21:58.423 [2024-07-15 22:38:22.239672] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.423 [2024-07-15 22:38:22.239681] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b48e40, cid 0, qid 0 00:21:58.423 [2024-07-15 22:38:22.239759] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:58.423 
[2024-07-15 22:38:22.239766] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:58.423 [2024-07-15 22:38:22.239769] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:58.423 [2024-07-15 22:38:22.239772] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b48e40) on tqpair=0x1ac5ec0 00:21:58.423 [2024-07-15 22:38:22.239776] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:21:58.423 [2024-07-15 22:38:22.239784] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:58.423 [2024-07-15 22:38:22.239788] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:58.423 [2024-07-15 22:38:22.239791] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1ac5ec0) 00:21:58.424 [2024-07-15 22:38:22.239797] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.424 [2024-07-15 22:38:22.239806] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b48e40, cid 0, qid 0 00:21:58.424 [2024-07-15 22:38:22.239882] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:58.424 [2024-07-15 22:38:22.239888] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:58.424 [2024-07-15 22:38:22.239891] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:58.424 [2024-07-15 22:38:22.239894] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b48e40) on tqpair=0x1ac5ec0 00:21:58.424 [2024-07-15 22:38:22.239899] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:21:58.424 [2024-07-15 22:38:22.239903] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:21:58.424 [2024-07-15 22:38:22.239909] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:21:58.424 [2024-07-15 22:38:22.240014] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:21:58.424 [2024-07-15 22:38:22.240019] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:21:58.424 [2024-07-15 22:38:22.240027] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:58.424 [2024-07-15 22:38:22.240030] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:58.424 [2024-07-15 22:38:22.240033] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1ac5ec0) 00:21:58.424 [2024-07-15 22:38:22.240039] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.424 [2024-07-15 22:38:22.240049] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b48e40, cid 0, qid 0 00:21:58.424 [2024-07-15 22:38:22.240130] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:58.424 [2024-07-15 22:38:22.240135] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:58.424 [2024-07-15 22:38:22.240139] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: 
*DEBUG*: enter 00:21:58.424 [2024-07-15 22:38:22.240142] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b48e40) on tqpair=0x1ac5ec0 00:21:58.424 [2024-07-15 22:38:22.240146] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:21:58.424 [2024-07-15 22:38:22.240155] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:58.424 [2024-07-15 22:38:22.240159] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:58.424 [2024-07-15 22:38:22.240162] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1ac5ec0) 00:21:58.424 [2024-07-15 22:38:22.240168] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.424 [2024-07-15 22:38:22.240177] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b48e40, cid 0, qid 0 00:21:58.424 [2024-07-15 22:38:22.240260] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:58.424 [2024-07-15 22:38:22.240266] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:58.424 [2024-07-15 22:38:22.240269] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:58.424 [2024-07-15 22:38:22.240272] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b48e40) on tqpair=0x1ac5ec0 00:21:58.424 [2024-07-15 22:38:22.240276] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:21:58.424 [2024-07-15 22:38:22.240280] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:21:58.424 [2024-07-15 22:38:22.240287] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:21:58.424 [2024-07-15 22:38:22.240295] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:21:58.424 [2024-07-15 22:38:22.240304] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:58.424 [2024-07-15 22:38:22.240307] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1ac5ec0) 00:21:58.424 [2024-07-15 22:38:22.240314] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.424 [2024-07-15 22:38:22.240324] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b48e40, cid 0, qid 0 00:21:58.424 [2024-07-15 22:38:22.240451] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:58.424 [2024-07-15 22:38:22.240457] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:58.424 [2024-07-15 22:38:22.240460] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:58.424 [2024-07-15 22:38:22.240464] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1ac5ec0): datao=0, datal=4096, cccid=0 00:21:58.424 [2024-07-15 22:38:22.240468] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1b48e40) on tqpair(0x1ac5ec0): expected_datao=0, payload_size=4096 00:21:58.424 [2024-07-15 22:38:22.240472] nvme_tcp.c: 790:nvme_tcp_build_contig_request: 
*DEBUG*: enter 00:21:58.424 [2024-07-15 22:38:22.240479] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:58.424 [2024-07-15 22:38:22.240483] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:58.424 [2024-07-15 22:38:22.240544] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:58.424 [2024-07-15 22:38:22.240550] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:58.424 [2024-07-15 22:38:22.240553] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:58.424 [2024-07-15 22:38:22.240556] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b48e40) on tqpair=0x1ac5ec0 00:21:58.424 [2024-07-15 22:38:22.240563] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:21:58.424 [2024-07-15 22:38:22.240570] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:21:58.424 [2024-07-15 22:38:22.240575] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:21:58.424 [2024-07-15 22:38:22.240579] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:21:58.424 [2024-07-15 22:38:22.240586] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:21:58.424 [2024-07-15 22:38:22.240590] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:21:58.424 [2024-07-15 22:38:22.240598] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:21:58.424 [2024-07-15 22:38:22.240605] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:58.424 [2024-07-15 22:38:22.240608] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:58.424 [2024-07-15 22:38:22.240611] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1ac5ec0) 00:21:58.424 [2024-07-15 22:38:22.240618] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:58.424 [2024-07-15 22:38:22.240629] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b48e40, cid 0, qid 0 00:21:58.424 [2024-07-15 22:38:22.240712] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:58.424 [2024-07-15 22:38:22.240718] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:58.424 [2024-07-15 22:38:22.240721] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:58.424 [2024-07-15 22:38:22.240725] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b48e40) on tqpair=0x1ac5ec0 00:21:58.424 [2024-07-15 22:38:22.240732] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:58.424 [2024-07-15 22:38:22.240735] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:58.424 [2024-07-15 22:38:22.240738] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1ac5ec0) 00:21:58.424 [2024-07-15 22:38:22.240744] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:58.424 [2024-07-15 22:38:22.240749] 
nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:58.424 [2024-07-15 22:38:22.240752] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:58.424 [2024-07-15 22:38:22.240755] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1ac5ec0) 00:21:58.424 [2024-07-15 22:38:22.240760] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:58.424 [2024-07-15 22:38:22.240765] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:58.424 [2024-07-15 22:38:22.240768] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:58.424 [2024-07-15 22:38:22.240771] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1ac5ec0) 00:21:58.424 [2024-07-15 22:38:22.240776] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:58.424 [2024-07-15 22:38:22.240781] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:58.424 [2024-07-15 22:38:22.240784] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:58.424 [2024-07-15 22:38:22.240787] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ac5ec0) 00:21:58.424 [2024-07-15 22:38:22.240792] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:58.424 [2024-07-15 22:38:22.240796] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:21:58.424 [2024-07-15 22:38:22.240806] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:21:58.424 [2024-07-15 22:38:22.240812] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:58.424 [2024-07-15 22:38:22.240815] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1ac5ec0) 00:21:58.425 [2024-07-15 22:38:22.240821] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.425 [2024-07-15 22:38:22.240834] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b48e40, cid 0, qid 0 00:21:58.425 [2024-07-15 22:38:22.240838] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b48fc0, cid 1, qid 0 00:21:58.425 [2024-07-15 22:38:22.240842] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b49140, cid 2, qid 0 00:21:58.425 [2024-07-15 22:38:22.240846] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b492c0, cid 3, qid 0 00:21:58.425 [2024-07-15 22:38:22.240850] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b49440, cid 4, qid 0 00:21:58.425 [2024-07-15 22:38:22.240963] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:58.425 [2024-07-15 22:38:22.240968] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:58.425 [2024-07-15 22:38:22.240971] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:58.425 [2024-07-15 22:38:22.240975] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b49440) on tqpair=0x1ac5ec0 00:21:58.425 [2024-07-15 22:38:22.240979] 
nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:21:58.425 [2024-07-15 22:38:22.240983] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:21:58.425 [2024-07-15 22:38:22.240993] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:58.425 [2024-07-15 22:38:22.240996] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1ac5ec0) 00:21:58.425 [2024-07-15 22:38:22.241002] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.425 [2024-07-15 22:38:22.241012] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b49440, cid 4, qid 0 00:21:58.425 [2024-07-15 22:38:22.241116] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:58.425 [2024-07-15 22:38:22.241122] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:58.425 [2024-07-15 22:38:22.241125] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:58.425 [2024-07-15 22:38:22.241128] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1ac5ec0): datao=0, datal=4096, cccid=4 00:21:58.425 [2024-07-15 22:38:22.241131] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1b49440) on tqpair(0x1ac5ec0): expected_datao=0, payload_size=4096 00:21:58.425 [2024-07-15 22:38:22.241135] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:58.425 [2024-07-15 22:38:22.241141] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:58.425 [2024-07-15 22:38:22.241145] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:58.425 [2024-07-15 22:38:22.281408] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:58.425 [2024-07-15 22:38:22.281422] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:58.425 [2024-07-15 22:38:22.281426] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:58.425 [2024-07-15 22:38:22.281430] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b49440) on tqpair=0x1ac5ec0 00:21:58.425 [2024-07-15 22:38:22.281444] nvme_ctrlr.c:4160:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:21:58.425 [2024-07-15 22:38:22.281468] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:58.425 [2024-07-15 22:38:22.281472] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1ac5ec0) 00:21:58.425 [2024-07-15 22:38:22.281480] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.425 [2024-07-15 22:38:22.281486] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:58.425 [2024-07-15 22:38:22.281489] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:58.425 [2024-07-15 22:38:22.281492] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1ac5ec0) 00:21:58.425 [2024-07-15 22:38:22.281500] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:21:58.425 [2024-07-15 22:38:22.281516] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp 
req 0x1b49440, cid 4, qid 0 00:21:58.425 [2024-07-15 22:38:22.281521] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b495c0, cid 5, qid 0 00:21:58.425 [2024-07-15 22:38:22.281629] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:58.425 [2024-07-15 22:38:22.281635] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:58.425 [2024-07-15 22:38:22.281638] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:58.425 [2024-07-15 22:38:22.281642] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1ac5ec0): datao=0, datal=1024, cccid=4 00:21:58.425 [2024-07-15 22:38:22.281646] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1b49440) on tqpair(0x1ac5ec0): expected_datao=0, payload_size=1024 00:21:58.425 [2024-07-15 22:38:22.281649] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:58.425 [2024-07-15 22:38:22.281655] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:58.425 [2024-07-15 22:38:22.281659] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:58.425 [2024-07-15 22:38:22.281663] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:58.425 [2024-07-15 22:38:22.281668] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:58.425 [2024-07-15 22:38:22.281671] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:58.425 [2024-07-15 22:38:22.281675] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b495c0) on tqpair=0x1ac5ec0 00:21:58.425 [2024-07-15 22:38:22.322371] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:58.425 [2024-07-15 22:38:22.322385] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:58.425 [2024-07-15 22:38:22.322388] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:58.425 [2024-07-15 22:38:22.322392] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b49440) on tqpair=0x1ac5ec0 00:21:58.425 [2024-07-15 22:38:22.322410] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:58.425 [2024-07-15 22:38:22.322414] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1ac5ec0) 00:21:58.425 [2024-07-15 22:38:22.322422] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.425 [2024-07-15 22:38:22.322438] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b49440, cid 4, qid 0 00:21:58.425 [2024-07-15 22:38:22.322528] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:58.425 [2024-07-15 22:38:22.322534] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:58.425 [2024-07-15 22:38:22.322537] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:58.425 [2024-07-15 22:38:22.322540] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1ac5ec0): datao=0, datal=3072, cccid=4 00:21:58.425 [2024-07-15 22:38:22.322544] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1b49440) on tqpair(0x1ac5ec0): expected_datao=0, payload_size=3072 00:21:58.425 [2024-07-15 22:38:22.322548] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:58.425 [2024-07-15 22:38:22.322554] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:58.425 [2024-07-15 22:38:22.322557] 
nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:58.425 [2024-07-15 22:38:22.322635] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:58.425 [2024-07-15 22:38:22.322640] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:58.425 [2024-07-15 22:38:22.322643] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:58.425 [2024-07-15 22:38:22.322646] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b49440) on tqpair=0x1ac5ec0 00:21:58.425 [2024-07-15 22:38:22.322653] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:58.425 [2024-07-15 22:38:22.322657] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1ac5ec0) 00:21:58.425 [2024-07-15 22:38:22.322666] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.425 [2024-07-15 22:38:22.322680] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b49440, cid 4, qid 0 00:21:58.425 [2024-07-15 22:38:22.322770] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:58.425 [2024-07-15 22:38:22.322776] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:58.425 [2024-07-15 22:38:22.322779] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:58.425 [2024-07-15 22:38:22.322782] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1ac5ec0): datao=0, datal=8, cccid=4 00:21:58.425 [2024-07-15 22:38:22.322786] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1b49440) on tqpair(0x1ac5ec0): expected_datao=0, payload_size=8 00:21:58.425 [2024-07-15 22:38:22.322790] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:58.425 [2024-07-15 22:38:22.322795] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:58.425 [2024-07-15 22:38:22.322798] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:58.425 [2024-07-15 22:38:22.366915] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:58.425 [2024-07-15 22:38:22.366926] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:58.425 [2024-07-15 22:38:22.366930] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:58.425 [2024-07-15 22:38:22.366933] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b49440) on tqpair=0x1ac5ec0 00:21:58.425 ===================================================== 00:21:58.425 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:21:58.425 ===================================================== 00:21:58.425 Controller Capabilities/Features 00:21:58.425 ================================ 00:21:58.426 Vendor ID: 0000 00:21:58.426 Subsystem Vendor ID: 0000 00:21:58.426 Serial Number: .................... 00:21:58.426 Model Number: ........................................ 
00:21:58.426 Firmware Version: 24.09
00:21:58.426 Recommended Arb Burst: 0
00:21:58.426 IEEE OUI Identifier: 00 00 00
00:21:58.426 Multi-path I/O
00:21:58.426 May have multiple subsystem ports: No
00:21:58.426 May have multiple controllers: No
00:21:58.426 Associated with SR-IOV VF: No
00:21:58.426 Max Data Transfer Size: 131072
00:21:58.426 Max Number of Namespaces: 0
00:21:58.426 Max Number of I/O Queues: 1024
00:21:58.426 NVMe Specification Version (VS): 1.3
00:21:58.426 NVMe Specification Version (Identify): 1.3
00:21:58.426 Maximum Queue Entries: 128
00:21:58.426 Contiguous Queues Required: Yes
00:21:58.426 Arbitration Mechanisms Supported
00:21:58.426 Weighted Round Robin: Not Supported
00:21:58.426 Vendor Specific: Not Supported
00:21:58.426 Reset Timeout: 15000 ms
00:21:58.426 Doorbell Stride: 4 bytes
00:21:58.426 NVM Subsystem Reset: Not Supported
00:21:58.426 Command Sets Supported
00:21:58.426 NVM Command Set: Supported
00:21:58.426 Boot Partition: Not Supported
00:21:58.426 Memory Page Size Minimum: 4096 bytes
00:21:58.426 Memory Page Size Maximum: 4096 bytes
00:21:58.426 Persistent Memory Region: Not Supported
00:21:58.426 Optional Asynchronous Events Supported
00:21:58.426 Namespace Attribute Notices: Not Supported
00:21:58.426 Firmware Activation Notices: Not Supported
00:21:58.426 ANA Change Notices: Not Supported
00:21:58.426 PLE Aggregate Log Change Notices: Not Supported
00:21:58.426 LBA Status Info Alert Notices: Not Supported
00:21:58.426 EGE Aggregate Log Change Notices: Not Supported
00:21:58.426 Normal NVM Subsystem Shutdown event: Not Supported
00:21:58.426 Zone Descriptor Change Notices: Not Supported
00:21:58.426 Discovery Log Change Notices: Supported
00:21:58.426 Controller Attributes
00:21:58.426 128-bit Host Identifier: Not Supported
00:21:58.426 Non-Operational Permissive Mode: Not Supported
00:21:58.426 NVM Sets: Not Supported
00:21:58.426 Read Recovery Levels: Not Supported
00:21:58.426 Endurance Groups: Not Supported
00:21:58.426 Predictable Latency Mode: Not Supported
00:21:58.426 Traffic Based Keep ALive: Not Supported
00:21:58.426 Namespace Granularity: Not Supported
00:21:58.426 SQ Associations: Not Supported
00:21:58.426 UUID List: Not Supported
00:21:58.426 Multi-Domain Subsystem: Not Supported
00:21:58.426 Fixed Capacity Management: Not Supported
00:21:58.426 Variable Capacity Management: Not Supported
00:21:58.426 Delete Endurance Group: Not Supported
00:21:58.426 Delete NVM Set: Not Supported
00:21:58.426 Extended LBA Formats Supported: Not Supported
00:21:58.426 Flexible Data Placement Supported: Not Supported
00:21:58.426
00:21:58.426 Controller Memory Buffer Support
00:21:58.426 ================================
00:21:58.426 Supported: No
00:21:58.426
00:21:58.426 Persistent Memory Region Support
00:21:58.426 ================================
00:21:58.426 Supported: No
00:21:58.426
00:21:58.426 Admin Command Set Attributes
00:21:58.426 ============================
00:21:58.426 Security Send/Receive: Not Supported
00:21:58.426 Format NVM: Not Supported
00:21:58.426 Firmware Activate/Download: Not Supported
00:21:58.426 Namespace Management: Not Supported
00:21:58.426 Device Self-Test: Not Supported
00:21:58.426 Directives: Not Supported
00:21:58.426 NVMe-MI: Not Supported
00:21:58.426 Virtualization Management: Not Supported
00:21:58.426 Doorbell Buffer Config: Not Supported
00:21:58.426 Get LBA Status Capability: Not Supported
00:21:58.426 Command & Feature Lockdown Capability: Not Supported
00:21:58.426 Abort Command Limit: 1
00:21:58.426 Async Event Request Limit: 4
00:21:58.426 Number of Firmware Slots: N/A
00:21:58.426 Firmware Slot 1 Read-Only: N/A
00:21:58.426 Firmware Activation Without Reset: N/A
00:21:58.426 Multiple Update Detection Support: N/A
00:21:58.426 Firmware Update Granularity: No Information Provided
00:21:58.426 Per-Namespace SMART Log: No
00:21:58.426 Asymmetric Namespace Access Log Page: Not Supported
00:21:58.426 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery
00:21:58.426 Command Effects Log Page: Not Supported
00:21:58.426 Get Log Page Extended Data: Supported
00:21:58.426 Telemetry Log Pages: Not Supported
00:21:58.426 Persistent Event Log Pages: Not Supported
00:21:58.426 Supported Log Pages Log Page: May Support
00:21:58.426 Commands Supported & Effects Log Page: Not Supported
00:21:58.426 Feature Identifiers & Effects Log Page:May Support
00:21:58.426 NVMe-MI Commands & Effects Log Page: May Support
00:21:58.426 Data Area 4 for Telemetry Log: Not Supported
00:21:58.426 Error Log Page Entries Supported: 128
00:21:58.426 Keep Alive: Not Supported
00:21:58.426
00:21:58.426 NVM Command Set Attributes
00:21:58.426 ==========================
00:21:58.426 Submission Queue Entry Size
00:21:58.426 Max: 1
00:21:58.426 Min: 1
00:21:58.426 Completion Queue Entry Size
00:21:58.426 Max: 1
00:21:58.426 Min: 1
00:21:58.426 Number of Namespaces: 0
00:21:58.426 Compare Command: Not Supported
00:21:58.426 Write Uncorrectable Command: Not Supported
00:21:58.426 Dataset Management Command: Not Supported
00:21:58.426 Write Zeroes Command: Not Supported
00:21:58.426 Set Features Save Field: Not Supported
00:21:58.426 Reservations: Not Supported
00:21:58.426 Timestamp: Not Supported
00:21:58.426 Copy: Not Supported
00:21:58.426 Volatile Write Cache: Not Present
00:21:58.426 Atomic Write Unit (Normal): 1
00:21:58.426 Atomic Write Unit (PFail): 1
00:21:58.426 Atomic Compare & Write Unit: 1
00:21:58.426 Fused Compare & Write: Supported
00:21:58.426 Scatter-Gather List
00:21:58.426 SGL Command Set: Supported
00:21:58.426 SGL Keyed: Supported
00:21:58.426 SGL Bit Bucket Descriptor: Not Supported
00:21:58.426 SGL Metadata Pointer: Not Supported
00:21:58.426 Oversized SGL: Not Supported
00:21:58.426 SGL Metadata Address: Not Supported
00:21:58.426 SGL Offset: Supported
00:21:58.426 Transport SGL Data Block: Not Supported
00:21:58.426 Replay Protected Memory Block: Not Supported
00:21:58.426
00:21:58.426 Firmware Slot Information
00:21:58.426 =========================
00:21:58.426 Active slot: 0
00:21:58.426
00:21:58.426
00:21:58.426 Error Log
00:21:58.426 =========
00:21:58.426
00:21:58.426 Active Namespaces
00:21:58.426 =================
00:21:58.426 Discovery Log Page
00:21:58.426 ==================
00:21:58.426 Generation Counter: 2
00:21:58.426 Number of Records: 2
00:21:58.426 Record Format: 0
00:21:58.426
00:21:58.426 Discovery Log Entry 0
00:21:58.426 ----------------------
00:21:58.426 Transport Type: 3 (TCP)
00:21:58.426 Address Family: 1 (IPv4)
00:21:58.426 Subsystem Type: 3 (Current Discovery Subsystem)
00:21:58.426 Entry Flags:
00:21:58.426 Duplicate Returned Information: 1
00:21:58.426 Explicit Persistent Connection Support for Discovery: 1
00:21:58.426 Transport Requirements:
00:21:58.427 Secure Channel: Not Required
00:21:58.427 Port ID: 0 (0x0000)
00:21:58.427 Controller ID: 65535 (0xffff)
00:21:58.427 Admin Max SQ Size: 128
00:21:58.427 Transport Service Identifier: 4420
00:21:58.427 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery
00:21:58.427 Transport Address: 10.0.0.2
00:21:58.427 Discovery Log Entry 1
00:21:58.427 ----------------------
00:21:58.427 Transport Type: 3 (TCP)
00:21:58.427 Address Family: 1 (IPv4)
00:21:58.427 Subsystem Type: 2 (NVM Subsystem)
00:21:58.427 Entry Flags:
00:21:58.427 Duplicate Returned Information: 0
00:21:58.427 Explicit Persistent Connection Support for Discovery: 0
00:21:58.427 Transport Requirements:
00:21:58.427 Secure Channel: Not Required
00:21:58.427 Port ID: 0 (0x0000)
00:21:58.427 Controller ID: 65535 (0xffff)
00:21:58.427 Admin Max SQ Size: 128
00:21:58.427 Transport Service Identifier: 4420
00:21:58.427 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1
00:21:58.427 Transport Address: 10.0.0.2 [2024-07-15 22:38:22.367013] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD
00:21:58.427 [2024-07-15 22:38:22.367023] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b48e40) on tqpair=0x1ac5ec0
00:21:58.427 [2024-07-15 22:38:22.367030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:58.427 [2024-07-15 22:38:22.367035] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b48fc0) on tqpair=0x1ac5ec0
00:21:58.427 [2024-07-15 22:38:22.367039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:58.427 [2024-07-15 22:38:22.367043] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b49140) on tqpair=0x1ac5ec0
00:21:58.427 [2024-07-15 22:38:22.367047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:58.427 [2024-07-15 22:38:22.367051] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b492c0) on tqpair=0x1ac5ec0
00:21:58.427 [2024-07-15 22:38:22.367055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:58.427 [2024-07-15 22:38:22.367065] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:21:58.427 [2024-07-15 22:38:22.367068] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:21:58.427 [2024-07-15 22:38:22.367071] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ac5ec0)
00:21:58.427 [2024-07-15 22:38:22.367078] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:58.427 [2024-07-15 22:38:22.367091] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b492c0, cid 3, qid 0
00:21:58.427 [2024-07-15 22:38:22.367203] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:21:58.427 [2024-07-15 22:38:22.367210] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:21:58.427 [2024-07-15 22:38:22.367213] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:21:58.427 [2024-07-15 22:38:22.367216] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b492c0) on tqpair=0x1ac5ec0
00:21:58.427 [2024-07-15 22:38:22.367222] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:21:58.427 [2024-07-15 22:38:22.367232] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:21:58.427 [2024-07-15 22:38:22.367235] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ac5ec0)
00:21:58.427 [2024-07-15
22:38:22.367243] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.427 [2024-07-15 22:38:22.367257] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b492c0, cid 3, qid 0 00:21:58.427 [2024-07-15 22:38:22.367371] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:58.427 [2024-07-15 22:38:22.367377] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:58.427 [2024-07-15 22:38:22.367380] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:58.427 [2024-07-15 22:38:22.367383] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b492c0) on tqpair=0x1ac5ec0 00:21:58.427 [2024-07-15 22:38:22.367388] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:21:58.427 [2024-07-15 22:38:22.367392] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:21:58.427 [2024-07-15 22:38:22.367400] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:58.427 [2024-07-15 22:38:22.367403] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:58.427 [2024-07-15 22:38:22.367406] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ac5ec0) 00:21:58.427 [2024-07-15 22:38:22.367412] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.427 [2024-07-15 22:38:22.367421] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b492c0, cid 3, qid 0 00:21:58.427 [2024-07-15 22:38:22.367500] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:58.427 [2024-07-15 22:38:22.367506] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:58.427 [2024-07-15 22:38:22.367508] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:58.427 [2024-07-15 22:38:22.367512] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b492c0) on tqpair=0x1ac5ec0 00:21:58.427 [2024-07-15 22:38:22.367520] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:58.427 [2024-07-15 22:38:22.367524] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:58.427 [2024-07-15 22:38:22.367527] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ac5ec0) 00:21:58.427 [2024-07-15 22:38:22.367532] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.427 [2024-07-15 22:38:22.367542] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b492c0, cid 3, qid 0 00:21:58.427 [2024-07-15 22:38:22.367618] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:58.427 [2024-07-15 22:38:22.367624] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:58.427 [2024-07-15 22:38:22.367627] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:58.427 [2024-07-15 22:38:22.367630] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b492c0) on tqpair=0x1ac5ec0 00:21:58.427 [2024-07-15 22:38:22.367638] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:58.427 [2024-07-15 22:38:22.367641] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:58.427 [2024-07-15 22:38:22.367644] 
nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ac5ec0) 00:21:58.427 [2024-07-15 22:38:22.367650] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.427 [2024-07-15 22:38:22.367659] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b492c0, cid 3, qid 0 00:21:58.427 [2024-07-15 22:38:22.367732] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:58.427 [2024-07-15 22:38:22.367738] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:58.427 [2024-07-15 22:38:22.367741] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:58.427 [2024-07-15 22:38:22.367744] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b492c0) on tqpair=0x1ac5ec0 00:21:58.427 [2024-07-15 22:38:22.367752] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:58.427 [2024-07-15 22:38:22.367758] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:58.427 [2024-07-15 22:38:22.367761] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ac5ec0) 00:21:58.427 [2024-07-15 22:38:22.367767] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.427 [2024-07-15 22:38:22.367776] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b492c0, cid 3, qid 0 00:21:58.427 [2024-07-15 22:38:22.367860] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:58.427 [2024-07-15 22:38:22.367866] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:58.427 [2024-07-15 22:38:22.367869] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:58.427 [2024-07-15 22:38:22.367872] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b492c0) on tqpair=0x1ac5ec0 00:21:58.427 [2024-07-15 22:38:22.367880] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:58.427 [2024-07-15 22:38:22.367884] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:58.427 [2024-07-15 22:38:22.367887] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ac5ec0) 00:21:58.428 [2024-07-15 22:38:22.367892] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.428 [2024-07-15 22:38:22.367901] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b492c0, cid 3, qid 0 00:21:58.428 [2024-07-15 22:38:22.367983] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:58.428 [2024-07-15 22:38:22.367988] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:58.428 [2024-07-15 22:38:22.367991] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:58.428 [2024-07-15 22:38:22.367994] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b492c0) on tqpair=0x1ac5ec0 00:21:58.428 [2024-07-15 22:38:22.368002] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:58.428 [2024-07-15 22:38:22.368006] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:58.428 [2024-07-15 22:38:22.368009] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ac5ec0) 00:21:58.428 [2024-07-15 22:38:22.368014] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC 
PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.428 [2024-07-15 22:38:22.368024] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b492c0, cid 3, qid 0 00:21:58.428 [2024-07-15 22:38:22.368107] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:58.428 [2024-07-15 22:38:22.368112] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:58.428 [2024-07-15 22:38:22.368115] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:58.428 [2024-07-15 22:38:22.368119] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b492c0) on tqpair=0x1ac5ec0 00:21:58.428 [2024-07-15 22:38:22.368127] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:58.428 [2024-07-15 22:38:22.368130] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:58.428 [2024-07-15 22:38:22.368133] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ac5ec0) 00:21:58.428 [2024-07-15 22:38:22.368139] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.428 [2024-07-15 22:38:22.368148] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b492c0, cid 3, qid 0 00:21:58.428 [2024-07-15 22:38:22.368239] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:58.428 [2024-07-15 22:38:22.368245] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:58.428 [2024-07-15 22:38:22.368248] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:58.428 [2024-07-15 22:38:22.368251] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b492c0) on tqpair=0x1ac5ec0 00:21:58.428 [2024-07-15 22:38:22.368259] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:58.428 [2024-07-15 22:38:22.368263] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:58.428 [2024-07-15 22:38:22.368268] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ac5ec0) 00:21:58.428 [2024-07-15 22:38:22.368274] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.428 [2024-07-15 22:38:22.368284] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b492c0, cid 3, qid 0 00:21:58.428 [2024-07-15 22:38:22.368364] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:58.428 [2024-07-15 22:38:22.368369] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:58.428 [2024-07-15 22:38:22.368372] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:58.428 [2024-07-15 22:38:22.368376] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b492c0) on tqpair=0x1ac5ec0 00:21:58.428 [2024-07-15 22:38:22.368384] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:58.428 [2024-07-15 22:38:22.368387] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:58.428 [2024-07-15 22:38:22.368390] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ac5ec0) 00:21:58.428 [2024-07-15 22:38:22.368396] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.428 [2024-07-15 22:38:22.368405] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b492c0, cid 3, qid 0 00:21:58.428 
[2024-07-15 22:38:22.368486] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:58.428 [2024-07-15 22:38:22.368492] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:58.428 [2024-07-15 22:38:22.368495] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:58.428 [2024-07-15 22:38:22.368498] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b492c0) on tqpair=0x1ac5ec0 00:21:58.428 [2024-07-15 22:38:22.368506] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:58.428 [2024-07-15 22:38:22.368509] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:58.428 [2024-07-15 22:38:22.368513] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ac5ec0) 00:21:58.428 [2024-07-15 22:38:22.368518] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.428 [2024-07-15 22:38:22.368527] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b492c0, cid 3, qid 0 00:21:58.428 [2024-07-15 22:38:22.368608] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:58.428 [2024-07-15 22:38:22.368614] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:58.428 [2024-07-15 22:38:22.368617] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:58.428 [2024-07-15 22:38:22.368620] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b492c0) on tqpair=0x1ac5ec0 00:21:58.428 [2024-07-15 22:38:22.368628] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:58.428 [2024-07-15 22:38:22.368632] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:58.428 [2024-07-15 22:38:22.368635] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ac5ec0) 00:21:58.428 [2024-07-15 22:38:22.368640] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.428 [2024-07-15 22:38:22.368649] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b492c0, cid 3, qid 0 00:21:58.428 [2024-07-15 22:38:22.368729] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:58.428 [2024-07-15 22:38:22.368734] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:58.428 [2024-07-15 22:38:22.368737] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:58.428 [2024-07-15 22:38:22.368740] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b492c0) on tqpair=0x1ac5ec0 00:21:58.428 [2024-07-15 22:38:22.368749] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:58.428 [2024-07-15 22:38:22.368752] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:58.428 [2024-07-15 22:38:22.368755] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ac5ec0) 00:21:58.428 [2024-07-15 22:38:22.368764] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.428 [2024-07-15 22:38:22.368773] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b492c0, cid 3, qid 0 00:21:58.428 [2024-07-15 22:38:22.368852] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:58.428 [2024-07-15 22:38:22.368858] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 
00:21:58.428 [2024-07-15 22:38:22.368861] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:58.428 [2024-07-15 22:38:22.368864] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b492c0) on tqpair=0x1ac5ec0 00:21:58.428 [2024-07-15 22:38:22.368872] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:58.428 [2024-07-15 22:38:22.368875] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:58.428 [2024-07-15 22:38:22.368878] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ac5ec0) 00:21:58.429 [2024-07-15 22:38:22.368884] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.429 [2024-07-15 22:38:22.368894] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b492c0, cid 3, qid 0 00:21:58.429 [2024-07-15 22:38:22.368980] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:58.429 [2024-07-15 22:38:22.368986] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:58.429 [2024-07-15 22:38:22.368989] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:58.429 [2024-07-15 22:38:22.368992] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b492c0) on tqpair=0x1ac5ec0 00:21:58.429 [2024-07-15 22:38:22.369000] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:58.429 [2024-07-15 22:38:22.369003] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:58.429 [2024-07-15 22:38:22.369006] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ac5ec0) 00:21:58.429 [2024-07-15 22:38:22.369012] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.429 [2024-07-15 22:38:22.369021] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b492c0, cid 3, qid 0 00:21:58.429 [2024-07-15 22:38:22.369100] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:58.429 [2024-07-15 22:38:22.369106] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:58.429 [2024-07-15 22:38:22.369108] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:58.429 [2024-07-15 22:38:22.369112] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b492c0) on tqpair=0x1ac5ec0 00:21:58.429 [2024-07-15 22:38:22.369120] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:58.429 [2024-07-15 22:38:22.369123] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:58.429 [2024-07-15 22:38:22.369126] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ac5ec0) 00:21:58.429 [2024-07-15 22:38:22.369132] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.429 [2024-07-15 22:38:22.369140] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b492c0, cid 3, qid 0 00:21:58.429 [2024-07-15 22:38:22.369214] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:58.429 [2024-07-15 22:38:22.369219] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:58.429 [2024-07-15 22:38:22.369222] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:58.429 [2024-07-15 22:38:22.369232] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete 
tcp_req(0x1b492c0) on tqpair=0x1ac5ec0 00:21:58.429 [2024-07-15 22:38:22.369240] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:58.429 [2024-07-15 22:38:22.369243] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:58.429 [2024-07-15 22:38:22.369247] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ac5ec0) 00:21:58.429 [2024-07-15 22:38:22.369252] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.429 [2024-07-15 22:38:22.369263] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b492c0, cid 3, qid 0 00:21:58.429 [2024-07-15 22:38:22.369344] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:58.429 [2024-07-15 22:38:22.369350] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:58.429 [2024-07-15 22:38:22.369353] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:58.429 [2024-07-15 22:38:22.369356] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b492c0) on tqpair=0x1ac5ec0 00:21:58.429 [2024-07-15 22:38:22.369364] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:58.429 [2024-07-15 22:38:22.369367] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:58.429 [2024-07-15 22:38:22.369370] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ac5ec0) 00:21:58.429 [2024-07-15 22:38:22.369376] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.429 [2024-07-15 22:38:22.369385] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b492c0, cid 3, qid 0 00:21:58.429 [2024-07-15 22:38:22.369463] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:58.429 [2024-07-15 22:38:22.369468] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:58.429 [2024-07-15 22:38:22.369471] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:58.429 [2024-07-15 22:38:22.369475] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b492c0) on tqpair=0x1ac5ec0 00:21:58.429 [2024-07-15 22:38:22.369483] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:58.429 [2024-07-15 22:38:22.369486] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:58.429 [2024-07-15 22:38:22.369489] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ac5ec0) 00:21:58.429 [2024-07-15 22:38:22.369495] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.429 [2024-07-15 22:38:22.369504] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b492c0, cid 3, qid 0 00:21:58.429 [2024-07-15 22:38:22.369582] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:58.429 [2024-07-15 22:38:22.369587] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:58.429 [2024-07-15 22:38:22.369590] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:58.429 [2024-07-15 22:38:22.369593] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b492c0) on tqpair=0x1ac5ec0 00:21:58.429 [2024-07-15 22:38:22.369601] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:58.429 [2024-07-15 22:38:22.369605] nvme_tcp.c: 
967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:58.429 [2024-07-15 22:38:22.369608] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ac5ec0) 00:21:58.429 [2024-07-15 22:38:22.369614] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.429 [2024-07-15 22:38:22.369623] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b492c0, cid 3, qid 0 00:21:58.429 [2024-07-15 22:38:22.369702] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:58.429 [2024-07-15 22:38:22.369708] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:58.429 [2024-07-15 22:38:22.369711] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:58.429 [2024-07-15 22:38:22.369714] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b492c0) on tqpair=0x1ac5ec0 00:21:58.429 [2024-07-15 22:38:22.369722] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:58.429 [2024-07-15 22:38:22.369725] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:58.429 [2024-07-15 22:38:22.369728] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ac5ec0) 00:21:58.429 [2024-07-15 22:38:22.369734] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.429 [2024-07-15 22:38:22.369744] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b492c0, cid 3, qid 0 00:21:58.429 [2024-07-15 22:38:22.369824] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:58.429 [2024-07-15 22:38:22.369829] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:58.429 [2024-07-15 22:38:22.369832] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:58.429 [2024-07-15 22:38:22.369836] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b492c0) on tqpair=0x1ac5ec0 00:21:58.429 [2024-07-15 22:38:22.369843] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:58.429 [2024-07-15 22:38:22.369847] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:58.429 [2024-07-15 22:38:22.369850] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ac5ec0) 00:21:58.429 [2024-07-15 22:38:22.369856] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.429 [2024-07-15 22:38:22.369865] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b492c0, cid 3, qid 0 00:21:58.429 [2024-07-15 22:38:22.369940] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:58.429 [2024-07-15 22:38:22.369946] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:58.429 [2024-07-15 22:38:22.369949] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:58.429 [2024-07-15 22:38:22.369952] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b492c0) on tqpair=0x1ac5ec0 00:21:58.429 [2024-07-15 22:38:22.369960] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:58.429 [2024-07-15 22:38:22.369963] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:58.429 [2024-07-15 22:38:22.369966] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ac5ec0) 00:21:58.429 
[2024-07-15 22:38:22.369972] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.429 [2024-07-15 22:38:22.369981] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b492c0, cid 3, qid 0 00:21:58.429 [2024-07-15 22:38:22.370055] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:58.429 [2024-07-15 22:38:22.370061] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:58.429 [2024-07-15 22:38:22.370063] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:58.429 [2024-07-15 22:38:22.370066] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b492c0) on tqpair=0x1ac5ec0 00:21:58.429 [2024-07-15 22:38:22.370074] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:58.429 [2024-07-15 22:38:22.370078] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:58.429 [2024-07-15 22:38:22.370081] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ac5ec0) 00:21:58.429 [2024-07-15 22:38:22.370087] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.429 [2024-07-15 22:38:22.370096] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b492c0, cid 3, qid 0 00:21:58.429 [2024-07-15 22:38:22.370174] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:58.429 [2024-07-15 22:38:22.370180] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:58.429 [2024-07-15 22:38:22.370183] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:58.429 [2024-07-15 22:38:22.370186] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b492c0) on tqpair=0x1ac5ec0 00:21:58.429 [2024-07-15 22:38:22.370194] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:58.430 [2024-07-15 22:38:22.370197] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:58.430 [2024-07-15 22:38:22.370200] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ac5ec0) 00:21:58.430 [2024-07-15 22:38:22.370206] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.430 [2024-07-15 22:38:22.370215] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b492c0, cid 3, qid 0 00:21:58.430 [2024-07-15 22:38:22.374236] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:58.430 [2024-07-15 22:38:22.374245] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:58.430 [2024-07-15 22:38:22.374248] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:58.430 [2024-07-15 22:38:22.374252] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b492c0) on tqpair=0x1ac5ec0 00:21:58.430 [2024-07-15 22:38:22.374261] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:58.430 [2024-07-15 22:38:22.374264] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:58.430 [2024-07-15 22:38:22.374267] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ac5ec0) 00:21:58.430 [2024-07-15 22:38:22.374274] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.430 [2024-07-15 22:38:22.374285] 
nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b492c0, cid 3, qid 0 00:21:58.430 [2024-07-15 22:38:22.374393] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:58.430 [2024-07-15 22:38:22.374398] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:58.430 [2024-07-15 22:38:22.374401] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:58.430 [2024-07-15 22:38:22.374404] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b492c0) on tqpair=0x1ac5ec0 00:21:58.430 [2024-07-15 22:38:22.374412] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 6 milliseconds 00:21:58.430 00:21:58.430 22:38:22 nvmf_tcp.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:21:58.692 [2024-07-15 22:38:22.411718] Starting SPDK v24.09-pre git sha1 f8598a71f / DPDK 24.03.0 initialization... 00:21:58.692 [2024-07-15 22:38:22.411750] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82470 ] 00:21:58.692 EAL: No free 2048 kB hugepages reported on node 1 00:21:58.692 [2024-07-15 22:38:22.442582] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:21:58.693 [2024-07-15 22:38:22.442628] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:21:58.693 [2024-07-15 22:38:22.442633] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:21:58.693 [2024-07-15 22:38:22.442645] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:21:58.693 [2024-07-15 22:38:22.442650] sock.c: 337:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:21:58.693 [2024-07-15 22:38:22.443007] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:21:58.693 [2024-07-15 22:38:22.443029] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x6c8ec0 0 00:21:58.693 [2024-07-15 22:38:22.457238] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:21:58.693 [2024-07-15 22:38:22.457248] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:21:58.693 [2024-07-15 22:38:22.457251] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:21:58.693 [2024-07-15 22:38:22.457254] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:21:58.693 [2024-07-15 22:38:22.457280] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:58.693 [2024-07-15 22:38:22.457285] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:58.693 [2024-07-15 22:38:22.457288] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x6c8ec0) 00:21:58.693 [2024-07-15 22:38:22.457300] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:21:58.693 [2024-07-15 22:38:22.457315] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x74be40, cid 0, qid 0 00:21:58.693 
[2024-07-15 22:38:22.465235] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:58.693 [2024-07-15 22:38:22.465244] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:58.693 [2024-07-15 22:38:22.465247] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:58.693 [2024-07-15 22:38:22.465251] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x74be40) on tqpair=0x6c8ec0 00:21:58.693 [2024-07-15 22:38:22.465259] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:21:58.693 [2024-07-15 22:38:22.465265] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:21:58.693 [2024-07-15 22:38:22.465269] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:21:58.693 [2024-07-15 22:38:22.465280] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:58.693 [2024-07-15 22:38:22.465284] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:58.693 [2024-07-15 22:38:22.465287] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x6c8ec0) 00:21:58.693 [2024-07-15 22:38:22.465294] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.693 [2024-07-15 22:38:22.465306] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x74be40, cid 0, qid 0 00:21:58.693 [2024-07-15 22:38:22.465494] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:58.693 [2024-07-15 22:38:22.465500] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:58.693 [2024-07-15 22:38:22.465504] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:58.693 [2024-07-15 22:38:22.465507] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x74be40) on tqpair=0x6c8ec0 00:21:58.693 [2024-07-15 22:38:22.465511] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:21:58.693 [2024-07-15 22:38:22.465518] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:21:58.693 [2024-07-15 22:38:22.465523] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:58.693 [2024-07-15 22:38:22.465527] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:58.693 [2024-07-15 22:38:22.465530] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x6c8ec0) 00:21:58.693 [2024-07-15 22:38:22.465536] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.693 [2024-07-15 22:38:22.465547] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x74be40, cid 0, qid 0 00:21:58.693 [2024-07-15 22:38:22.465640] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:58.693 [2024-07-15 22:38:22.465646] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:58.693 [2024-07-15 22:38:22.465649] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:58.693 [2024-07-15 22:38:22.465652] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x74be40) on tqpair=0x6c8ec0 00:21:58.693 [2024-07-15 22:38:22.465657] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:21:58.693 [2024-07-15 22:38:22.465663] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:21:58.693 [2024-07-15 22:38:22.465669] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:58.693 [2024-07-15 22:38:22.465672] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:58.693 [2024-07-15 22:38:22.465675] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x6c8ec0) 00:21:58.693 [2024-07-15 22:38:22.465681] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.693 [2024-07-15 22:38:22.465694] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x74be40, cid 0, qid 0 00:21:58.693 [2024-07-15 22:38:22.465771] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:58.693 [2024-07-15 22:38:22.465777] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:58.693 [2024-07-15 22:38:22.465780] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:58.693 [2024-07-15 22:38:22.465783] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x74be40) on tqpair=0x6c8ec0 00:21:58.693 [2024-07-15 22:38:22.465788] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:21:58.693 [2024-07-15 22:38:22.465796] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:58.693 [2024-07-15 22:38:22.465799] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:58.693 [2024-07-15 22:38:22.465802] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x6c8ec0) 00:21:58.693 [2024-07-15 22:38:22.465808] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.693 [2024-07-15 22:38:22.465817] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x74be40, cid 0, qid 0 00:21:58.693 [2024-07-15 22:38:22.465943] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:58.693 [2024-07-15 22:38:22.465949] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:58.693 [2024-07-15 22:38:22.465952] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:58.693 [2024-07-15 22:38:22.465955] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x74be40) on tqpair=0x6c8ec0 00:21:58.693 [2024-07-15 22:38:22.465959] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:21:58.693 [2024-07-15 22:38:22.465963] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:21:58.693 [2024-07-15 22:38:22.465969] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:21:58.693 [2024-07-15 22:38:22.466074] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:21:58.693 [2024-07-15 22:38:22.466077] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:21:58.693 
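The "check en", "disable and wait for CSTS.RDY = 0", and "enable controller by writing CC.EN = 1" states above are the standard NVMe controller-enable handshake; over fabrics each register access becomes a FABRIC PROPERTY GET/SET capsule, which is why every state transition in the log is bracketed by capsule sends. A self-contained sketch of that handshake against simulated registers (illustrative only, not SPDK's internal state machine; the toy controller flips CSTS.RDY synchronously so the polls terminate):

/* cc_en_handshake.c - illustrative NVMe controller-enable sequence:
 * clear CC.EN, wait CSTS.RDY==0, set CC.EN, wait CSTS.RDY==1. */
#include <stdint.h>
#include <stdio.h>

#define NVME_REG_CC    0x14    /* Controller Configuration */
#define NVME_REG_CSTS  0x1c    /* Controller Status */
#define NVME_CC_EN     (1u << 0)
#define NVME_CSTS_RDY  (1u << 0)

static uint32_t regs[0x20];    /* simulated register file, indexed by offset */

static uint32_t prop_get(uint32_t off) { return regs[off]; }

static void prop_set(uint32_t off, uint32_t val)
{
    regs[off] = val;
    /* Toy controller: RDY follows EN immediately. */
    if (off == NVME_REG_CC) {
        regs[NVME_REG_CSTS] = (val & NVME_CC_EN) ? NVME_CSTS_RDY : 0;
    }
}

int main(void)
{
    uint32_t cc = prop_get(NVME_REG_CC);

    if (cc & NVME_CC_EN) {                     /* "check en" */
        prop_set(NVME_REG_CC, cc & ~NVME_CC_EN);
        while (prop_get(NVME_REG_CSTS) & NVME_CSTS_RDY)
            ;                                  /* "wait for CSTS.RDY = 0" */
    }
    prop_set(NVME_REG_CC, cc | NVME_CC_EN);    /* "Setting CC.EN = 1" */
    while (!(prop_get(NVME_REG_CSTS) & NVME_CSTS_RDY))
        ;                                      /* "wait for CSTS.RDY = 1" */

    puts("CC.EN = 1 && CSTS.RDY = 1 - controller is ready");
    return 0;
}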
[2024-07-15 22:38:22.466084] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:58.693 [2024-07-15 22:38:22.466087] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:58.693 [2024-07-15 22:38:22.466090] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x6c8ec0) 00:21:58.693 [2024-07-15 22:38:22.466095] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.693 [2024-07-15 22:38:22.466105] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x74be40, cid 0, qid 0 00:21:58.693 [2024-07-15 22:38:22.466185] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:58.693 [2024-07-15 22:38:22.466191] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:58.693 [2024-07-15 22:38:22.466194] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:58.693 [2024-07-15 22:38:22.466197] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x74be40) on tqpair=0x6c8ec0 00:21:58.693 [2024-07-15 22:38:22.466201] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:21:58.693 [2024-07-15 22:38:22.466209] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:58.693 [2024-07-15 22:38:22.466212] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:58.693 [2024-07-15 22:38:22.466215] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x6c8ec0) 00:21:58.693 [2024-07-15 22:38:22.466221] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.693 [2024-07-15 22:38:22.466241] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x74be40, cid 0, qid 0 00:21:58.693 [2024-07-15 22:38:22.466337] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:58.693 [2024-07-15 22:38:22.466342] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:58.693 [2024-07-15 22:38:22.466345] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:58.693 [2024-07-15 22:38:22.466349] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x74be40) on tqpair=0x6c8ec0 00:21:58.693 [2024-07-15 22:38:22.466352] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:21:58.693 [2024-07-15 22:38:22.466356] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:21:58.693 [2024-07-15 22:38:22.466363] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:21:58.693 [2024-07-15 22:38:22.466374] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:21:58.693 [2024-07-15 22:38:22.466383] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:58.693 [2024-07-15 22:38:22.466386] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x6c8ec0) 00:21:58.693 [2024-07-15 22:38:22.466392] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:21:58.693 [2024-07-15 22:38:22.466402] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x74be40, cid 0, qid 0 00:21:58.693 [2024-07-15 22:38:22.466524] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:58.693 [2024-07-15 22:38:22.466530] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:58.693 [2024-07-15 22:38:22.466533] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:58.693 [2024-07-15 22:38:22.466536] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x6c8ec0): datao=0, datal=4096, cccid=0 00:21:58.693 [2024-07-15 22:38:22.466540] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x74be40) on tqpair(0x6c8ec0): expected_datao=0, payload_size=4096 00:21:58.693 [2024-07-15 22:38:22.466543] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:58.693 [2024-07-15 22:38:22.466575] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:58.693 [2024-07-15 22:38:22.466579] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:58.693 [2024-07-15 22:38:22.466639] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:58.693 [2024-07-15 22:38:22.466644] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:58.693 [2024-07-15 22:38:22.466647] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:58.693 [2024-07-15 22:38:22.466651] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x74be40) on tqpair=0x6c8ec0 00:21:58.693 [2024-07-15 22:38:22.466657] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:21:58.694 [2024-07-15 22:38:22.466664] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:21:58.694 [2024-07-15 22:38:22.466668] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:21:58.694 [2024-07-15 22:38:22.466671] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:21:58.694 [2024-07-15 22:38:22.466675] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:21:58.694 [2024-07-15 22:38:22.466679] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:21:58.694 [2024-07-15 22:38:22.466687] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:21:58.694 [2024-07-15 22:38:22.466694] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:58.694 [2024-07-15 22:38:22.466698] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:58.694 [2024-07-15 22:38:22.466701] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x6c8ec0) 00:21:58.694 [2024-07-15 22:38:22.466707] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:58.694 [2024-07-15 22:38:22.466717] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x74be40, cid 0, qid 0 00:21:58.694 [2024-07-15 22:38:22.466795] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:58.694 [2024-07-15 22:38:22.466800] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 
00:21:58.694 [2024-07-15 22:38:22.466803] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:58.694 [2024-07-15 22:38:22.466806] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x74be40) on tqpair=0x6c8ec0 00:21:58.694 [2024-07-15 22:38:22.466812] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:58.694 [2024-07-15 22:38:22.466815] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:58.694 [2024-07-15 22:38:22.466818] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x6c8ec0) 00:21:58.694 [2024-07-15 22:38:22.466823] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:58.694 [2024-07-15 22:38:22.466828] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:58.694 [2024-07-15 22:38:22.466832] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:58.694 [2024-07-15 22:38:22.466834] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x6c8ec0) 00:21:58.694 [2024-07-15 22:38:22.466839] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:58.694 [2024-07-15 22:38:22.466845] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:58.694 [2024-07-15 22:38:22.466848] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:58.694 [2024-07-15 22:38:22.466851] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x6c8ec0) 00:21:58.694 [2024-07-15 22:38:22.466856] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:58.694 [2024-07-15 22:38:22.466861] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:58.694 [2024-07-15 22:38:22.466864] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:58.694 [2024-07-15 22:38:22.466867] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x6c8ec0) 00:21:58.694 [2024-07-15 22:38:22.466872] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:58.694 [2024-07-15 22:38:22.466876] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:21:58.694 [2024-07-15 22:38:22.466886] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:21:58.694 [2024-07-15 22:38:22.466892] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:58.694 [2024-07-15 22:38:22.466895] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x6c8ec0) 00:21:58.694 [2024-07-15 22:38:22.466901] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.694 [2024-07-15 22:38:22.466912] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x74be40, cid 0, qid 0 00:21:58.694 [2024-07-15 22:38:22.466916] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x74bfc0, cid 1, qid 0 00:21:58.694 [2024-07-15 22:38:22.466921] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x74c140, cid 2, qid 0 00:21:58.694 
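After identify, the host queues four ASYNC EVENT REQUESTs on cid 0-3 (matching the Async Event Request Limit of 4 reported in the identify output further down) and negotiates the keep-alive timer; the "Sending keep alive every 5000000 us" that follows is the host sending keep alives at half its 10 s timeout. On the application side both behaviors hang off the controller options and an AER callback. A short sketch, assuming a ctrlr handle like the one from the earlier connect sketch:

/* aer_keepalive.c fragment - hedged sketch of the host-side knobs behind
 * the AER and keep-alive lines above, using SPDK's public API. */
#include <stdio.h>
#include "spdk/nvme.h"

static void aer_cb(void *arg, const struct spdk_nvme_cpl *cpl)
{
    /* Fires when one of the queued ASYNC EVENT REQUESTs completes. */
    printf("AER completed: cdw0=0x%x\n", cpl->cdw0);
}

void configure_ctrlr_opts(struct spdk_nvme_ctrlr_opts *opts)
{
    spdk_nvme_ctrlr_get_default_ctrlr_opts(opts, sizeof(*opts));
    /* 10 s timeout -> SPDK sends a keep alive roughly every 5 s,
     * matching "Sending keep alive every 5000000 us" in the log. */
    opts->keep_alive_timeout_ms = 10000;
}

void register_aer(struct spdk_nvme_ctrlr *ctrlr)
{
    spdk_nvme_ctrlr_register_aer_callback(ctrlr, aer_cb, NULL);
    /* Keep alives and AER completions only make progress while the
     * admin queue is polled: */
    spdk_nvme_ctrlr_process_admin_completions(ctrlr);
}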
[2024-07-15 22:38:22.466924] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x74c2c0, cid 3, qid 0 00:21:58.694 [2024-07-15 22:38:22.466930] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x74c440, cid 4, qid 0 00:21:58.694 [2024-07-15 22:38:22.467062] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:58.694 [2024-07-15 22:38:22.467068] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:58.694 [2024-07-15 22:38:22.467071] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:58.694 [2024-07-15 22:38:22.467074] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x74c440) on tqpair=0x6c8ec0 00:21:58.694 [2024-07-15 22:38:22.467078] nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:21:58.694 [2024-07-15 22:38:22.467083] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:21:58.694 [2024-07-15 22:38:22.467090] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:21:58.694 [2024-07-15 22:38:22.467096] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:21:58.694 [2024-07-15 22:38:22.467101] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:58.694 [2024-07-15 22:38:22.467104] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:58.694 [2024-07-15 22:38:22.467108] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x6c8ec0) 00:21:58.694 [2024-07-15 22:38:22.467113] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:58.694 [2024-07-15 22:38:22.467122] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x74c440, cid 4, qid 0 00:21:58.694 [2024-07-15 22:38:22.467212] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:58.694 [2024-07-15 22:38:22.467218] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:58.694 [2024-07-15 22:38:22.467221] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:58.694 [2024-07-15 22:38:22.467230] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x74c440) on tqpair=0x6c8ec0 00:21:58.694 [2024-07-15 22:38:22.467282] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:21:58.694 [2024-07-15 22:38:22.467291] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:21:58.694 [2024-07-15 22:38:22.467298] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:58.694 [2024-07-15 22:38:22.467301] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x6c8ec0) 00:21:58.694 [2024-07-15 22:38:22.467307] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.694 [2024-07-15 22:38:22.467317] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x74c440, cid 4, qid 0 00:21:58.694 [2024-07-15 22:38:22.467406] 
nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:58.694 [2024-07-15 22:38:22.467412] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:58.694 [2024-07-15 22:38:22.467415] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:58.694 [2024-07-15 22:38:22.467419] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x6c8ec0): datao=0, datal=4096, cccid=4 00:21:58.694 [2024-07-15 22:38:22.467422] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x74c440) on tqpair(0x6c8ec0): expected_datao=0, payload_size=4096 00:21:58.694 [2024-07-15 22:38:22.467426] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:58.694 [2024-07-15 22:38:22.467454] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:58.694 [2024-07-15 22:38:22.467458] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:58.694 [2024-07-15 22:38:22.512234] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:58.694 [2024-07-15 22:38:22.512248] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:58.694 [2024-07-15 22:38:22.512254] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:58.694 [2024-07-15 22:38:22.512258] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x74c440) on tqpair=0x6c8ec0 00:21:58.694 [2024-07-15 22:38:22.512267] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:21:58.694 [2024-07-15 22:38:22.512276] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:21:58.694 [2024-07-15 22:38:22.512286] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:21:58.694 [2024-07-15 22:38:22.512293] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:58.694 [2024-07-15 22:38:22.512296] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x6c8ec0) 00:21:58.694 [2024-07-15 22:38:22.512303] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.694 [2024-07-15 22:38:22.512316] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x74c440, cid 4, qid 0 00:21:58.694 [2024-07-15 22:38:22.512495] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:58.694 [2024-07-15 22:38:22.512501] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:58.694 [2024-07-15 22:38:22.512504] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:58.694 [2024-07-15 22:38:22.512507] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x6c8ec0): datao=0, datal=4096, cccid=4 00:21:58.694 [2024-07-15 22:38:22.512512] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x74c440) on tqpair(0x6c8ec0): expected_datao=0, payload_size=4096 00:21:58.694 [2024-07-15 22:38:22.512515] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:58.694 [2024-07-15 22:38:22.512544] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:58.694 [2024-07-15 22:38:22.512548] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:58.694 [2024-07-15 22:38:22.553414] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:58.694 [2024-07-15 
22:38:22.553424] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:58.694 [2024-07-15 22:38:22.553427] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:58.694 [2024-07-15 22:38:22.553430] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x74c440) on tqpair=0x6c8ec0 00:21:58.694 [2024-07-15 22:38:22.553443] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:21:58.694 [2024-07-15 22:38:22.553454] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:21:58.694 [2024-07-15 22:38:22.553461] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:58.694 [2024-07-15 22:38:22.553465] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x6c8ec0) 00:21:58.694 [2024-07-15 22:38:22.553472] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.694 [2024-07-15 22:38:22.553484] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x74c440, cid 4, qid 0 00:21:58.694 [2024-07-15 22:38:22.553574] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:58.694 [2024-07-15 22:38:22.553580] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:58.694 [2024-07-15 22:38:22.553583] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:58.694 [2024-07-15 22:38:22.553586] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x6c8ec0): datao=0, datal=4096, cccid=4 00:21:58.694 [2024-07-15 22:38:22.553590] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x74c440) on tqpair(0x6c8ec0): expected_datao=0, payload_size=4096 00:21:58.695 [2024-07-15 22:38:22.553594] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:58.695 [2024-07-15 22:38:22.553667] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:58.695 [2024-07-15 22:38:22.553671] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:58.695 [2024-07-15 22:38:22.594384] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:58.695 [2024-07-15 22:38:22.594395] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:58.695 [2024-07-15 22:38:22.594398] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:58.695 [2024-07-15 22:38:22.594401] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x74c440) on tqpair=0x6c8ec0 00:21:58.695 [2024-07-15 22:38:22.594410] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:21:58.695 [2024-07-15 22:38:22.594418] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:21:58.695 [2024-07-15 22:38:22.594426] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:21:58.695 [2024-07-15 22:38:22.594431] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host behavior support feature (timeout 30000 ms) 00:21:58.695 [2024-07-15 22:38:22.594436] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:21:58.695 [2024-07-15 22:38:22.594440] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:21:58.695 [2024-07-15 22:38:22.594445] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:21:58.695 [2024-07-15 22:38:22.594449] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:21:58.695 [2024-07-15 22:38:22.594453] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:21:58.695 [2024-07-15 22:38:22.594467] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:58.695 [2024-07-15 22:38:22.594470] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x6c8ec0) 00:21:58.695 [2024-07-15 22:38:22.594478] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.695 [2024-07-15 22:38:22.594484] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:58.695 [2024-07-15 22:38:22.594487] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:58.695 [2024-07-15 22:38:22.594490] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x6c8ec0) 00:21:58.695 [2024-07-15 22:38:22.594495] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:21:58.695 [2024-07-15 22:38:22.594509] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x74c440, cid 4, qid 0 00:21:58.695 [2024-07-15 22:38:22.594514] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x74c5c0, cid 5, qid 0 00:21:58.695 [2024-07-15 22:38:22.594649] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:58.695 [2024-07-15 22:38:22.594655] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:58.695 [2024-07-15 22:38:22.594658] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:58.695 [2024-07-15 22:38:22.594661] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x74c440) on tqpair=0x6c8ec0 00:21:58.695 [2024-07-15 22:38:22.594666] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:58.695 [2024-07-15 22:38:22.594671] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:58.695 [2024-07-15 22:38:22.594674] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:58.695 [2024-07-15 22:38:22.594678] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x74c5c0) on tqpair=0x6c8ec0 00:21:58.695 [2024-07-15 22:38:22.594686] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:58.695 [2024-07-15 22:38:22.594692] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x6c8ec0) 00:21:58.695 [2024-07-15 22:38:22.594698] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.695 [2024-07-15 22:38:22.594707] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x74c5c0, cid 5, qid 0 00:21:58.695 [2024-07-15 22:38:22.594790] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: 
pdu type = 5 00:21:58.695 [2024-07-15 22:38:22.594796] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:58.695 [2024-07-15 22:38:22.594799] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:58.695 [2024-07-15 22:38:22.594802] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x74c5c0) on tqpair=0x6c8ec0 00:21:58.695 [2024-07-15 22:38:22.594810] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:58.695 [2024-07-15 22:38:22.594813] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x6c8ec0) 00:21:58.695 [2024-07-15 22:38:22.594819] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.695 [2024-07-15 22:38:22.594828] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x74c5c0, cid 5, qid 0 00:21:58.695 [2024-07-15 22:38:22.594947] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:58.695 [2024-07-15 22:38:22.594952] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:58.695 [2024-07-15 22:38:22.594955] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:58.695 [2024-07-15 22:38:22.594958] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x74c5c0) on tqpair=0x6c8ec0 00:21:58.695 [2024-07-15 22:38:22.594965] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:58.695 [2024-07-15 22:38:22.594969] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x6c8ec0) 00:21:58.695 [2024-07-15 22:38:22.594974] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.695 [2024-07-15 22:38:22.594983] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x74c5c0, cid 5, qid 0 00:21:58.695 [2024-07-15 22:38:22.595098] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:58.695 [2024-07-15 22:38:22.595104] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:58.695 [2024-07-15 22:38:22.595107] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:58.695 [2024-07-15 22:38:22.595110] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x74c5c0) on tqpair=0x6c8ec0 00:21:58.695 [2024-07-15 22:38:22.595123] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:58.695 [2024-07-15 22:38:22.595127] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x6c8ec0) 00:21:58.695 [2024-07-15 22:38:22.595132] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.695 [2024-07-15 22:38:22.595138] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:58.695 [2024-07-15 22:38:22.595142] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x6c8ec0) 00:21:58.695 [2024-07-15 22:38:22.595147] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.695 [2024-07-15 22:38:22.595153] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:58.695 [2024-07-15 22:38:22.595156] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: 
capsule_cmd cid=6 on tqpair(0x6c8ec0) 00:21:58.695 [2024-07-15 22:38:22.595161] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.695 [2024-07-15 22:38:22.595167] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:58.695 [2024-07-15 22:38:22.595170] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x6c8ec0) 00:21:58.695 [2024-07-15 22:38:22.595177] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.695 [2024-07-15 22:38:22.595188] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x74c5c0, cid 5, qid 0 00:21:58.695 [2024-07-15 22:38:22.595192] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x74c440, cid 4, qid 0 00:21:58.695 [2024-07-15 22:38:22.595196] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x74c740, cid 6, qid 0 00:21:58.695 [2024-07-15 22:38:22.595200] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x74c8c0, cid 7, qid 0 00:21:58.695 [2024-07-15 22:38:22.599238] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:58.695 [2024-07-15 22:38:22.599245] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:58.695 [2024-07-15 22:38:22.599248] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:58.695 [2024-07-15 22:38:22.599251] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x6c8ec0): datao=0, datal=8192, cccid=5 00:21:58.695 [2024-07-15 22:38:22.599255] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x74c5c0) on tqpair(0x6c8ec0): expected_datao=0, payload_size=8192 00:21:58.695 [2024-07-15 22:38:22.599259] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:58.695 [2024-07-15 22:38:22.599265] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:58.695 [2024-07-15 22:38:22.599268] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:58.695 [2024-07-15 22:38:22.599273] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:58.695 [2024-07-15 22:38:22.599278] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:58.695 [2024-07-15 22:38:22.599281] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:58.695 [2024-07-15 22:38:22.599284] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x6c8ec0): datao=0, datal=512, cccid=4 00:21:58.695 [2024-07-15 22:38:22.599288] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x74c440) on tqpair(0x6c8ec0): expected_datao=0, payload_size=512 00:21:58.695 [2024-07-15 22:38:22.599291] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:58.695 [2024-07-15 22:38:22.599296] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:58.695 [2024-07-15 22:38:22.599300] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:58.695 [2024-07-15 22:38:22.599305] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:58.695 [2024-07-15 22:38:22.599309] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:58.695 [2024-07-15 22:38:22.599312] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:58.695 [2024-07-15 22:38:22.599315] 
nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x6c8ec0): datao=0, datal=512, cccid=6 00:21:58.695 [2024-07-15 22:38:22.599319] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x74c740) on tqpair(0x6c8ec0): expected_datao=0, payload_size=512 00:21:58.695 [2024-07-15 22:38:22.599323] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:58.695 [2024-07-15 22:38:22.599328] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:58.695 [2024-07-15 22:38:22.599331] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:58.695 [2024-07-15 22:38:22.599336] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:58.695 [2024-07-15 22:38:22.599341] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:58.695 [2024-07-15 22:38:22.599344] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:58.695 [2024-07-15 22:38:22.599347] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x6c8ec0): datao=0, datal=4096, cccid=7 00:21:58.695 [2024-07-15 22:38:22.599351] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x74c8c0) on tqpair(0x6c8ec0): expected_datao=0, payload_size=4096 00:21:58.695 [2024-07-15 22:38:22.599354] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:58.695 [2024-07-15 22:38:22.599360] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:58.695 [2024-07-15 22:38:22.599363] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:58.695 [2024-07-15 22:38:22.599370] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:58.695 [2024-07-15 22:38:22.599375] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:58.695 [2024-07-15 22:38:22.599378] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:58.695 [2024-07-15 22:38:22.599381] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x74c5c0) on tqpair=0x6c8ec0 00:21:58.695 [2024-07-15 22:38:22.599391] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:58.696 [2024-07-15 22:38:22.599396] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:58.696 [2024-07-15 22:38:22.599399] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:58.696 [2024-07-15 22:38:22.599403] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x74c440) on tqpair=0x6c8ec0 00:21:58.696 [2024-07-15 22:38:22.599411] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:58.696 [2024-07-15 22:38:22.599416] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:58.696 [2024-07-15 22:38:22.599419] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:58.696 [2024-07-15 22:38:22.599422] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x74c740) on tqpair=0x6c8ec0 00:21:58.696 [2024-07-15 22:38:22.599428] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:58.696 [2024-07-15 22:38:22.599433] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:58.696 [2024-07-15 22:38:22.599436] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:58.696 [2024-07-15 22:38:22.599439] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x74c8c0) on tqpair=0x6c8ec0 00:21:58.696 ===================================================== 00:21:58.696 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 
00:21:58.696 ===================================================== 00:21:58.696 Controller Capabilities/Features 00:21:58.696 ================================ 00:21:58.696 Vendor ID: 8086 00:21:58.696 Subsystem Vendor ID: 8086 00:21:58.696 Serial Number: SPDK00000000000001 00:21:58.696 Model Number: SPDK bdev Controller 00:21:58.696 Firmware Version: 24.09 00:21:58.696 Recommended Arb Burst: 6 00:21:58.696 IEEE OUI Identifier: e4 d2 5c 00:21:58.696 Multi-path I/O 00:21:58.696 May have multiple subsystem ports: Yes 00:21:58.696 May have multiple controllers: Yes 00:21:58.696 Associated with SR-IOV VF: No 00:21:58.696 Max Data Transfer Size: 131072 00:21:58.696 Max Number of Namespaces: 32 00:21:58.696 Max Number of I/O Queues: 127 00:21:58.696 NVMe Specification Version (VS): 1.3 00:21:58.696 NVMe Specification Version (Identify): 1.3 00:21:58.696 Maximum Queue Entries: 128 00:21:58.696 Contiguous Queues Required: Yes 00:21:58.696 Arbitration Mechanisms Supported 00:21:58.696 Weighted Round Robin: Not Supported 00:21:58.696 Vendor Specific: Not Supported 00:21:58.696 Reset Timeout: 15000 ms 00:21:58.696 Doorbell Stride: 4 bytes 00:21:58.696 NVM Subsystem Reset: Not Supported 00:21:58.696 Command Sets Supported 00:21:58.696 NVM Command Set: Supported 00:21:58.696 Boot Partition: Not Supported 00:21:58.696 Memory Page Size Minimum: 4096 bytes 00:21:58.696 Memory Page Size Maximum: 4096 bytes 00:21:58.696 Persistent Memory Region: Not Supported 00:21:58.696 Optional Asynchronous Events Supported 00:21:58.696 Namespace Attribute Notices: Supported 00:21:58.696 Firmware Activation Notices: Not Supported 00:21:58.696 ANA Change Notices: Not Supported 00:21:58.696 PLE Aggregate Log Change Notices: Not Supported 00:21:58.696 LBA Status Info Alert Notices: Not Supported 00:21:58.696 EGE Aggregate Log Change Notices: Not Supported 00:21:58.696 Normal NVM Subsystem Shutdown event: Not Supported 00:21:58.696 Zone Descriptor Change Notices: Not Supported 00:21:58.696 Discovery Log Change Notices: Not Supported 00:21:58.696 Controller Attributes 00:21:58.696 128-bit Host Identifier: Supported 00:21:58.696 Non-Operational Permissive Mode: Not Supported 00:21:58.696 NVM Sets: Not Supported 00:21:58.696 Read Recovery Levels: Not Supported 00:21:58.696 Endurance Groups: Not Supported 00:21:58.696 Predictable Latency Mode: Not Supported 00:21:58.696 Traffic Based Keep ALive: Not Supported 00:21:58.696 Namespace Granularity: Not Supported 00:21:58.696 SQ Associations: Not Supported 00:21:58.696 UUID List: Not Supported 00:21:58.696 Multi-Domain Subsystem: Not Supported 00:21:58.696 Fixed Capacity Management: Not Supported 00:21:58.696 Variable Capacity Management: Not Supported 00:21:58.696 Delete Endurance Group: Not Supported 00:21:58.696 Delete NVM Set: Not Supported 00:21:58.696 Extended LBA Formats Supported: Not Supported 00:21:58.696 Flexible Data Placement Supported: Not Supported 00:21:58.696 00:21:58.696 Controller Memory Buffer Support 00:21:58.696 ================================ 00:21:58.696 Supported: No 00:21:58.696 00:21:58.696 Persistent Memory Region Support 00:21:58.696 ================================ 00:21:58.696 Supported: No 00:21:58.696 00:21:58.696 Admin Command Set Attributes 00:21:58.696 ============================ 00:21:58.696 Security Send/Receive: Not Supported 00:21:58.696 Format NVM: Not Supported 00:21:58.696 Firmware Activate/Download: Not Supported 00:21:58.696 Namespace Management: Not Supported 00:21:58.696 Device Self-Test: Not Supported 00:21:58.696 Directives: Not 
Supported 00:21:58.696 NVMe-MI: Not Supported 00:21:58.696 Virtualization Management: Not Supported 00:21:58.696 Doorbell Buffer Config: Not Supported 00:21:58.696 Get LBA Status Capability: Not Supported 00:21:58.696 Command & Feature Lockdown Capability: Not Supported 00:21:58.696 Abort Command Limit: 4 00:21:58.696 Async Event Request Limit: 4 00:21:58.696 Number of Firmware Slots: N/A 00:21:58.696 Firmware Slot 1 Read-Only: N/A 00:21:58.696 Firmware Activation Without Reset: N/A 00:21:58.696 Multiple Update Detection Support: N/A 00:21:58.696 Firmware Update Granularity: No Information Provided 00:21:58.696 Per-Namespace SMART Log: No 00:21:58.696 Asymmetric Namespace Access Log Page: Not Supported 00:21:58.696 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:21:58.696 Command Effects Log Page: Supported 00:21:58.696 Get Log Page Extended Data: Supported 00:21:58.696 Telemetry Log Pages: Not Supported 00:21:58.696 Persistent Event Log Pages: Not Supported 00:21:58.696 Supported Log Pages Log Page: May Support 00:21:58.696 Commands Supported & Effects Log Page: Not Supported 00:21:58.696 Feature Identifiers & Effects Log Page:May Support 00:21:58.696 NVMe-MI Commands & Effects Log Page: May Support 00:21:58.696 Data Area 4 for Telemetry Log: Not Supported 00:21:58.696 Error Log Page Entries Supported: 128 00:21:58.696 Keep Alive: Supported 00:21:58.696 Keep Alive Granularity: 10000 ms 00:21:58.696 00:21:58.696 NVM Command Set Attributes 00:21:58.696 ========================== 00:21:58.696 Submission Queue Entry Size 00:21:58.696 Max: 64 00:21:58.696 Min: 64 00:21:58.696 Completion Queue Entry Size 00:21:58.696 Max: 16 00:21:58.696 Min: 16 00:21:58.696 Number of Namespaces: 32 00:21:58.696 Compare Command: Supported 00:21:58.696 Write Uncorrectable Command: Not Supported 00:21:58.696 Dataset Management Command: Supported 00:21:58.696 Write Zeroes Command: Supported 00:21:58.696 Set Features Save Field: Not Supported 00:21:58.696 Reservations: Supported 00:21:58.696 Timestamp: Not Supported 00:21:58.696 Copy: Supported 00:21:58.696 Volatile Write Cache: Present 00:21:58.696 Atomic Write Unit (Normal): 1 00:21:58.696 Atomic Write Unit (PFail): 1 00:21:58.696 Atomic Compare & Write Unit: 1 00:21:58.696 Fused Compare & Write: Supported 00:21:58.696 Scatter-Gather List 00:21:58.696 SGL Command Set: Supported 00:21:58.696 SGL Keyed: Supported 00:21:58.696 SGL Bit Bucket Descriptor: Not Supported 00:21:58.696 SGL Metadata Pointer: Not Supported 00:21:58.696 Oversized SGL: Not Supported 00:21:58.696 SGL Metadata Address: Not Supported 00:21:58.696 SGL Offset: Supported 00:21:58.696 Transport SGL Data Block: Not Supported 00:21:58.696 Replay Protected Memory Block: Not Supported 00:21:58.696 00:21:58.696 Firmware Slot Information 00:21:58.696 ========================= 00:21:58.696 Active slot: 1 00:21:58.696 Slot 1 Firmware Revision: 24.09 00:21:58.696 00:21:58.696 00:21:58.696 Commands Supported and Effects 00:21:58.696 ============================== 00:21:58.696 Admin Commands 00:21:58.696 -------------- 00:21:58.696 Get Log Page (02h): Supported 00:21:58.696 Identify (06h): Supported 00:21:58.696 Abort (08h): Supported 00:21:58.696 Set Features (09h): Supported 00:21:58.696 Get Features (0Ah): Supported 00:21:58.696 Asynchronous Event Request (0Ch): Supported 00:21:58.696 Keep Alive (18h): Supported 00:21:58.696 I/O Commands 00:21:58.696 ------------ 00:21:58.696 Flush (00h): Supported LBA-Change 00:21:58.696 Write (01h): Supported LBA-Change 00:21:58.696 Read (02h): Supported 00:21:58.696 Compare 
(05h): Supported 00:21:58.696 Write Zeroes (08h): Supported LBA-Change 00:21:58.696 Dataset Management (09h): Supported LBA-Change 00:21:58.696 Copy (19h): Supported LBA-Change 00:21:58.696 00:21:58.696 Error Log 00:21:58.696 ========= 00:21:58.696 00:21:58.696 Arbitration 00:21:58.696 =========== 00:21:58.696 Arbitration Burst: 1 00:21:58.696 00:21:58.696 Power Management 00:21:58.696 ================ 00:21:58.696 Number of Power States: 1 00:21:58.696 Current Power State: Power State #0 00:21:58.696 Power State #0: 00:21:58.696 Max Power: 0.00 W 00:21:58.696 Non-Operational State: Operational 00:21:58.696 Entry Latency: Not Reported 00:21:58.696 Exit Latency: Not Reported 00:21:58.696 Relative Read Throughput: 0 00:21:58.696 Relative Read Latency: 0 00:21:58.696 Relative Write Throughput: 0 00:21:58.696 Relative Write Latency: 0 00:21:58.696 Idle Power: Not Reported 00:21:58.696 Active Power: Not Reported 00:21:58.696 Non-Operational Permissive Mode: Not Supported 00:21:58.696 00:21:58.696 Health Information 00:21:58.696 ================== 00:21:58.696 Critical Warnings: 00:21:58.696 Available Spare Space: OK 00:21:58.696 Temperature: OK 00:21:58.696 Device Reliability: OK 00:21:58.696 Read Only: No 00:21:58.697 Volatile Memory Backup: OK 00:21:58.697 Current Temperature: 0 Kelvin (-273 Celsius) 00:21:58.697 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:21:58.697 Available Spare: 0% 00:21:58.697 Available Spare Threshold: 0% 00:21:58.697 Life Percentage Used:[2024-07-15 22:38:22.599523] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:58.697 [2024-07-15 22:38:22.599527] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x6c8ec0) 00:21:58.697 [2024-07-15 22:38:22.599533] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.697 [2024-07-15 22:38:22.599545] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x74c8c0, cid 7, qid 0 00:21:58.697 [2024-07-15 22:38:22.599731] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:58.697 [2024-07-15 22:38:22.599737] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:58.697 [2024-07-15 22:38:22.599740] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:58.697 [2024-07-15 22:38:22.599744] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x74c8c0) on tqpair=0x6c8ec0 00:21:58.697 [2024-07-15 22:38:22.599774] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:21:58.697 [2024-07-15 22:38:22.599784] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x74be40) on tqpair=0x6c8ec0 00:21:58.697 [2024-07-15 22:38:22.599790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:58.697 [2024-07-15 22:38:22.599794] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x74bfc0) on tqpair=0x6c8ec0 00:21:58.697 [2024-07-15 22:38:22.599798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:58.697 [2024-07-15 22:38:22.599802] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x74c140) on tqpair=0x6c8ec0 00:21:58.697 [2024-07-15 22:38:22.599806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:21:58.697 [2024-07-15 22:38:22.599810] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x74c2c0) on tqpair=0x6c8ec0 00:21:58.697 [2024-07-15 22:38:22.599814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:58.697 [2024-07-15 22:38:22.599821] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:58.697 [2024-07-15 22:38:22.599824] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:58.697 [2024-07-15 22:38:22.599827] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x6c8ec0) 00:21:58.697 [2024-07-15 22:38:22.599835] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.697 [2024-07-15 22:38:22.599847] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x74c2c0, cid 3, qid 0 00:21:58.697 [2024-07-15 22:38:22.599930] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:58.697 [2024-07-15 22:38:22.599936] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:58.697 [2024-07-15 22:38:22.599939] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:58.697 [2024-07-15 22:38:22.599942] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x74c2c0) on tqpair=0x6c8ec0 00:21:58.697 [2024-07-15 22:38:22.599948] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:58.697 [2024-07-15 22:38:22.599951] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:58.697 [2024-07-15 22:38:22.599954] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x6c8ec0) 00:21:58.697 [2024-07-15 22:38:22.599960] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.697 [2024-07-15 22:38:22.599972] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x74c2c0, cid 3, qid 0 00:21:58.697 [2024-07-15 22:38:22.600058] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:58.697 [2024-07-15 22:38:22.600064] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:58.697 [2024-07-15 22:38:22.600067] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:58.697 [2024-07-15 22:38:22.600070] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x74c2c0) on tqpair=0x6c8ec0 00:21:58.697 [2024-07-15 22:38:22.600074] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:21:58.697 [2024-07-15 22:38:22.600078] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:21:58.697 [2024-07-15 22:38:22.600086] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:58.697 [2024-07-15 22:38:22.600090] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:58.697 [2024-07-15 22:38:22.600093] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x6c8ec0) 00:21:58.697 [2024-07-15 22:38:22.600098] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.697 [2024-07-15 22:38:22.600108] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x74c2c0, cid 3, qid 0 00:21:58.697 [2024-07-15 22:38:22.600237] 
nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:58.697 [2024-07-15 22:38:22.600243] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:58.697 [2024-07-15 22:38:22.600247] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:58.697 [2024-07-15 22:38:22.600250] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x74c2c0) on tqpair=0x6c8ec0 00:21:58.697 [2024-07-15 22:38:22.600258] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:58.697 [2024-07-15 22:38:22.600262] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:58.697 [2024-07-15 22:38:22.600265] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x6c8ec0) 00:21:58.697 [2024-07-15 22:38:22.600270] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.697 [2024-07-15 22:38:22.600280] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x74c2c0, cid 3, qid 0 00:21:58.697 [2024-07-15 22:38:22.600382] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:58.697 [2024-07-15 22:38:22.600388] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:58.697 [2024-07-15 22:38:22.600391] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:58.697 [2024-07-15 22:38:22.600394] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x74c2c0) on tqpair=0x6c8ec0 00:21:58.697 [2024-07-15 22:38:22.600402] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:58.697 [2024-07-15 22:38:22.600407] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:58.697 [2024-07-15 22:38:22.600410] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x6c8ec0) 00:21:58.697 [2024-07-15 22:38:22.600416] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.697 [2024-07-15 22:38:22.600425] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x74c2c0, cid 3, qid 0 00:21:58.697 [2024-07-15 22:38:22.600534] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:58.697 [2024-07-15 22:38:22.600539] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:58.697 [2024-07-15 22:38:22.600542] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:58.697 [2024-07-15 22:38:22.600546] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x74c2c0) on tqpair=0x6c8ec0 00:21:58.697 [2024-07-15 22:38:22.600554] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:58.697 [2024-07-15 22:38:22.600557] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:58.697 [2024-07-15 22:38:22.600560] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x6c8ec0) 00:21:58.697 [2024-07-15 22:38:22.600566] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.697 [2024-07-15 22:38:22.600575] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x74c2c0, cid 3, qid 0 00:21:58.697 [2024-07-15 22:38:22.600655] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:58.697 [2024-07-15 22:38:22.600660] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:58.697 [2024-07-15 22:38:22.600663] 
nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:58.697 [2024-07-15 22:38:22.600667] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x74c2c0) on tqpair=0x6c8ec0 00:21:58.697 [2024-07-15 22:38:22.600675] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:58.697 [2024-07-15 22:38:22.600679] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:58.697 [2024-07-15 22:38:22.600682] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x6c8ec0) 00:21:58.697 [2024-07-15 22:38:22.600687] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.697 [2024-07-15 22:38:22.600696] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x74c2c0, cid 3, qid 0 00:21:58.697 [2024-07-15 22:38:22.600786] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:58.697 [2024-07-15 22:38:22.600792] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:58.697 [2024-07-15 22:38:22.600795] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:58.697 [2024-07-15 22:38:22.600798] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x74c2c0) on tqpair=0x6c8ec0 00:21:58.697 [2024-07-15 22:38:22.600806] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:58.697 [2024-07-15 22:38:22.600809] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:58.697 [2024-07-15 22:38:22.600812] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x6c8ec0) 00:21:58.697 [2024-07-15 22:38:22.600818] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.697 [2024-07-15 22:38:22.600827] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x74c2c0, cid 3, qid 0 00:21:58.698 [2024-07-15 22:38:22.600938] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:58.698 [2024-07-15 22:38:22.600944] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:58.698 [2024-07-15 22:38:22.600946] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:58.698 [2024-07-15 22:38:22.600950] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x74c2c0) on tqpair=0x6c8ec0 00:21:58.698 [2024-07-15 22:38:22.600958] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:58.698 [2024-07-15 22:38:22.600961] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:58.698 [2024-07-15 22:38:22.600966] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x6c8ec0) 00:21:58.698 [2024-07-15 22:38:22.600971] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.698 [2024-07-15 22:38:22.600981] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x74c2c0, cid 3, qid 0 00:21:58.698 [2024-07-15 22:38:22.601089] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:58.698 [2024-07-15 22:38:22.601094] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:58.698 [2024-07-15 22:38:22.601097] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:58.698 [2024-07-15 22:38:22.601100] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x74c2c0) on tqpair=0x6c8ec0 00:21:58.698 
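The "Prepare to destruct SSD" / "RTD3E = 0" / "shutdown timeout = 10000 ms" lines above, followed by this run of repeated FABRIC PROPERTY GET capsules, are the detach path writing CC.SHN and then polling CSTS until the shutdown completes. The earlier sketch used the blocking spdk_nvme_detach(); a hedged sketch of the non-blocking variant that produces the same polling pattern (function name is illustrative):

/* detach_poll.c fragment - sketch of a non-blocking detach. */
#include <errno.h>
#include "spdk/nvme.h"

int detach_and_poll(struct spdk_nvme_ctrlr *ctrlr)
{
    struct spdk_nvme_detach_ctx *ctx = NULL;
    int rc = spdk_nvme_detach_async(ctrlr, &ctx);

    if (rc != 0) {
        return rc;
    }
    /* Each poll drives the shutdown state machine (CC.SHN write, then
     * CSTS reads - the PROPERTY SET/GET capsules in the log) until done. */
    while (spdk_nvme_detach_poll_async(ctx) == -EAGAIN) {
        /* do other work between polls */
    }
    return 0;
}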
[2024-07-15 22:38:22.601108] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:58.698 [2024-07-15 22:38:22.601112] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:58.698 [2024-07-15 22:38:22.601115] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x6c8ec0) 00:21:58.698 [2024-07-15 22:38:22.601120] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.698 [2024-07-15 22:38:22.601130] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x74c2c0, cid 3, qid 0 00:21:58.698 [2024-07-15 22:38:22.601206] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:58.698 [2024-07-15 22:38:22.601212] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:58.698 [2024-07-15 22:38:22.601214] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:58.698 [2024-07-15 22:38:22.601218] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x74c2c0) on tqpair=0x6c8ec0 00:21:58.698 [2024-07-15 22:38:22.601231] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:58.698 [2024-07-15 22:38:22.601235] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:58.698 [2024-07-15 22:38:22.601238] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x6c8ec0) 00:21:58.698 [2024-07-15 22:38:22.601244] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.698 [2024-07-15 22:38:22.601253] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x74c2c0, cid 3, qid 0 00:21:58.698 [2024-07-15 22:38:22.601341] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:58.698 [2024-07-15 22:38:22.601347] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:58.698 [2024-07-15 22:38:22.601350] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:58.698 [2024-07-15 22:38:22.601353] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x74c2c0) on tqpair=0x6c8ec0 00:21:58.698 [2024-07-15 22:38:22.601361] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:58.698 [2024-07-15 22:38:22.601365] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:58.698 [2024-07-15 22:38:22.601368] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x6c8ec0) 00:21:58.698 [2024-07-15 22:38:22.601373] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.698 [2024-07-15 22:38:22.601383] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x74c2c0, cid 3, qid 0 00:21:58.698 [2024-07-15 22:38:22.601491] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:58.698 [2024-07-15 22:38:22.601497] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:58.698 [2024-07-15 22:38:22.601500] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:58.698 [2024-07-15 22:38:22.601503] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x74c2c0) on tqpair=0x6c8ec0 00:21:58.698 [2024-07-15 22:38:22.601511] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:58.698 [2024-07-15 22:38:22.601515] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:58.698 [2024-07-15 
22:38:22.601518] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x6c8ec0) 00:21:58.698 [2024-07-15 22:38:22.601525] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.698 [2024-07-15 22:38:22.601534] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x74c2c0, cid 3, qid 0 00:21:58.698 [2024-07-15 22:38:22.601642] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:58.698 [2024-07-15 22:38:22.601648] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:58.698 [2024-07-15 22:38:22.601651] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:58.698 [2024-07-15 22:38:22.601654] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x74c2c0) on tqpair=0x6c8ec0 00:21:58.698 [2024-07-15 22:38:22.601662] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:58.698 [2024-07-15 22:38:22.601666] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:58.698 [2024-07-15 22:38:22.601669] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x6c8ec0) 00:21:58.698 [2024-07-15 22:38:22.601674] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.698 [2024-07-15 22:38:22.601684] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x74c2c0, cid 3, qid 0 00:21:58.698 [2024-07-15 22:38:22.601759] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:58.698 [2024-07-15 22:38:22.601765] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:58.698 [2024-07-15 22:38:22.601768] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:58.698 [2024-07-15 22:38:22.601771] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x74c2c0) on tqpair=0x6c8ec0 00:21:58.698 [2024-07-15 22:38:22.601780] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:58.698 [2024-07-15 22:38:22.601784] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:58.698 [2024-07-15 22:38:22.601787] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x6c8ec0) 00:21:58.698 [2024-07-15 22:38:22.601792] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.698 [2024-07-15 22:38:22.601801] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x74c2c0, cid 3, qid 0 00:21:58.698 [2024-07-15 22:38:22.601897] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:58.698 [2024-07-15 22:38:22.601903] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:58.698 [2024-07-15 22:38:22.601906] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:58.698 [2024-07-15 22:38:22.601909] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x74c2c0) on tqpair=0x6c8ec0 00:21:58.698 [2024-07-15 22:38:22.601917] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:58.698 [2024-07-15 22:38:22.601920] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:58.698 [2024-07-15 22:38:22.601923] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x6c8ec0) 00:21:58.698 [2024-07-15 22:38:22.601929] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC 
PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.698 [2024-07-15 22:38:22.601938] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x74c2c0, cid 3, qid 0 00:21:58.698 [2024-07-15 22:38:22.602046] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:58.698 [2024-07-15 22:38:22.602051] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:58.698 [2024-07-15 22:38:22.602054] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:58.698 [2024-07-15 22:38:22.602057] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x74c2c0) on tqpair=0x6c8ec0 00:21:58.698 [2024-07-15 22:38:22.602065] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:58.698 [2024-07-15 22:38:22.602069] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:58.698 [2024-07-15 22:38:22.602072] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x6c8ec0) 00:21:58.698 [2024-07-15 22:38:22.602077] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.698 [2024-07-15 22:38:22.602088] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x74c2c0, cid 3, qid 0 00:21:58.698 [2024-07-15 22:38:22.602198] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:58.698 [2024-07-15 22:38:22.602204] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:58.698 [2024-07-15 22:38:22.602207] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:58.698 [2024-07-15 22:38:22.602210] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x74c2c0) on tqpair=0x6c8ec0 00:21:58.698 [2024-07-15 22:38:22.602218] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:58.698 [2024-07-15 22:38:22.602222] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:58.698 [2024-07-15 22:38:22.602232] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x6c8ec0) 00:21:58.698 [2024-07-15 22:38:22.602237] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.698 [2024-07-15 22:38:22.602247] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x74c2c0, cid 3, qid 0 00:21:58.698 [2024-07-15 22:38:22.602326] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:58.698 [2024-07-15 22:38:22.602332] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:58.698 [2024-07-15 22:38:22.602335] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:58.698 [2024-07-15 22:38:22.602338] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x74c2c0) on tqpair=0x6c8ec0 00:21:58.698 [2024-07-15 22:38:22.602347] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:58.698 [2024-07-15 22:38:22.602350] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:58.698 [2024-07-15 22:38:22.602353] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x6c8ec0) 00:21:58.698 [2024-07-15 22:38:22.602359] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.698 [2024-07-15 22:38:22.602368] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x74c2c0, cid 3, qid 0 00:21:58.698 [2024-07-15 
22:38:22.602449] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:58.698 [2024-07-15 22:38:22.602455] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:58.698 [2024-07-15 22:38:22.602458] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:58.698 [2024-07-15 22:38:22.602461] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x74c2c0) on tqpair=0x6c8ec0 00:21:58.698 [2024-07-15 22:38:22.602469] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:58.698 [2024-07-15 22:38:22.602473] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:58.698 [2024-07-15 22:38:22.602476] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x6c8ec0) 00:21:58.698 [2024-07-15 22:38:22.602481] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.698 [2024-07-15 22:38:22.602491] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x74c2c0, cid 3, qid 0 00:21:58.698 [2024-07-15 22:38:22.602601] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:58.698 [2024-07-15 22:38:22.602607] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:58.698 [2024-07-15 22:38:22.602610] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:58.698 [2024-07-15 22:38:22.602613] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x74c2c0) on tqpair=0x6c8ec0 00:21:58.698 [2024-07-15 22:38:22.602621] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:58.698 [2024-07-15 22:38:22.602624] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:58.698 [2024-07-15 22:38:22.602627] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x6c8ec0) 00:21:58.698 [2024-07-15 22:38:22.602633] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.699 [2024-07-15 22:38:22.602643] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x74c2c0, cid 3, qid 0 00:21:58.699 [2024-07-15 22:38:22.602752] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:58.699 [2024-07-15 22:38:22.602758] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:58.699 [2024-07-15 22:38:22.602761] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:58.699 [2024-07-15 22:38:22.602764] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x74c2c0) on tqpair=0x6c8ec0 00:21:58.699 [2024-07-15 22:38:22.602772] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:58.699 [2024-07-15 22:38:22.602776] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:58.699 [2024-07-15 22:38:22.602779] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x6c8ec0) 00:21:58.699 [2024-07-15 22:38:22.602784] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.699 [2024-07-15 22:38:22.602794] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x74c2c0, cid 3, qid 0 00:21:58.699 [2024-07-15 22:38:22.602870] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:58.699 [2024-07-15 22:38:22.602876] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:58.699 [2024-07-15 
22:38:22.602879] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:58.699 [2024-07-15 22:38:22.602882] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x74c2c0) on tqpair=0x6c8ec0 00:21:58.699 [2024-07-15 22:38:22.602891] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:58.699 [2024-07-15 22:38:22.602895] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:58.699 [2024-07-15 22:38:22.602898] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x6c8ec0) 00:21:58.699 [2024-07-15 22:38:22.602904] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.699 [2024-07-15 22:38:22.602912] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x74c2c0, cid 3, qid 0 00:21:58.699 [2024-07-15 22:38:22.603004] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:58.699 [2024-07-15 22:38:22.603010] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:58.699 [2024-07-15 22:38:22.603013] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:58.699 [2024-07-15 22:38:22.603016] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x74c2c0) on tqpair=0x6c8ec0 00:21:58.699 [2024-07-15 22:38:22.603024] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:58.699 [2024-07-15 22:38:22.603027] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:58.699 [2024-07-15 22:38:22.603030] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x6c8ec0) 00:21:58.699 [2024-07-15 22:38:22.603036] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.699 [2024-07-15 22:38:22.603045] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x74c2c0, cid 3, qid 0 00:21:58.699 [2024-07-15 22:38:22.603155] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:58.699 [2024-07-15 22:38:22.603161] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:58.699 [2024-07-15 22:38:22.603164] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:58.699 [2024-07-15 22:38:22.603167] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x74c2c0) on tqpair=0x6c8ec0 00:21:58.699 [2024-07-15 22:38:22.603175] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:58.699 [2024-07-15 22:38:22.603178] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:58.699 [2024-07-15 22:38:22.603181] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x6c8ec0) 00:21:58.699 [2024-07-15 22:38:22.603187] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:58.699 [2024-07-15 22:38:22.603196] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x74c2c0, cid 3, qid 0 00:21:58.699 [2024-07-15 22:38:22.607233] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:58.699 [2024-07-15 22:38:22.607243] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:58.699 [2024-07-15 22:38:22.607246] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:58.699 [2024-07-15 22:38:22.607249] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x74c2c0) on tqpair=0x6c8ec0 
00:21:58.699 [2024-07-15 22:38:22.607258] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:21:58.699 [2024-07-15 22:38:22.607262] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:21:58.699 [2024-07-15 22:38:22.607265] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x6c8ec0)
00:21:58.699 [2024-07-15 22:38:22.607271] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:58.699 [2024-07-15 22:38:22.607283] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x74c2c0, cid 3, qid 0
00:21:58.699 [2024-07-15 22:38:22.607446] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:21:58.699 [2024-07-15 22:38:22.607452] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:21:58.699 [2024-07-15 22:38:22.607455] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:21:58.699 [2024-07-15 22:38:22.607458] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x74c2c0) on tqpair=0x6c8ec0
00:21:58.699 [2024-07-15 22:38:22.607466] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 7 milliseconds
00:21:58.699 0%
00:21:58.699 Data Units Read: 0
00:21:58.699 Data Units Written: 0
00:21:58.699 Host Read Commands: 0
00:21:58.699 Host Write Commands: 0
00:21:58.699 Controller Busy Time: 0 minutes
00:21:58.699 Power Cycles: 0
00:21:58.699 Power On Hours: 0 hours
00:21:58.699 Unsafe Shutdowns: 0
00:21:58.699 Unrecoverable Media Errors: 0
00:21:58.699 Lifetime Error Log Entries: 0
00:21:58.699 Warning Temperature Time: 0 minutes
00:21:58.699 Critical Temperature Time: 0 minutes
00:21:58.699 
00:21:58.699 Number of Queues
00:21:58.699 ================
00:21:58.699 Number of I/O Submission Queues: 127
00:21:58.699 Number of I/O Completion Queues: 127
00:21:58.699 
00:21:58.699 Active Namespaces
00:21:58.699 =================
00:21:58.699 Namespace ID:1
00:21:58.699 Error Recovery Timeout: Unlimited
00:21:58.699 Command Set Identifier: NVM (00h)
00:21:58.699 Deallocate: Supported
00:21:58.699 Deallocated/Unwritten Error: Not Supported
00:21:58.699 Deallocated Read Value: Unknown
00:21:58.699 Deallocate in Write Zeroes: Not Supported
00:21:58.699 Deallocated Guard Field: 0xFFFF
00:21:58.699 Flush: Supported
00:21:58.699 Reservation: Supported
00:21:58.699 Namespace Sharing Capabilities: Multiple Controllers
00:21:58.699 Size (in LBAs): 131072 (0GiB)
00:21:58.699 Capacity (in LBAs): 131072 (0GiB)
00:21:58.699 Utilization (in LBAs): 131072 (0GiB)
00:21:58.699 NGUID: ABCDEF0123456789ABCDEF0123456789
00:21:58.699 EUI64: ABCDEF0123456789
00:21:58.699 UUID: c9a3cb6e-793f-4dbb-b154-dbaa07a394a3
00:21:58.699 Thin Provisioning: Not Supported
00:21:58.699 Per-NS Atomic Units: Yes
00:21:58.699 Atomic Boundary Size (Normal): 0
00:21:58.699 Atomic Boundary Size (PFail): 0
00:21:58.699 Atomic Boundary Offset: 0
00:21:58.699 Maximum Single Source Range Length: 65535
00:21:58.699 Maximum Copy Length: 65535
00:21:58.699 Maximum Source Range Count: 1
00:21:58.699 NGUID/EUI64 Never Reused: No
00:21:58.699 Namespace Write Protected: No
00:21:58.699 Number of LBA Formats: 1
00:21:58.699 Current LBA Format: LBA Format #00
00:21:58.699 LBA Format #00: Data Size: 512 Metadata Size: 0
00:21:58.699 
00:21:58.699 22:38:22 nvmf_tcp.nvmf_identify -- host/identify.sh@51 -- # sync
00:21:58.699 22:38:22 nvmf_tcp.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd
nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:58.699 22:38:22 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:58.699 22:38:22 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:58.699 22:38:22 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:58.699 22:38:22 nvmf_tcp.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:21:58.699 22:38:22 nvmf_tcp.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:21:58.699 22:38:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:58.699 22:38:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@117 -- # sync 00:21:58.699 22:38:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:58.699 22:38:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@120 -- # set +e 00:21:58.699 22:38:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:58.699 22:38:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:58.699 rmmod nvme_tcp 00:21:58.699 rmmod nvme_fabrics 00:21:58.958 rmmod nvme_keyring 00:21:58.958 22:38:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:58.958 22:38:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@124 -- # set -e 00:21:58.958 22:38:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@125 -- # return 0 00:21:58.958 22:38:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@489 -- # '[' -n 82267 ']' 00:21:58.958 22:38:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@490 -- # killprocess 82267 00:21:58.958 22:38:22 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@948 -- # '[' -z 82267 ']' 00:21:58.958 22:38:22 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@952 -- # kill -0 82267 00:21:58.958 22:38:22 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@953 -- # uname 00:21:58.958 22:38:22 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:58.958 22:38:22 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 82267 00:21:58.958 22:38:22 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:58.958 22:38:22 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:58.958 22:38:22 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@966 -- # echo 'killing process with pid 82267' 00:21:58.958 killing process with pid 82267 00:21:58.958 22:38:22 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@967 -- # kill 82267 00:21:58.958 22:38:22 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@972 -- # wait 82267 00:21:59.216 22:38:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:59.216 22:38:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:59.216 22:38:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:59.216 22:38:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:59.216 22:38:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:59.216 22:38:22 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:59.216 22:38:22 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:59.216 22:38:22 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:01.119 22:38:25 nvmf_tcp.nvmf_identify -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:01.119 
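The identify test ends with the standard nvmftestfini sequence just traced: unload the kernel NVMe initiator modules, kill the target process, remove the test namespace, and flush the test addresses. A minimal standalone sketch of those same steps, in order (the helper name and pid argument are hypothetical; interface and namespace names match this run):

# Hedged sketch only; mirrors the teardown commands logged above.
cleanup_nvmf_target() {
    local tgt_pid=$1
    modprobe -v -r nvme-tcp        # modprobe -v prints the rmmod steps seen above
    modprobe -v -r nvme-fabrics
    kill "$tgt_pid"                # stop the SPDK target (a child of the test shell)
    wait "$tgt_pid"
    ip netns del cvl_0_0_ns_spdk   # drop the target-side network namespace
    ip -4 addr flush cvl_0_1       # clear the initiator-side address
}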
00:22:01.119 real 0m9.314s 00:22:01.119 user 0m7.829s 00:22:01.119 sys 0m4.462s 00:22:01.119 22:38:25 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:01.119 22:38:25 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:01.119 ************************************ 00:22:01.119 END TEST nvmf_identify 00:22:01.119 ************************************ 00:22:01.119 22:38:25 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:22:01.119 22:38:25 nvmf_tcp -- nvmf/nvmf.sh@98 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:22:01.119 22:38:25 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:22:01.119 22:38:25 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:01.119 22:38:25 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:01.119 ************************************ 00:22:01.119 START TEST nvmf_perf 00:22:01.119 ************************************ 00:22:01.119 22:38:25 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:22:01.378 * Looking for test storage... 00:22:01.378 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:01.378 22:38:25 nvmf_tcp.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:01.378 22:38:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:22:01.378 22:38:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:01.378 22:38:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:01.378 22:38:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:01.378 22:38:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:01.378 22:38:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:01.378 22:38:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:01.378 22:38:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:01.378 22:38:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:01.378 22:38:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:01.378 22:38:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:01.378 22:38:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:01.378 22:38:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:22:01.378 22:38:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:01.378 22:38:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:01.378 22:38:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:01.378 22:38:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:01.378 22:38:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:01.378 22:38:25 nvmf_tcp.nvmf_perf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:01.378 22:38:25 nvmf_tcp.nvmf_perf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:01.378 22:38:25 nvmf_tcp.nvmf_perf -- 
scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:01.378 22:38:25 nvmf_tcp.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:01.378 22:38:25 nvmf_tcp.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:01.378 22:38:25 nvmf_tcp.nvmf_perf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:01.378 22:38:25 nvmf_tcp.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:22:01.378 22:38:25 nvmf_tcp.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:01.378 22:38:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@47 -- # : 0 00:22:01.378 22:38:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:01.378 22:38:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:01.378 22:38:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:01.378 22:38:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:01.378 22:38:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:01.378 22:38:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:01.378 22:38:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:01.378 22:38:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:01.378 22:38:25 nvmf_tcp.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:22:01.378 22:38:25 nvmf_tcp.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:22:01.378 22:38:25 nvmf_tcp.nvmf_perf -- host/perf.sh@15 -- # 
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:01.378 22:38:25 nvmf_tcp.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:22:01.378 22:38:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:01.378 22:38:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:01.378 22:38:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:01.378 22:38:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:01.378 22:38:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:01.378 22:38:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:01.378 22:38:25 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:01.378 22:38:25 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:01.378 22:38:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:01.378 22:38:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:01.378 22:38:25 nvmf_tcp.nvmf_perf -- nvmf/common.sh@285 -- # xtrace_disable 00:22:01.378 22:38:25 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:22:06.651 22:38:30 nvmf_tcp.nvmf_perf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:06.651 22:38:30 nvmf_tcp.nvmf_perf -- nvmf/common.sh@291 -- # pci_devs=() 00:22:06.651 22:38:30 nvmf_tcp.nvmf_perf -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:06.651 22:38:30 nvmf_tcp.nvmf_perf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:06.651 22:38:30 nvmf_tcp.nvmf_perf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:06.652 22:38:30 nvmf_tcp.nvmf_perf -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:06.652 22:38:30 nvmf_tcp.nvmf_perf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:06.652 22:38:30 nvmf_tcp.nvmf_perf -- nvmf/common.sh@295 -- # net_devs=() 00:22:06.652 22:38:30 nvmf_tcp.nvmf_perf -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:06.652 22:38:30 nvmf_tcp.nvmf_perf -- nvmf/common.sh@296 -- # e810=() 00:22:06.652 22:38:30 nvmf_tcp.nvmf_perf -- nvmf/common.sh@296 -- # local -ga e810 00:22:06.652 22:38:30 nvmf_tcp.nvmf_perf -- nvmf/common.sh@297 -- # x722=() 00:22:06.652 22:38:30 nvmf_tcp.nvmf_perf -- nvmf/common.sh@297 -- # local -ga x722 00:22:06.652 22:38:30 nvmf_tcp.nvmf_perf -- nvmf/common.sh@298 -- # mlx=() 00:22:06.652 22:38:30 nvmf_tcp.nvmf_perf -- nvmf/common.sh@298 -- # local -ga mlx 00:22:06.652 22:38:30 nvmf_tcp.nvmf_perf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:06.652 22:38:30 nvmf_tcp.nvmf_perf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:06.652 22:38:30 nvmf_tcp.nvmf_perf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:06.652 22:38:30 nvmf_tcp.nvmf_perf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:06.652 22:38:30 nvmf_tcp.nvmf_perf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:06.652 22:38:30 nvmf_tcp.nvmf_perf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:06.652 22:38:30 nvmf_tcp.nvmf_perf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:06.652 22:38:30 nvmf_tcp.nvmf_perf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:06.652 22:38:30 nvmf_tcp.nvmf_perf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:06.652 22:38:30 
nvmf_tcp.nvmf_perf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:06.652 22:38:30 nvmf_tcp.nvmf_perf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:06.652 22:38:30 nvmf_tcp.nvmf_perf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:06.652 22:38:30 nvmf_tcp.nvmf_perf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:06.652 22:38:30 nvmf_tcp.nvmf_perf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:06.652 22:38:30 nvmf_tcp.nvmf_perf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:06.652 22:38:30 nvmf_tcp.nvmf_perf -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:06.652 22:38:30 nvmf_tcp.nvmf_perf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:06.652 22:38:30 nvmf_tcp.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:06.652 22:38:30 nvmf_tcp.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:22:06.652 Found 0000:86:00.0 (0x8086 - 0x159b) 00:22:06.652 22:38:30 nvmf_tcp.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:06.652 22:38:30 nvmf_tcp.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:06.652 22:38:30 nvmf_tcp.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:06.652 22:38:30 nvmf_tcp.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:06.652 22:38:30 nvmf_tcp.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:06.652 22:38:30 nvmf_tcp.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:06.652 22:38:30 nvmf_tcp.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:22:06.652 Found 0000:86:00.1 (0x8086 - 0x159b) 00:22:06.652 22:38:30 nvmf_tcp.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:06.652 22:38:30 nvmf_tcp.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:06.652 22:38:30 nvmf_tcp.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:06.652 22:38:30 nvmf_tcp.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:06.652 22:38:30 nvmf_tcp.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:06.652 22:38:30 nvmf_tcp.nvmf_perf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:06.652 22:38:30 nvmf_tcp.nvmf_perf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:06.652 22:38:30 nvmf_tcp.nvmf_perf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:06.652 22:38:30 nvmf_tcp.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:06.652 22:38:30 nvmf_tcp.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:06.652 22:38:30 nvmf_tcp.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:06.652 22:38:30 nvmf_tcp.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:06.652 22:38:30 nvmf_tcp.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:06.652 22:38:30 nvmf_tcp.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:06.652 22:38:30 nvmf_tcp.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:06.652 22:38:30 nvmf_tcp.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:22:06.652 Found net devices under 0000:86:00.0: cvl_0_0 00:22:06.652 22:38:30 nvmf_tcp.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:06.652 22:38:30 nvmf_tcp.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:06.652 22:38:30 nvmf_tcp.nvmf_perf -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:06.652 22:38:30 nvmf_tcp.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:06.652 22:38:30 nvmf_tcp.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:06.652 22:38:30 nvmf_tcp.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:06.652 22:38:30 nvmf_tcp.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:06.652 22:38:30 nvmf_tcp.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:06.652 22:38:30 nvmf_tcp.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:22:06.652 Found net devices under 0000:86:00.1: cvl_0_1 00:22:06.652 22:38:30 nvmf_tcp.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:06.652 22:38:30 nvmf_tcp.nvmf_perf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:06.652 22:38:30 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # is_hw=yes 00:22:06.652 22:38:30 nvmf_tcp.nvmf_perf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:06.652 22:38:30 nvmf_tcp.nvmf_perf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:06.652 22:38:30 nvmf_tcp.nvmf_perf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:06.652 22:38:30 nvmf_tcp.nvmf_perf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:06.652 22:38:30 nvmf_tcp.nvmf_perf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:06.652 22:38:30 nvmf_tcp.nvmf_perf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:06.652 22:38:30 nvmf_tcp.nvmf_perf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:06.652 22:38:30 nvmf_tcp.nvmf_perf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:06.652 22:38:30 nvmf_tcp.nvmf_perf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:06.652 22:38:30 nvmf_tcp.nvmf_perf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:06.652 22:38:30 nvmf_tcp.nvmf_perf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:06.652 22:38:30 nvmf_tcp.nvmf_perf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:06.652 22:38:30 nvmf_tcp.nvmf_perf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:06.652 22:38:30 nvmf_tcp.nvmf_perf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:06.652 22:38:30 nvmf_tcp.nvmf_perf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:06.652 22:38:30 nvmf_tcp.nvmf_perf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:06.652 22:38:30 nvmf_tcp.nvmf_perf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:06.652 22:38:30 nvmf_tcp.nvmf_perf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:06.652 22:38:30 nvmf_tcp.nvmf_perf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:06.652 22:38:30 nvmf_tcp.nvmf_perf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:06.652 22:38:30 nvmf_tcp.nvmf_perf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:06.652 22:38:30 nvmf_tcp.nvmf_perf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:06.652 22:38:30 nvmf_tcp.nvmf_perf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:06.652 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:22:06.652 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.186 ms 00:22:06.652 00:22:06.652 --- 10.0.0.2 ping statistics --- 00:22:06.652 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:06.652 rtt min/avg/max/mdev = 0.186/0.186/0.186/0.000 ms 00:22:06.652 22:38:30 nvmf_tcp.nvmf_perf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:06.652 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:06.652 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.256 ms 00:22:06.652 00:22:06.652 --- 10.0.0.1 ping statistics --- 00:22:06.652 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:06.652 rtt min/avg/max/mdev = 0.256/0.256/0.256/0.000 ms 00:22:06.652 22:38:30 nvmf_tcp.nvmf_perf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:06.652 22:38:30 nvmf_tcp.nvmf_perf -- nvmf/common.sh@422 -- # return 0 00:22:06.652 22:38:30 nvmf_tcp.nvmf_perf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:06.652 22:38:30 nvmf_tcp.nvmf_perf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:06.652 22:38:30 nvmf_tcp.nvmf_perf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:06.652 22:38:30 nvmf_tcp.nvmf_perf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:06.652 22:38:30 nvmf_tcp.nvmf_perf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:06.652 22:38:30 nvmf_tcp.nvmf_perf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:06.652 22:38:30 nvmf_tcp.nvmf_perf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:06.652 22:38:30 nvmf_tcp.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:22:06.652 22:38:30 nvmf_tcp.nvmf_perf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:06.652 22:38:30 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:06.652 22:38:30 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:22:06.652 22:38:30 nvmf_tcp.nvmf_perf -- nvmf/common.sh@481 -- # nvmfpid=85813 00:22:06.652 22:38:30 nvmf_tcp.nvmf_perf -- nvmf/common.sh@482 -- # waitforlisten 85813 00:22:06.653 22:38:30 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@829 -- # '[' -z 85813 ']' 00:22:06.653 22:38:30 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:06.653 22:38:30 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:06.653 22:38:30 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:06.653 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:06.653 22:38:30 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:06.653 22:38:30 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:22:06.653 22:38:30 nvmf_tcp.nvmf_perf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:22:06.653 [2024-07-15 22:38:30.333850] Starting SPDK v24.09-pre git sha1 f8598a71f / DPDK 24.03.0 initialization... 
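Stripped of the xtrace noise, the network plumbing that those pings just verified is short: one port of the e810 pair moves into a private namespace to play the target, the other stays in the root namespace as the initiator, and the NVMe/TCP port is opened between them; the target then runs inside that namespace via ip netns exec, as launched above. Condensed from the commands traced earlier in this run (names and addresses are this run's):

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # target-side port
ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator address (root ns)
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port
ping -c 1 10.0.0.2                                             # initiator -> target reachability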
00:22:06.653 [2024-07-15 22:38:30.333895] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:06.653 EAL: No free 2048 kB hugepages reported on node 1 00:22:06.653 [2024-07-15 22:38:30.389916] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:06.653 [2024-07-15 22:38:30.471073] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:06.653 [2024-07-15 22:38:30.471109] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:06.653 [2024-07-15 22:38:30.471116] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:06.653 [2024-07-15 22:38:30.471122] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:06.653 [2024-07-15 22:38:30.471127] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:06.653 [2024-07-15 22:38:30.471167] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:06.653 [2024-07-15 22:38:30.471188] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:06.653 [2024-07-15 22:38:30.471277] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:22:06.653 [2024-07-15 22:38:30.471279] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:07.220 22:38:31 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:07.220 22:38:31 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@862 -- # return 0 00:22:07.220 22:38:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:07.220 22:38:31 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:07.220 22:38:31 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:22:07.220 22:38:31 nvmf_tcp.nvmf_perf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:07.220 22:38:31 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:22:07.220 22:38:31 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:22:10.511 22:38:34 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:22:10.511 22:38:34 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:22:10.511 22:38:34 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:5e:00.0 00:22:10.511 22:38:34 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:22:10.770 22:38:34 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:22:10.770 22:38:34 nvmf_tcp.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:5e:00.0 ']' 00:22:10.770 22:38:34 nvmf_tcp.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:22:10.770 22:38:34 nvmf_tcp.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:22:10.770 22:38:34 nvmf_tcp.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:10.770 [2024-07-15 22:38:34.716010] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP 
Transport Init ***
00:22:11.029 22:38:34 nvmf_tcp.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:22:11.029 22:38:34 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs
00:22:11.029 22:38:34 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:22:11.288 22:38:35 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs
00:22:11.288 22:38:35 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
00:22:11.547 22:38:35 nvmf_tcp.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:22:11.547 [2024-07-15 22:38:35.452918] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:22:11.547 22:38:35 nvmf_tcp.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:22:11.806 22:38:35 nvmf_tcp.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:5e:00.0 ']'
00:22:11.806 22:38:35 nvmf_tcp.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:5e:00.0'
00:22:11.806 22:38:35 nvmf_tcp.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']'
00:22:11.806 22:38:35 nvmf_tcp.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:5e:00.0'
00:22:13.183 Initializing NVMe Controllers
00:22:13.183 Attached to NVMe Controller at 0000:5e:00.0 [8086:0a54]
00:22:13.183 Associating PCIE (0000:5e:00.0) NSID 1 with lcore 0
00:22:13.183 Initialization complete. Launching workers.
00:22:13.183 ========================================================
00:22:13.183 Latency(us)
00:22:13.183 Device Information : IOPS MiB/s Average min max
00:22:13.183 PCIE (0000:5e:00.0) NSID 1 from core 0: 97838.51 382.18 326.55 34.62 5254.86
00:22:13.183 ========================================================
00:22:13.183 Total : 97838.51 382.18 326.55 34.62 5254.86
00:22:13.184 
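The local PCIe run above establishes the raw device baseline (roughly 97.8k IOPS at queue depth 32) before any fabric overhead; the runs that follow repeat the workload through the TCP target. That target reduces to a short RPC sequence, condensed here from the trace above (rpc.py is SPDK's scripts/rpc.py talking to the already-running nvmf_tgt; Malloc0 is the 64 MiB malloc bdev created earlier, Nvme0n1 the local 0000:5e:00.0 drive):

rpc.py nvmf_create_transport -t tcp -o
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0     # RAM-backed bdev
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1     # local NVMe bdev
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420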
00:22:13.184 22:38:36 nvmf_tcp.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:22:13.184 EAL: No free 2048 kB hugepages reported on node 1
00:22:14.560 Initializing NVMe Controllers
00:22:14.560 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:22:14.560 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:22:14.560 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:22:14.560 Initialization complete. Launching workers.
00:22:14.560 ========================================================
00:22:14.560 Latency(us)
00:22:14.560 Device Information : IOPS MiB/s Average min max
00:22:14.560 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 87.00 0.34 11821.20 160.09 44968.08
00:22:14.560 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 53.00 0.21 19253.92 4978.38 47897.95
00:22:14.560 ========================================================
00:22:14.560 Total : 140.00 0.55 14635.02 160.09 47897.95
00:22:14.561 
00:22:14.561 22:38:38 nvmf_tcp.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:22:14.561 EAL: No free 2048 kB hugepages reported on node 1
00:22:15.497 Initializing NVMe Controllers
00:22:15.497 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:22:15.497 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:22:15.497 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:22:15.497 Initialization complete. Launching workers.
00:22:15.497 ========================================================
00:22:15.497 Latency(us)
00:22:15.497 Device Information : IOPS MiB/s Average min max
00:22:15.497 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 10607.10 41.43 3018.97 476.58 6282.22
00:22:15.497 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3861.67 15.08 8330.04 7101.86 15846.86
00:22:15.497 ========================================================
00:22:15.497 Total : 14468.77 56.52 4436.48 476.58 15846.86
00:22:15.756 
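One way to read the q=32 table above: the Total row's average latency is the IOPS-weighted mean of the two namespace rows, not their arithmetic mean, so the faster NSID 1 dominates the combined figure. A quick sanity check (hypothetical one-liner, not part of the suite):

awk 'BEGIN { printf "%.2f\n", (10607.10*3018.97 + 3861.67*8330.04) / 14468.77 }'   # prints 4436.48, the Total row's average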
00:22:15.756 22:38:39 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]]
00:22:15.756 22:38:39 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]]
00:22:15.756 22:38:39 nvmf_tcp.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:22:15.756 EAL: No free 2048 kB hugepages reported on node 1
00:22:18.292 Initializing NVMe Controllers
00:22:18.292 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:22:18.292 Controller IO queue size 128, less than required.
00:22:18.292 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:22:18.292 Controller IO queue size 128, less than required.
00:22:18.292 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:22:18.292 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:22:18.292 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:22:18.292 Initialization complete. Launching workers.
00:22:18.292 ========================================================
00:22:18.292 Latency(us)
00:22:18.292 Device Information : IOPS MiB/s Average min max
00:22:18.292 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1063.31 265.83 123085.45 70665.24 181698.31
00:22:18.292 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 638.08 159.52 214203.91 70325.87 335172.15
00:22:18.292 ========================================================
00:22:18.292 Total : 1701.39 425.35 157258.22 70325.87 335172.15
00:22:18.292 
00:22:18.292 22:38:41 nvmf_tcp.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4
00:22:18.292 EAL: No free 2048 kB hugepages reported on node 1
00:22:18.292 No valid NVMe controllers or AIO or URING devices found
00:22:18.292 Initializing NVMe Controllers
00:22:18.292 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:22:18.292 Controller IO queue size 128, less than required.
00:22:18.292 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:22:18.292 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test
00:22:18.292 Controller IO queue size 128, less than required.
00:22:18.292 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:22:18.292 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test
00:22:18.292 WARNING: Some requested NVMe devices were skipped
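The "No valid NVMe controllers" result above is expected behavior, not a failure: spdk_nvme_perf only tests namespaces whose sector size evenly divides the IO size, and 36964 bytes is not a whole number of 512-byte LBAs, so both namespaces are dropped and nothing remains to exercise. The arithmetic behind the two warnings (illustrative):

echo $(( 36964 % 512 ))   # 100: 36964 = 72*512 + 100, so each IO would straddle an LBA boundary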
00:22:20.855 00:22:20.855 ==================== 00:22:20.855 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:22:20.855 TCP transport: 00:22:20.855 polls: 46321 00:22:20.855 idle_polls: 16604 00:22:20.855 sock_completions: 29717 00:22:20.855 nvme_completions: 4555 00:22:20.855 submitted_requests: 6804 00:22:20.855 queued_requests: 1 00:22:20.855 00:22:20.855 ==================== 00:22:20.855 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:22:20.855 TCP transport: 00:22:20.855 polls: 49550 00:22:20.855 idle_polls: 17538 00:22:20.855 sock_completions: 32012 00:22:20.855 nvme_completions: 4553 00:22:20.855 submitted_requests: 6678 00:22:20.855 queued_requests: 1 00:22:20.855 ======================================================== 00:22:20.855 Latency(us) 00:22:20.855 Device Information : IOPS MiB/s Average min max 00:22:20.855 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1137.23 284.31 114583.22 59518.05 157256.82 00:22:20.855 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1136.73 284.18 116284.98 48804.33 177924.89 00:22:20.855 ======================================================== 00:22:20.856 Total : 2273.96 568.49 115433.91 48804.33 177924.89 00:22:20.856 00:22:20.856 22:38:44 nvmf_tcp.nvmf_perf -- host/perf.sh@66 -- # sync 00:22:20.856 22:38:44 nvmf_tcp.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:21.114 22:38:44 nvmf_tcp.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:22:21.114 22:38:44 nvmf_tcp.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:22:21.114 22:38:44 nvmf_tcp.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:22:21.114 22:38:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:21.114 22:38:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@117 -- # sync 00:22:21.114 22:38:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:21.114 22:38:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@120 -- # set +e 00:22:21.114 22:38:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:21.114 22:38:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:21.114 rmmod nvme_tcp 00:22:21.114 rmmod nvme_fabrics 00:22:21.114 rmmod nvme_keyring 00:22:21.114 22:38:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:21.114 22:38:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@124 -- # set -e 00:22:21.114 22:38:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@125 -- # return 0 00:22:21.114 22:38:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@489 -- # '[' -n 85813 ']' 00:22:21.114 22:38:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@490 -- # killprocess 85813 00:22:21.114 22:38:44 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@948 -- # '[' -z 85813 ']' 00:22:21.114 22:38:44 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@952 -- # kill -0 85813 00:22:21.114 22:38:44 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@953 -- # uname 00:22:21.114 22:38:44 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:21.114 22:38:44 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 85813 00:22:21.114 22:38:45 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:21.114 22:38:45 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:22:21.114 22:38:45 nvmf_tcp.nvmf_perf -- 
common/autotest_common.sh@966 -- # echo 'killing process with pid 85813' 00:22:21.114 killing process with pid 85813 00:22:21.114 22:38:45 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@967 -- # kill 85813 00:22:21.114 22:38:45 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@972 -- # wait 85813 00:22:23.016 22:38:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:23.016 22:38:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:23.016 22:38:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:23.016 22:38:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:23.016 22:38:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:23.016 22:38:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:23.016 22:38:46 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:23.016 22:38:46 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:24.923 22:38:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:24.923 00:22:24.923 real 0m23.490s 00:22:24.923 user 1m4.159s 00:22:24.923 sys 0m6.688s 00:22:24.923 22:38:48 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:24.923 22:38:48 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:22:24.923 ************************************ 00:22:24.923 END TEST nvmf_perf 00:22:24.923 ************************************ 00:22:24.923 22:38:48 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:22:24.923 22:38:48 nvmf_tcp -- nvmf/nvmf.sh@99 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:22:24.923 22:38:48 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:22:24.923 22:38:48 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:24.923 22:38:48 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:24.923 ************************************ 00:22:24.923 START TEST nvmf_fio_host 00:22:24.923 ************************************ 00:22:24.923 22:38:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:22:24.923 * Looking for test storage... 
00:22:24.923 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:24.923 22:38:48 nvmf_tcp.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:24.923 22:38:48 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:24.923 22:38:48 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:24.923 22:38:48 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:24.923 22:38:48 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:24.923 22:38:48 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:24.923 22:38:48 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:24.923 22:38:48 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:22:24.923 22:38:48 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:24.923 22:38:48 nvmf_tcp.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:24.923 22:38:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:22:24.924 22:38:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:24.924 22:38:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:24.924 22:38:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:22:24.924 22:38:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:24.924 22:38:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:24.924 22:38:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:24.924 22:38:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:24.924 22:38:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:24.924 22:38:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:24.924 22:38:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:24.924 22:38:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:24.924 22:38:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:22:24.924 22:38:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:24.924 22:38:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:24.924 22:38:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:24.924 22:38:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:24.924 22:38:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:24.924 22:38:48 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:24.924 22:38:48 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:24.924 22:38:48 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:24.924 22:38:48 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:24.924 22:38:48 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:24.924 22:38:48 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:24.924 22:38:48 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:22:24.924 22:38:48 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:24.924 22:38:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@47 -- # : 0 00:22:24.924 22:38:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:24.924 22:38:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:24.924 22:38:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:24.924 22:38:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:24.924 22:38:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:24.924 22:38:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:24.924 22:38:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:24.924 22:38:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:24.924 22:38:48 nvmf_tcp.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:24.924 22:38:48 nvmf_tcp.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:22:24.924 22:38:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:24.924 22:38:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:24.924 22:38:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:24.924 22:38:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:24.924 22:38:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:24.924 22:38:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:24.924 22:38:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:24.924 22:38:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:24.924 22:38:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:24.924 22:38:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:24.924 22:38:48 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@285 -- # xtrace_disable 00:22:24.924 22:38:48 nvmf_tcp.nvmf_fio_host -- 
common/autotest_common.sh@10 -- # set +x 00:22:30.221 22:38:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:30.221 22:38:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@291 -- # pci_devs=() 00:22:30.221 22:38:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:30.221 22:38:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:30.221 22:38:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:30.221 22:38:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:30.221 22:38:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:30.221 22:38:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@295 -- # net_devs=() 00:22:30.221 22:38:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:30.221 22:38:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@296 -- # e810=() 00:22:30.221 22:38:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@296 -- # local -ga e810 00:22:30.221 22:38:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@297 -- # x722=() 00:22:30.221 22:38:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@297 -- # local -ga x722 00:22:30.221 22:38:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@298 -- # mlx=() 00:22:30.221 22:38:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@298 -- # local -ga mlx 00:22:30.221 22:38:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:30.221 22:38:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:30.221 22:38:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:30.221 22:38:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:30.221 22:38:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:30.221 22:38:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:30.221 22:38:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:30.221 22:38:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:30.221 22:38:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:30.221 22:38:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:30.221 22:38:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:30.221 22:38:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:30.221 22:38:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:30.221 22:38:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:30.221 22:38:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:30.221 22:38:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:30.221 22:38:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:30.221 22:38:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:30.221 22:38:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:22:30.221 Found 0000:86:00.0 (0x8086 - 0x159b) 00:22:30.221 22:38:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 
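The discovery logic being traced here keys NIC families off PCI vendor:device IDs: the arrays built above map 0x8086:0x1592/0x159b to E810, 0x8086:0x37d2 to X722, and the 0x15b3 entries to Mellanox parts, and since this job takes the e810 branch ([[ e810 == e810 ]]) only the E810 list survives into pci_devs. A hypothetical manual equivalent of the lookup that matches the two 0x159b ports found here:

    # list Intel (0x8086) E810 functions with device ID 0x159b
    lspci -d 8086:159b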
00:22:30.221 22:38:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:30.221 22:38:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:30.221 22:38:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:30.221 22:38:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:30.221 22:38:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:30.221 22:38:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:22:30.221 Found 0000:86:00.1 (0x8086 - 0x159b) 00:22:30.221 22:38:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:30.221 22:38:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:30.221 22:38:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:30.221 22:38:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:30.221 22:38:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:30.221 22:38:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:30.221 22:38:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:30.221 22:38:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:30.221 22:38:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:30.221 22:38:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:30.221 22:38:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:30.221 22:38:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:30.221 22:38:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:30.221 22:38:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:30.221 22:38:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:30.221 22:38:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:22:30.222 Found net devices under 0000:86:00.0: cvl_0_0 00:22:30.222 22:38:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:30.222 22:38:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:30.222 22:38:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:30.222 22:38:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:30.222 22:38:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:30.222 22:38:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:30.222 22:38:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:30.222 22:38:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:30.222 22:38:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:22:30.222 Found net devices under 0000:86:00.1: cvl_0_1 00:22:30.222 22:38:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:30.222 22:38:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:30.222 22:38:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # is_hw=yes 
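Each matched PCI function is then resolved to its kernel net device through the sysfs glob shown above; redone by hand for the first port (bus address taken from the "Found net devices" lines), that lookup would be roughly:

    ls /sys/bus/pci/devices/0000:86:00.0/net    # -> cvl_0_0

Both cvl_* ports report up, so the script concludes is_hw=yes and proceeds to the TCP address setup that follows.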
00:22:30.222 22:38:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:30.222 22:38:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:30.222 22:38:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:30.222 22:38:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:30.222 22:38:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:30.222 22:38:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:30.222 22:38:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:30.222 22:38:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:30.222 22:38:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:30.222 22:38:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:30.222 22:38:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:30.222 22:38:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:30.222 22:38:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:30.222 22:38:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:30.222 22:38:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:30.222 22:38:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:30.222 22:38:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:30.222 22:38:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:30.222 22:38:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:30.222 22:38:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:30.222 22:38:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:30.222 22:38:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:30.222 22:38:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:30.222 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:30.222 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.271 ms 00:22:30.222 00:22:30.222 --- 10.0.0.2 ping statistics --- 00:22:30.222 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:30.222 rtt min/avg/max/mdev = 0.271/0.271/0.271/0.000 ms 00:22:30.222 22:38:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:30.222 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:30.222 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.262 ms 00:22:30.222 00:22:30.222 --- 10.0.0.1 ping statistics --- 00:22:30.222 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:30.222 rtt min/avg/max/mdev = 0.262/0.262/0.262/0.000 ms 00:22:30.222 22:38:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:30.222 22:38:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@422 -- # return 0 00:22:30.222 22:38:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:30.222 22:38:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:30.222 22:38:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:30.222 22:38:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:30.222 22:38:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:30.222 22:38:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:30.222 22:38:53 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:30.222 22:38:53 nvmf_tcp.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:22:30.222 22:38:53 nvmf_tcp.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:22:30.222 22:38:53 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:30.222 22:38:53 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:22:30.222 22:38:53 nvmf_tcp.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=91902 00:22:30.222 22:38:53 nvmf_tcp.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:30.222 22:38:53 nvmf_tcp.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 91902 00:22:30.222 22:38:53 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@829 -- # '[' -z 91902 ']' 00:22:30.222 22:38:53 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:30.222 22:38:53 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:30.222 22:38:53 nvmf_tcp.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:22:30.222 22:38:53 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:30.222 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:30.222 22:38:53 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:30.222 22:38:53 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:22:30.222 [2024-07-15 22:38:53.683351] Starting SPDK v24.09-pre git sha1 f8598a71f / DPDK 24.03.0 initialization... 00:22:30.222 [2024-07-15 22:38:53.683396] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:30.222 EAL: No free 2048 kB hugepages reported on node 1 00:22:30.222 [2024-07-15 22:38:53.739681] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:30.222 [2024-07-15 22:38:53.820871] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
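Condensing the target-side wiring performed in the trace above and the RPC configuration that appears next: one E810 port (cvl_0_0) moves into a private network namespace as 10.0.0.2/24, its peer (cvl_0_1) stays in the root namespace as the 10.0.0.1/24 initiator side, TCP port 4420 is opened, and nvmf_tgt runs inside the namespace. Reassembled from the trace, with paths shortened to repo-relative form:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

    # target launch (host/fio.sh@23 above), then the fio.sh setup RPCs
    # that follow in the trace:
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    ./scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

fio then drives the subsystem through SPDK's external ioengine: the spdk_nvme plugin is LD_PRELOADed and the target is addressed with --filename='trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' rather than a block device path.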
00:22:30.222 [2024-07-15 22:38:53.820905] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:30.222 [2024-07-15 22:38:53.820912] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:30.222 [2024-07-15 22:38:53.820918] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:30.222 [2024-07-15 22:38:53.820923] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:30.222 [2024-07-15 22:38:53.820958] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:30.222 [2024-07-15 22:38:53.821052] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:30.222 [2024-07-15 22:38:53.821070] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:22:30.222 [2024-07-15 22:38:53.821072] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:30.788 22:38:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:30.788 22:38:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@862 -- # return 0 00:22:30.788 22:38:54 nvmf_tcp.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:22:30.788 [2024-07-15 22:38:54.651676] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:30.788 22:38:54 nvmf_tcp.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:22:30.788 22:38:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:30.788 22:38:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:22:30.788 22:38:54 nvmf_tcp.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:22:31.047 Malloc1 00:22:31.047 22:38:54 nvmf_tcp.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:31.305 22:38:55 nvmf_tcp.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:22:31.562 22:38:55 nvmf_tcp.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:31.562 [2024-07-15 22:38:55.437894] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:31.562 22:38:55 nvmf_tcp.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:22:31.821 22:38:55 nvmf_tcp.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:22:31.821 22:38:55 nvmf_tcp.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:22:31.821 22:38:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 
trsvcid=4420 ns=1' --bs=4096 00:22:31.821 22:38:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:22:31.821 22:38:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:22:31.821 22:38:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:22:31.821 22:38:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:22:31.821 22:38:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:22:31.821 22:38:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:22:31.821 22:38:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:22:31.821 22:38:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:22:31.821 22:38:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:22:31.821 22:38:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:22:31.821 22:38:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:22:31.821 22:38:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:22:31.821 22:38:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:22:31.821 22:38:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:22:31.821 22:38:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:22:31.821 22:38:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:22:31.821 22:38:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:22:31.821 22:38:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:22:31.821 22:38:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:22:31.821 22:38:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:22:32.096 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:22:32.096 fio-3.35 00:22:32.096 Starting 1 thread 00:22:32.096 EAL: No free 2048 kB hugepages reported on node 1 00:22:34.634 00:22:34.634 test: (groupid=0, jobs=1): err= 0: pid=92304: Mon Jul 15 22:38:58 2024 00:22:34.634 read: IOPS=11.8k, BW=45.9MiB/s (48.2MB/s)(92.1MiB/2005msec) 00:22:34.634 slat (nsec): min=1575, max=374246, avg=1840.12, stdev=3321.69 00:22:34.634 clat (usec): min=3159, max=10420, avg=6026.10, stdev=467.99 00:22:34.634 lat (usec): min=3161, max=10422, avg=6027.94, stdev=468.07 00:22:34.634 clat percentiles (usec): 00:22:34.634 | 1.00th=[ 4883], 5.00th=[ 5276], 10.00th=[ 5473], 20.00th=[ 5669], 00:22:34.634 | 30.00th=[ 5800], 40.00th=[ 5932], 50.00th=[ 6063], 60.00th=[ 6128], 00:22:34.634 | 70.00th=[ 6259], 80.00th=[ 6390], 90.00th=[ 6587], 95.00th=[ 6718], 00:22:34.634 | 99.00th=[ 7046], 99.50th=[ 7308], 99.90th=[ 8979], 99.95th=[ 9372], 00:22:34.634 | 99.99th=[10028] 00:22:34.634 bw ( KiB/s): 
min=46000, max=47688, per=100.00%, avg=47050.00, stdev=754.90, samples=4 00:22:34.635 iops : min=11500, max=11922, avg=11762.50, stdev=188.72, samples=4 00:22:34.635 write: IOPS=11.7k, BW=45.7MiB/s (47.9MB/s)(91.6MiB/2005msec); 0 zone resets 00:22:34.635 slat (nsec): min=1642, max=363392, avg=1940.93, stdev=2606.32 00:22:34.635 clat (usec): min=2615, max=9376, avg=4842.08, stdev=400.63 00:22:34.635 lat (usec): min=2618, max=9377, avg=4844.02, stdev=400.77 00:22:34.635 clat percentiles (usec): 00:22:34.635 | 1.00th=[ 3949], 5.00th=[ 4293], 10.00th=[ 4424], 20.00th=[ 4555], 00:22:34.635 | 30.00th=[ 4686], 40.00th=[ 4752], 50.00th=[ 4817], 60.00th=[ 4948], 00:22:34.635 | 70.00th=[ 5014], 80.00th=[ 5145], 90.00th=[ 5276], 95.00th=[ 5407], 00:22:34.635 | 99.00th=[ 5735], 99.50th=[ 6063], 99.90th=[ 7963], 99.95th=[ 8848], 00:22:34.635 | 99.99th=[ 9372] 00:22:34.635 bw ( KiB/s): min=46440, max=47296, per=99.96%, avg=46762.00, stdev=383.02, samples=4 00:22:34.635 iops : min=11610, max=11824, avg=11690.50, stdev=95.75, samples=4 00:22:34.635 lat (msec) : 4=0.83%, 10=99.17%, 20=0.01% 00:22:34.635 cpu : usr=67.71%, sys=26.70%, ctx=97, majf=0, minf=6 00:22:34.635 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:22:34.635 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:34.635 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:22:34.635 issued rwts: total=23583,23450,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:34.635 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:34.635 00:22:34.635 Run status group 0 (all jobs): 00:22:34.635 READ: bw=45.9MiB/s (48.2MB/s), 45.9MiB/s-45.9MiB/s (48.2MB/s-48.2MB/s), io=92.1MiB (96.6MB), run=2005-2005msec 00:22:34.635 WRITE: bw=45.7MiB/s (47.9MB/s), 45.7MiB/s-45.7MiB/s (47.9MB/s-47.9MB/s), io=91.6MiB (96.1MB), run=2005-2005msec 00:22:34.635 22:38:58 nvmf_tcp.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:22:34.635 22:38:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:22:34.635 22:38:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:22:34.635 22:38:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:22:34.635 22:38:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:22:34.635 22:38:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:22:34.635 22:38:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:22:34.635 22:38:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:22:34.635 22:38:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:22:34.635 22:38:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:22:34.635 22:38:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:22:34.635 22:38:58 nvmf_tcp.nvmf_fio_host -- 
common/autotest_common.sh@1345 -- # awk '{print $3}' 00:22:34.635 22:38:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:22:34.635 22:38:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:22:34.635 22:38:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:22:34.635 22:38:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:22:34.635 22:38:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:22:34.635 22:38:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:22:34.635 22:38:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:22:34.635 22:38:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:22:34.635 22:38:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:22:34.635 22:38:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:22:34.635 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:22:34.635 fio-3.35 00:22:34.635 Starting 1 thread 00:22:34.635 EAL: No free 2048 kB hugepages reported on node 1 00:22:37.167 00:22:37.167 test: (groupid=0, jobs=1): err= 0: pid=92858: Mon Jul 15 22:39:00 2024 00:22:37.167 read: IOPS=10.2k, BW=160MiB/s (167MB/s)(320MiB/2005msec) 00:22:37.167 slat (nsec): min=2547, max=87176, avg=2897.49, stdev=1346.29 00:22:37.167 clat (usec): min=1907, max=53786, avg=7521.55, stdev=3436.71 00:22:37.167 lat (usec): min=1910, max=53789, avg=7524.44, stdev=3436.79 00:22:37.167 clat percentiles (usec): 00:22:37.167 | 1.00th=[ 3621], 5.00th=[ 4490], 10.00th=[ 5014], 20.00th=[ 5800], 00:22:37.167 | 30.00th=[ 6259], 40.00th=[ 6849], 50.00th=[ 7308], 60.00th=[ 7767], 00:22:37.167 | 70.00th=[ 8225], 80.00th=[ 8717], 90.00th=[ 9634], 95.00th=[10552], 00:22:37.167 | 99.00th=[12649], 99.50th=[13435], 99.90th=[52691], 99.95th=[53216], 00:22:37.167 | 99.99th=[53740] 00:22:37.167 bw ( KiB/s): min=72320, max=92064, per=50.58%, avg=82680.00, stdev=8115.58, samples=4 00:22:37.167 iops : min= 4520, max= 5754, avg=5167.50, stdev=507.22, samples=4 00:22:37.167 write: IOPS=6089, BW=95.1MiB/s (99.8MB/s)(169MiB/1778msec); 0 zone resets 00:22:37.167 slat (usec): min=29, max=350, avg=32.67, stdev= 7.37 00:22:37.167 clat (usec): min=4463, max=54011, avg=8720.26, stdev=2828.66 00:22:37.167 lat (usec): min=4499, max=54044, avg=8752.93, stdev=2829.66 00:22:37.167 clat percentiles (usec): 00:22:37.167 | 1.00th=[ 5735], 5.00th=[ 6456], 10.00th=[ 6783], 20.00th=[ 7308], 00:22:37.167 | 30.00th=[ 7701], 40.00th=[ 8094], 50.00th=[ 8455], 60.00th=[ 8717], 00:22:37.167 | 70.00th=[ 9241], 80.00th=[ 9765], 90.00th=[10683], 95.00th=[11338], 00:22:37.167 | 99.00th=[13304], 99.50th=[14222], 99.90th=[53216], 99.95th=[53740], 00:22:37.167 | 99.99th=[53740] 00:22:37.167 bw ( KiB/s): min=75072, max=95232, per=88.33%, avg=86056.00, stdev=8366.78, samples=4 00:22:37.167 iops : min= 4692, max= 5952, avg=5378.50, stdev=522.92, samples=4 00:22:37.167 lat (msec) : 2=0.01%, 4=1.44%, 10=87.92%, 20=10.23%, 50=0.06% 00:22:37.167 lat (msec) : 100=0.34% 00:22:37.167 
cpu : usr=84.63%, sys=13.57%, ctx=81, majf=0, minf=3 00:22:37.167 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:22:37.167 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:37.167 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:22:37.167 issued rwts: total=20484,10827,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:37.167 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:37.167 00:22:37.167 Run status group 0 (all jobs): 00:22:37.167 READ: bw=160MiB/s (167MB/s), 160MiB/s-160MiB/s (167MB/s-167MB/s), io=320MiB (336MB), run=2005-2005msec 00:22:37.167 WRITE: bw=95.1MiB/s (99.8MB/s), 95.1MiB/s-95.1MiB/s (99.8MB/s-99.8MB/s), io=169MiB (177MB), run=1778-1778msec 00:22:37.167 22:39:00 nvmf_tcp.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:37.167 22:39:01 nvmf_tcp.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:22:37.167 22:39:01 nvmf_tcp.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:22:37.167 22:39:01 nvmf_tcp.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:22:37.167 22:39:01 nvmf_tcp.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:22:37.167 22:39:01 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:37.167 22:39:01 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@117 -- # sync 00:22:37.167 22:39:01 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:37.167 22:39:01 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@120 -- # set +e 00:22:37.167 22:39:01 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:37.167 22:39:01 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:37.167 rmmod nvme_tcp 00:22:37.167 rmmod nvme_fabrics 00:22:37.167 rmmod nvme_keyring 00:22:37.167 22:39:01 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:37.167 22:39:01 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@124 -- # set -e 00:22:37.167 22:39:01 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@125 -- # return 0 00:22:37.167 22:39:01 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@489 -- # '[' -n 91902 ']' 00:22:37.167 22:39:01 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@490 -- # killprocess 91902 00:22:37.167 22:39:01 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@948 -- # '[' -z 91902 ']' 00:22:37.167 22:39:01 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@952 -- # kill -0 91902 00:22:37.167 22:39:01 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@953 -- # uname 00:22:37.167 22:39:01 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:37.167 22:39:01 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 91902 00:22:37.426 22:39:01 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:37.426 22:39:01 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:22:37.426 22:39:01 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@966 -- # echo 'killing process with pid 91902' 00:22:37.426 killing process with pid 91902 00:22:37.426 22:39:01 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@967 -- # kill 91902 00:22:37.426 22:39:01 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@972 -- # wait 91902 00:22:37.426 22:39:01 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:37.426 22:39:01 
nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:37.426 22:39:01 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:37.426 22:39:01 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:37.426 22:39:01 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:37.426 22:39:01 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:37.426 22:39:01 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:37.426 22:39:01 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:39.963 22:39:03 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:39.963 00:22:39.963 real 0m14.798s 00:22:39.963 user 0m46.422s 00:22:39.963 sys 0m5.675s 00:22:39.963 22:39:03 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:39.963 22:39:03 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:22:39.963 ************************************ 00:22:39.963 END TEST nvmf_fio_host 00:22:39.963 ************************************ 00:22:39.963 22:39:03 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:22:39.963 22:39:03 nvmf_tcp -- nvmf/nvmf.sh@100 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:22:39.963 22:39:03 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:22:39.963 22:39:03 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:39.963 22:39:03 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:39.963 ************************************ 00:22:39.963 START TEST nvmf_failover 00:22:39.963 ************************************ 00:22:39.963 22:39:03 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:22:39.963 * Looking for test storage... 
00:22:39.963 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:39.963 22:39:03 nvmf_tcp.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:39.963 22:39:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:22:39.963 22:39:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:39.963 22:39:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:39.963 22:39:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:39.963 22:39:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:39.963 22:39:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:39.963 22:39:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:39.963 22:39:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:39.963 22:39:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:39.963 22:39:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:39.963 22:39:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:39.963 22:39:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:39.963 22:39:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:22:39.963 22:39:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:39.963 22:39:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:39.963 22:39:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:39.963 22:39:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:39.963 22:39:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:39.963 22:39:03 nvmf_tcp.nvmf_failover -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:39.963 22:39:03 nvmf_tcp.nvmf_failover -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:39.963 22:39:03 nvmf_tcp.nvmf_failover -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:39.963 22:39:03 nvmf_tcp.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:39.964 22:39:03 nvmf_tcp.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:39.964 22:39:03 nvmf_tcp.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:39.964 22:39:03 nvmf_tcp.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:22:39.964 22:39:03 nvmf_tcp.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:39.964 22:39:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@47 -- # : 0 00:22:39.964 22:39:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:39.964 22:39:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:39.964 22:39:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:39.964 22:39:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:39.964 22:39:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:39.964 22:39:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:39.964 22:39:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:39.964 22:39:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:39.964 22:39:03 nvmf_tcp.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:39.964 22:39:03 nvmf_tcp.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:39.964 22:39:03 nvmf_tcp.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:39.964 22:39:03 nvmf_tcp.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:39.964 22:39:03 nvmf_tcp.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:22:39.964 22:39:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:39.964 22:39:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:39.964 22:39:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:39.964 22:39:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@410 -- # local -g 
is_hw=no 00:22:39.964 22:39:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:39.964 22:39:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:39.964 22:39:03 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:39.964 22:39:03 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:39.964 22:39:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:39.964 22:39:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:39.964 22:39:03 nvmf_tcp.nvmf_failover -- nvmf/common.sh@285 -- # xtrace_disable 00:22:39.964 22:39:03 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:22:45.279 22:39:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:45.279 22:39:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@291 -- # pci_devs=() 00:22:45.279 22:39:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:45.279 22:39:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:45.279 22:39:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:45.279 22:39:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:45.279 22:39:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:45.279 22:39:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@295 -- # net_devs=() 00:22:45.279 22:39:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:45.279 22:39:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@296 -- # e810=() 00:22:45.279 22:39:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@296 -- # local -ga e810 00:22:45.279 22:39:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@297 -- # x722=() 00:22:45.279 22:39:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@297 -- # local -ga x722 00:22:45.279 22:39:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@298 -- # mlx=() 00:22:45.279 22:39:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@298 -- # local -ga mlx 00:22:45.279 22:39:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:45.279 22:39:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:45.279 22:39:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:45.279 22:39:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:45.279 22:39:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:45.279 22:39:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:45.279 22:39:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:45.279 22:39:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:45.279 22:39:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:45.279 22:39:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:45.279 22:39:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:45.279 22:39:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:45.279 22:39:08 nvmf_tcp.nvmf_failover -- 
nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:45.279 22:39:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:45.279 22:39:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:45.279 22:39:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:45.279 22:39:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:45.279 22:39:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:45.279 22:39:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:22:45.279 Found 0000:86:00.0 (0x8086 - 0x159b) 00:22:45.279 22:39:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:45.279 22:39:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:45.279 22:39:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:45.280 22:39:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:45.280 22:39:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:45.280 22:39:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:45.280 22:39:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:22:45.280 Found 0000:86:00.1 (0x8086 - 0x159b) 00:22:45.280 22:39:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:45.280 22:39:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:45.280 22:39:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:45.280 22:39:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:45.280 22:39:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:45.280 22:39:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:45.280 22:39:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:45.280 22:39:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:45.280 22:39:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:45.280 22:39:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:45.280 22:39:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:45.280 22:39:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:45.280 22:39:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:45.280 22:39:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:45.280 22:39:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:45.280 22:39:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:22:45.280 Found net devices under 0000:86:00.0: cvl_0_0 00:22:45.280 22:39:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:45.280 22:39:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:45.280 22:39:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:45.280 22:39:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:45.280 22:39:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in 
"${!pci_net_devs[@]}" 00:22:45.280 22:39:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:45.280 22:39:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:45.280 22:39:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:45.280 22:39:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:22:45.280 Found net devices under 0000:86:00.1: cvl_0_1 00:22:45.280 22:39:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:45.280 22:39:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:45.280 22:39:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # is_hw=yes 00:22:45.280 22:39:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:45.280 22:39:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:45.280 22:39:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:45.280 22:39:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:45.280 22:39:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:45.280 22:39:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:45.280 22:39:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:45.280 22:39:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:45.280 22:39:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:45.280 22:39:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:45.280 22:39:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:45.280 22:39:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:45.280 22:39:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:45.280 22:39:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:45.280 22:39:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:45.280 22:39:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:45.280 22:39:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:45.280 22:39:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:45.280 22:39:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:45.280 22:39:08 nvmf_tcp.nvmf_failover -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:45.280 22:39:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:45.280 22:39:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:45.280 22:39:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:45.280 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:22:45.280 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.159 ms 00:22:45.280 00:22:45.280 --- 10.0.0.2 ping statistics --- 00:22:45.280 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:45.280 rtt min/avg/max/mdev = 0.159/0.159/0.159/0.000 ms 00:22:45.280 22:39:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:45.280 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:45.280 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.155 ms 00:22:45.280 00:22:45.280 --- 10.0.0.1 ping statistics --- 00:22:45.280 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:45.280 rtt min/avg/max/mdev = 0.155/0.155/0.155/0.000 ms 00:22:45.280 22:39:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:45.280 22:39:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@422 -- # return 0 00:22:45.280 22:39:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:45.280 22:39:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:45.280 22:39:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:45.280 22:39:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:45.280 22:39:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:45.280 22:39:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:45.280 22:39:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:45.280 22:39:09 nvmf_tcp.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:22:45.280 22:39:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:45.280 22:39:09 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:45.280 22:39:09 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:22:45.280 22:39:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@481 -- # nvmfpid=97081 00:22:45.280 22:39:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@482 -- # waitforlisten 97081 00:22:45.280 22:39:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:22:45.280 22:39:09 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 97081 ']' 00:22:45.280 22:39:09 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:45.280 22:39:09 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:45.280 22:39:09 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:45.280 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:45.280 22:39:09 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:45.280 22:39:09 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:22:45.280 [2024-07-15 22:39:09.187038] Starting SPDK v24.09-pre git sha1 f8598a71f / DPDK 24.03.0 initialization... 
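For anyone replaying this bring-up by hand, the nvmftestinit trace above reduces to a short sequence: move one port of the NIC pair into a private network namespace, address both ends, open the NVMe/TCP port, verify reachability, and start the target inside the namespace. A minimal sketch, assuming the same cvl_0_0/cvl_0_1 device names and 10.0.0.0/24 addressing as this run (run as root; the nvmf_tgt path is written relative to an SPDK checkout rather than this job's workspace):

# Target port goes into its own namespace; the initiator port stays in the root ns.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # admit NVMe/TCP
ping -c 1 10.0.0.2                                                   # reachability check
# Then launch the target inside the namespace, as the harness does:
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE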
00:22:45.280 [2024-07-15 22:39:09.187081] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:45.280 EAL: No free 2048 kB hugepages reported on node 1 00:22:45.280 [2024-07-15 22:39:09.245276] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:22:45.539 [2024-07-15 22:39:09.323935] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:45.539 [2024-07-15 22:39:09.323976] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:45.539 [2024-07-15 22:39:09.323983] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:45.539 [2024-07-15 22:39:09.323989] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:45.539 [2024-07-15 22:39:09.323994] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:45.539 [2024-07-15 22:39:09.324182] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:45.539 [2024-07-15 22:39:09.324249] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:22:45.539 [2024-07-15 22:39:09.324251] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:46.105 22:39:10 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:46.105 22:39:10 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:22:46.105 22:39:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:46.105 22:39:10 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:46.105 22:39:10 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:22:46.105 22:39:10 nvmf_tcp.nvmf_failover -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:46.105 22:39:10 nvmf_tcp.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:22:46.362 [2024-07-15 22:39:10.201385] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:46.362 22:39:10 nvmf_tcp.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:22:46.647 Malloc0 00:22:46.647 22:39:10 nvmf_tcp.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:46.905 22:39:10 nvmf_tcp.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:46.905 22:39:10 nvmf_tcp.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:47.163 [2024-07-15 22:39:10.962478] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:47.163 22:39:10 nvmf_tcp.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:22:47.419 [2024-07-15 
22:39:11.142940] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:22:47.419 22:39:11 nvmf_tcp.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:22:47.419 [2024-07-15 22:39:11.323504] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:22:47.420 22:39:11 nvmf_tcp.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=97597 00:22:47.420 22:39:11 nvmf_tcp.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:22:47.420 22:39:11 nvmf_tcp.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:47.420 22:39:11 nvmf_tcp.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 97597 /var/tmp/bdevperf.sock 00:22:47.420 22:39:11 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 97597 ']' 00:22:47.420 22:39:11 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:47.420 22:39:11 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:47.420 22:39:11 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:47.420 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:47.420 22:39:11 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:47.420 22:39:11 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:22:48.352 22:39:12 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:48.352 22:39:12 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:22:48.352 22:39:12 nvmf_tcp.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:48.610 NVMe0n1 00:22:48.610 22:39:12 nvmf_tcp.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:48.870 00:22:48.870 22:39:12 nvmf_tcp.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:48.870 22:39:12 nvmf_tcp.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=97831 00:22:48.870 22:39:12 nvmf_tcp.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:22:50.249 22:39:13 nvmf_tcp.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:50.249 [2024-07-15 22:39:13.967171] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c7080 is same with the state(5) to be set 00:22:50.249 [2024-07-15 22:39:13.967246] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c7080 is 
same with the state(5) to be set 00:22:50.249 [the identical nvmf_tcp_qpair_set_recv_state message for tqpair=0x9c7080 repeats several dozen more times] 00:22:50.249 22:39:13 nvmf_tcp.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:22:53.538 22:39:16 nvmf_tcp.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:53.538 00 00:22:53.796 22:39:17 nvmf_tcp.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:22:53.796 [2024-07-15 22:39:17.603694] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c8460 is same with the state(5) to be set 00:22:53.796 [the identical message for tqpair=0x9c8460 repeats a few dozen more times]
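The failover choreography traced above is all plain rpc.py traffic against two sockets: the target is reconfigured over its default RPC socket while bdevperf, driven over /var/tmp/bdevperf.sock, keeps I/O running. A hedged sketch of the same sequence (the $rpc/$brpc shorthands are illustrative, not from the harness; paths relative to an SPDK checkout):

rpc='./scripts/rpc.py'                               # target RPC socket (default /var/tmp/spdk.sock)
brpc='./scripts/rpc.py -s /var/tmp/bdevperf.sock'    # bdevperf RPC socket
# Give bdevperf two portals to the same subsystem so it has a failover path.
$brpc bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
$brpc bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
# Knock out the active portal; I/O should migrate to 4421.
$rpc nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
sleep 3
# Offer a third portal and retire 4421, forcing a second failover.
$brpc bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
$rpc nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421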
00:22:53.797 22:39:17 nvmf_tcp.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:22:57.083 22:39:20 nvmf_tcp.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:57.083 [2024-07-15 22:39:20.805780] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:57.083 22:39:20 nvmf_tcp.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:22:58.018 22:39:21 nvmf_tcp.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:22:58.277 [2024-07-15 22:39:22.028990] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb81ac0 is same with the state(5) to be set 00:22:58.277 [the identical message for tqpair=0xb81ac0 repeats six more times] 00:22:58.277 22:39:22 nvmf_tcp.nvmf_failover -- host/failover.sh@59 -- # wait 97831 00:23:04.845 0 00:23:04.845 22:39:27 nvmf_tcp.nvmf_failover -- host/failover.sh@61 -- # killprocess 97597 00:23:04.845 22:39:27 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 97597 ']' 00:23:04.845 22:39:27 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 97597 00:23:04.845 22:39:27 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:23:04.845 22:39:27 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:04.845 22:39:27 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 97597 00:23:04.845 22:39:27 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:23:04.845 22:39:27 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:23:04.845 22:39:27 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 97597' 00:23:04.845 killing process with pid 97597 00:23:04.845 22:39:27 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 97597 00:23:04.845 22:39:27 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 97597 00:23:04.845 22:39:28 nvmf_tcp.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:23:04.845 [2024-07-15 22:39:11.397223] Starting SPDK v24.09-pre git sha1 f8598a71f / DPDK 24.03.0 initialization... 00:23:04.845 [2024-07-15 22:39:11.397288] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid97597 ] 00:23:04.845 EAL: No free 2048 kB hugepages reported on node 1 00:23:04.845 [2024-07-15 22:39:11.449760] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:04.845 [2024-07-15 22:39:11.529897] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:04.845 Running I/O for 15 seconds...
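The try.txt being dumped here is bdevperf's own log. The driver pattern behind it is worth noting: bdevperf starts idle with -z, NVMe paths are attached over its RPC socket, and the run itself is kicked off by the helper script, whose exit status is what host/failover.sh waits on. A sketch of that pattern with the same parameters as this run (paths again relative to an SPDK checkout, not this workspace):

# Start bdevperf idle; -z makes it wait for RPC configuration before running I/O.
./build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f &
# ... bdev_nvme_attach_controller calls over /var/tmp/bdevperf.sock, as shown earlier ...
# Kick off the 15-second verify workload; the script relays bdevperf's pass/fail result.
./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests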
00:23:04.845 [2024-07-15 22:39:13.969117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:97384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.845 [2024-07-15 22:39:13.969154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.845 [matching nvme_io_qpair_print_command / spdk_nvme_print_completion pairs follow for every other in-flight READ and WRITE command, lba 97392 through 98024, each completed ABORTED - SQ DELETION during the failover at 22:39:13]
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.847 [2024-07-15 22:39:13.970398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:98032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:04.847 [2024-07-15 22:39:13.970404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.847 [2024-07-15 22:39:13.970412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:98040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:04.847 [2024-07-15 22:39:13.970418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.847 [2024-07-15 22:39:13.970427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:98048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:04.847 [2024-07-15 22:39:13.970434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.847 [2024-07-15 22:39:13.970442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:98056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:04.847 [2024-07-15 22:39:13.970448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.847 [2024-07-15 22:39:13.970456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:98064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:04.847 [2024-07-15 22:39:13.970462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.847 [2024-07-15 22:39:13.970470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:98072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:04.847 [2024-07-15 22:39:13.970478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.847 [2024-07-15 22:39:13.970486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:98080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:04.847 [2024-07-15 22:39:13.970493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.847 [2024-07-15 22:39:13.970500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:98088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:04.847 [2024-07-15 22:39:13.970507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.847 [2024-07-15 22:39:13.970515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:98096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:04.847 [2024-07-15 22:39:13.970521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.847 [2024-07-15 22:39:13.970530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:98104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:04.847 [2024-07-15 22:39:13.970537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:23:04.847 [2024-07-15 22:39:13.970544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:98112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:04.847 [2024-07-15 22:39:13.970551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.847 [2024-07-15 22:39:13.970559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:98120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:04.847 [2024-07-15 22:39:13.970565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.847 [2024-07-15 22:39:13.970573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:98128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:04.847 [2024-07-15 22:39:13.970581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.847 [2024-07-15 22:39:13.970590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:98136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:04.847 [2024-07-15 22:39:13.970596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.847 [2024-07-15 22:39:13.970604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:98144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:04.847 [2024-07-15 22:39:13.970610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.847 [2024-07-15 22:39:13.970618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:98152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:04.847 [2024-07-15 22:39:13.970628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.847 [2024-07-15 22:39:13.970637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:98160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:04.847 [2024-07-15 22:39:13.970643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.847 [2024-07-15 22:39:13.970651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:98168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:04.847 [2024-07-15 22:39:13.970657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.847 [2024-07-15 22:39:13.970666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:98176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:04.847 [2024-07-15 22:39:13.970673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.847 [2024-07-15 22:39:13.970680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:98184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:04.847 [2024-07-15 22:39:13.970687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.847 [2024-07-15 22:39:13.970695] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:98192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:04.847 [2024-07-15 22:39:13.970702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.847 [2024-07-15 22:39:13.970709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:98200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:04.847 [2024-07-15 22:39:13.970715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.847 [2024-07-15 22:39:13.970723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:98208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:04.847 [2024-07-15 22:39:13.970729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.847 [2024-07-15 22:39:13.970738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:98216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:04.847 [2024-07-15 22:39:13.970744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.847 [2024-07-15 22:39:13.970752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:98224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:04.847 [2024-07-15 22:39:13.970758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.847 [2024-07-15 22:39:13.970766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:98232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:04.848 [2024-07-15 22:39:13.970772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.848 [2024-07-15 22:39:13.970780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:98240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:04.848 [2024-07-15 22:39:13.970787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.848 [2024-07-15 22:39:13.970794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:98248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:04.848 [2024-07-15 22:39:13.970801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.848 [2024-07-15 22:39:13.970808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:98256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:04.848 [2024-07-15 22:39:13.970816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.848 [2024-07-15 22:39:13.970824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:98264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:04.848 [2024-07-15 22:39:13.970830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.848 [2024-07-15 22:39:13.970838] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:98272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:04.848 [2024-07-15 22:39:13.970845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.848 [2024-07-15 22:39:13.970867] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:04.848 [2024-07-15 22:39:13.970874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98280 len:8 PRP1 0x0 PRP2 0x0 00:23:04.848 [2024-07-15 22:39:13.970881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.848 [2024-07-15 22:39:13.970891] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:04.848 [2024-07-15 22:39:13.970896] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:04.848 [2024-07-15 22:39:13.970902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98288 len:8 PRP1 0x0 PRP2 0x0 00:23:04.848 [2024-07-15 22:39:13.970908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.848 [2024-07-15 22:39:13.970914] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:04.848 [2024-07-15 22:39:13.970919] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:04.848 [2024-07-15 22:39:13.970924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98296 len:8 PRP1 0x0 PRP2 0x0 00:23:04.848 [2024-07-15 22:39:13.970930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.848 [2024-07-15 22:39:13.970937] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:04.848 [2024-07-15 22:39:13.970942] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:04.848 [2024-07-15 22:39:13.970948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98304 len:8 PRP1 0x0 PRP2 0x0 00:23:04.848 [2024-07-15 22:39:13.970954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.848 [2024-07-15 22:39:13.970961] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:04.848 [2024-07-15 22:39:13.970966] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:04.848 [2024-07-15 22:39:13.970971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98312 len:8 PRP1 0x0 PRP2 0x0 00:23:04.848 [2024-07-15 22:39:13.970977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.848 [2024-07-15 22:39:13.970983] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:04.848 [2024-07-15 22:39:13.970989] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:04.848 [2024-07-15 22:39:13.970995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98320 len:8 PRP1 0x0 PRP2 0x0 00:23:04.848 [2024-07-15 22:39:13.971001] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.848 [2024-07-15 22:39:13.971008] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:04.848 [2024-07-15 22:39:13.971013] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:04.848 [2024-07-15 22:39:13.971018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98328 len:8 PRP1 0x0 PRP2 0x0 00:23:04.848 [2024-07-15 22:39:13.971024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.848 [2024-07-15 22:39:13.971031] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:04.848 [2024-07-15 22:39:13.971036] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:04.848 [2024-07-15 22:39:13.971042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98336 len:8 PRP1 0x0 PRP2 0x0 00:23:04.848 [2024-07-15 22:39:13.971050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.848 [2024-07-15 22:39:13.971056] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:04.848 [2024-07-15 22:39:13.971061] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:04.848 [2024-07-15 22:39:13.971068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98344 len:8 PRP1 0x0 PRP2 0x0 00:23:04.848 [2024-07-15 22:39:13.971074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.848 [2024-07-15 22:39:13.971080] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:04.848 [2024-07-15 22:39:13.971085] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:04.848 [2024-07-15 22:39:13.971090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98352 len:8 PRP1 0x0 PRP2 0x0 00:23:04.848 [2024-07-15 22:39:13.971097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.848 [2024-07-15 22:39:13.971104] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:04.848 [2024-07-15 22:39:13.971109] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:04.848 [2024-07-15 22:39:13.971114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98360 len:8 PRP1 0x0 PRP2 0x0 00:23:04.848 [2024-07-15 22:39:13.971120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.848 [2024-07-15 22:39:13.971126] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:04.848 [2024-07-15 22:39:13.971131] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:04.848 [2024-07-15 22:39:13.971136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98368 len:8 PRP1 0x0 PRP2 0x0 00:23:04.848 [2024-07-15 22:39:13.971143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.848 [2024-07-15 22:39:13.971150] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:04.848 [2024-07-15 22:39:13.971155] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:04.848 [2024-07-15 22:39:13.971160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98376 len:8 PRP1 0x0 PRP2 0x0 00:23:04.848 [2024-07-15 22:39:13.971166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.848 [2024-07-15 22:39:13.971172] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:04.848 [2024-07-15 22:39:13.971177] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:04.848 [2024-07-15 22:39:13.971182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98384 len:8 PRP1 0x0 PRP2 0x0 00:23:04.848 [2024-07-15 22:39:13.971188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.848 [2024-07-15 22:39:13.981897] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:04.848 [2024-07-15 22:39:13.981909] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:04.848 [2024-07-15 22:39:13.981917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98392 len:8 PRP1 0x0 PRP2 0x0 00:23:04.848 [2024-07-15 22:39:13.981927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.848 [2024-07-15 22:39:13.981937] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:04.848 [2024-07-15 22:39:13.981944] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:04.848 [2024-07-15 22:39:13.981954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98400 len:8 PRP1 0x0 PRP2 0x0 00:23:04.848 [2024-07-15 22:39:13.981963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.848 [2024-07-15 22:39:13.982011] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x197c300 was disconnected and freed. reset controller. 
00:23:04.848 [2024-07-15 22:39:13.982022] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:23:04.848 [2024-07-15 22:39:13.982050] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:04.848 [2024-07-15 22:39:13.982061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.848 [2024-07-15 22:39:13.982071] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:04.848 [2024-07-15 22:39:13.982080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.848 [2024-07-15 22:39:13.982089] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:04.848 [2024-07-15 22:39:13.982099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.848 [2024-07-15 22:39:13.982110] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:04.848 [2024-07-15 22:39:13.982119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.848 [2024-07-15 22:39:13.982128] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:04.848 [2024-07-15 22:39:13.982173] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x195e540 (9): Bad file descriptor 00:23:04.848 [2024-07-15 22:39:13.986060] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:04.848 [2024-07-15 22:39:14.103332] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
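(Editorial aside, not part of the test output: the "(00/08)" printed with each aborted completion above is the NVMe status code type / status code pair, sct=0x00 GENERIC, sc=0x08 ABORTED - SQ DELETION. A minimal sketch of decoding that pair with the public definitions from SPDK's include/spdk/nvme_spec.h -- the names come from the SPDK headers, not from this log:)

    /* Sketch: true when a completion was aborted because its submission
     * queue was deleted -- the "(00/08)" status seen throughout this log. */
    #include <stdbool.h>
    #include "spdk/nvme_spec.h"

    static bool
    cpl_is_aborted_sq_deletion(const struct spdk_nvme_cpl *cpl)
    {
        return cpl->status.sct == SPDK_NVME_SCT_GENERIC &&
               cpl->status.sc == SPDK_NVME_SC_ABORTED_SQ_DELETION;
    }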
00:23:04.848 [2024-07-15 22:39:17.604912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:55128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.848 [2024-07-15 22:39:17.604948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same notice pair repeats after the failover: READ lba:55136-55480 and WRITE lba:55520-55960 on qid:1, all completed as ABORTED - SQ DELETION (00/08) ...]
00:23:04.851 [2024-07-15 22:39:17.606481] nvme_qpair.c:
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:55968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:04.851 [2024-07-15 22:39:17.606489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.851 [2024-07-15 22:39:17.606498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:55976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:04.851 [2024-07-15 22:39:17.606505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.851 [2024-07-15 22:39:17.606512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:55984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:04.851 [2024-07-15 22:39:17.606519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.851 [2024-07-15 22:39:17.606526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:55992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:04.851 [2024-07-15 22:39:17.606533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.851 [2024-07-15 22:39:17.606542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:56000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:04.851 [2024-07-15 22:39:17.606548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.851 [2024-07-15 22:39:17.606556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:56008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:04.851 [2024-07-15 22:39:17.606567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.851 [2024-07-15 22:39:17.606576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:56016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:04.851 [2024-07-15 22:39:17.606582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.851 [2024-07-15 22:39:17.606590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:56024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:04.851 [2024-07-15 22:39:17.606597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.851 [2024-07-15 22:39:17.606605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:56032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:04.851 [2024-07-15 22:39:17.606611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.851 [2024-07-15 22:39:17.606619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:56040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:04.851 [2024-07-15 22:39:17.606626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.851 [2024-07-15 22:39:17.606634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:56 nsid:1 lba:56048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:04.851 [2024-07-15 22:39:17.606640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.851 [2024-07-15 22:39:17.606649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:56056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:04.851 [2024-07-15 22:39:17.606656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.851 [2024-07-15 22:39:17.606664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:56064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:04.851 [2024-07-15 22:39:17.606671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.851 [2024-07-15 22:39:17.606679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:56072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:04.851 [2024-07-15 22:39:17.606685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.851 [2024-07-15 22:39:17.606693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:56080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:04.851 [2024-07-15 22:39:17.606700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.851 [2024-07-15 22:39:17.606709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:56088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:04.851 [2024-07-15 22:39:17.606715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.851 [2024-07-15 22:39:17.606723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:56096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:04.851 [2024-07-15 22:39:17.606730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.851 [2024-07-15 22:39:17.606738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:56104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:04.851 [2024-07-15 22:39:17.606744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.851 [2024-07-15 22:39:17.606754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:56112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:04.851 [2024-07-15 22:39:17.606761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.851 [2024-07-15 22:39:17.606769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:56120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:04.851 [2024-07-15 22:39:17.606775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.851 [2024-07-15 22:39:17.606783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:56128 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:23:04.851 [2024-07-15 22:39:17.606789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.851 [2024-07-15 22:39:17.606797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:56136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:04.851 [2024-07-15 22:39:17.606804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.851 [2024-07-15 22:39:17.606811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:56144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:04.851 [2024-07-15 22:39:17.606818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.851 [2024-07-15 22:39:17.606826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:55488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.851 [2024-07-15 22:39:17.606833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.851 [2024-07-15 22:39:17.606841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:55496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.852 [2024-07-15 22:39:17.606847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.852 [2024-07-15 22:39:17.606855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:55504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.852 [2024-07-15 22:39:17.606862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.852 [2024-07-15 22:39:17.606882] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:04.852 [2024-07-15 22:39:17.606889] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:04.852 [2024-07-15 22:39:17.606894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:55512 len:8 PRP1 0x0 PRP2 0x0 00:23:04.852 [2024-07-15 22:39:17.606901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.852 [2024-07-15 22:39:17.606944] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1b29380 was disconnected and freed. reset controller. 
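The storm above is expected in this failover test: when the submission queue is deleted, every I/O still outstanding on the qpair is completed with ABORTED - SQ DELETION, and the driver prints each command/completion pair. A quick way to read such a storm from a saved copy of this console is to summarize it by opcode and LBA span instead of line by line; a minimal sketch, assuming the log was saved as console.log (the file name is an assumption, not part of this job):

# Summarize an abort storm: per opcode, command count and LBA range.
grep -o '\(READ\|WRITE\) sqid:1 cid:[0-9]* nsid:1 lba:[0-9]*' console.log |
awk '{
  op = $1; split($NF, a, ":"); n = a[2] + 0   # LBA from the trailing "lba:NNNNN"
  cnt[op]++
  if (!(op in lo) || n < lo[op]) lo[op] = n   # min LBA seen per opcode
  if (n > hi[op]) hi[op] = n                  # max LBA seen per opcode
} END {
  for (op in cnt) printf "%-5s %5d cmds  lba %d-%d\n", op, cnt[op], lo[op], hi[op]
}'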
00:23:04.852 [2024-07-15 22:39:17.606953] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422
00:23:04.852 [2024-07-15 22:39:17.606973] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:23:04.852 [2024-07-15 22:39:17.606980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:04.852 [2024-07-15 22:39:17.606989] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:23:04.852 [2024-07-15 22:39:17.606995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:04.852 [2024-07-15 22:39:17.607004] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:23:04.852 [2024-07-15 22:39:17.607010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:04.852 [2024-07-15 22:39:17.607017] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:23:04.852 [2024-07-15 22:39:17.607025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:04.852 [2024-07-15 22:39:17.607031] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:04.852 [2024-07-15 22:39:17.607053] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x195e540 (9): Bad file descriptor
00:23:04.852 [2024-07-15 22:39:17.609878] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:04.852 [2024-07-15 22:39:17.718873] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
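The failover notice shows bdev_nvme moving the controller's transport ID from 10.0.0.2:4421 to 10.0.0.2:4422, which only works because the same subsystem is exported on both listeners. For reference, a minimal sketch of how such a two-listener target is typically provisioned with SPDK's rpc.py; the bdev name, size, and serial number here are assumptions, not taken from this job's scripts:

# Sketch only: TCP transport, one subsystem, one namespace, two listeners.
./scripts/rpc.py nvmf_create_transport -t tcp
./scripts/rpc.py bdev_malloc_create -b Malloc0 64 512
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
# Two listeners on the same subsystem give bdev_nvme a second path to fail over to.
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422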
00:23:04.852 [2024-07-15 22:39:22.029409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:87152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:04.852 [2024-07-15 22:39:22.029445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:04.852 [2024-07-15 22:39:22.029460 .. 22:39:22.030333] nvme_qpair.c: 243/474: [... ~57 command/completion pairs: READ sqid:1 lba:87160-87608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, all ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 ...]
00:23:04.853 [2024-07-15 22:39:22.030341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:87632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:04.853 [2024-07-15 22:39:22.030347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:04.853 [2024-07-15 22:39:22.030356 .. 22:39:22.031035] nvme_qpair.c: 243/474: [... ~47 command/completion pairs: WRITE sqid:1 lba:87640-88008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000, all ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 ...]
00:23:04.855 [2024-07-15 22:39:22.031067] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:23:04.855 [2024-07-15 22:39:22.031074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:88016 len:8 PRP1 0x0 PRP2 0x0
00:23:04.855 [2024-07-15 22:39:22.031083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:04.855 [2024-07-15 22:39:22.031093 .. 22:39:22.031431] nvme_qpair.c: 579/558/243/474: [... ~14 manual completions: aborting queued i/o, Command completed manually: WRITE sqid:1 cid:0 lba:88024-88128 len:8 PRP1 0x0 PRP2 0x0, each ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 ...]
00:23:04.855 [2024-07-15 22:39:22.031437] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:23:04.855 [2024-07-15 22:39:22.031443] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:23:04.855 [2024-07-15 22:39:22.031448] nvme_qpair.c:
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:88136 len:8 PRP1 0x0 PRP2 0x0 00:23:04.855 [2024-07-15 22:39:22.031454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.855 [2024-07-15 22:39:22.031461] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:04.855 [2024-07-15 22:39:22.031466] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:04.855 [2024-07-15 22:39:22.031472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:88144 len:8 PRP1 0x0 PRP2 0x0 00:23:04.855 [2024-07-15 22:39:22.031478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.855 [2024-07-15 22:39:22.031486] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:04.855 [2024-07-15 22:39:22.031490] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:04.855 [2024-07-15 22:39:22.031496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:88152 len:8 PRP1 0x0 PRP2 0x0 00:23:04.855 [2024-07-15 22:39:22.031502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.855 [2024-07-15 22:39:22.031510] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:04.855 [2024-07-15 22:39:22.031515] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:04.855 [2024-07-15 22:39:22.031521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:88160 len:8 PRP1 0x0 PRP2 0x0 00:23:04.855 [2024-07-15 22:39:22.031527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.855 [2024-07-15 22:39:22.031533] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:04.855 [2024-07-15 22:39:22.031539] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:04.855 [2024-07-15 22:39:22.031545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:88168 len:8 PRP1 0x0 PRP2 0x0 00:23:04.855 [2024-07-15 22:39:22.031551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.855 [2024-07-15 22:39:22.031558] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:04.855 [2024-07-15 22:39:22.031563] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:04.855 [2024-07-15 22:39:22.031568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:87616 len:8 PRP1 0x0 PRP2 0x0 00:23:04.855 [2024-07-15 22:39:22.031575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.855 [2024-07-15 22:39:22.031581] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:04.855 [2024-07-15 22:39:22.031586] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:04.855 [2024-07-15 22:39:22.031592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:0 nsid:1 lba:87624 len:8 PRP1 0x0 PRP2 0x0 00:23:04.855 [2024-07-15 22:39:22.031599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.855 [2024-07-15 22:39:22.031641] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1b29170 was disconnected and freed. reset controller. 00:23:04.855 [2024-07-15 22:39:22.031650] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:23:04.855 [2024-07-15 22:39:22.031670] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:04.855 [2024-07-15 22:39:22.031677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.855 [2024-07-15 22:39:22.042095] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:04.855 [2024-07-15 22:39:22.042108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.855 [2024-07-15 22:39:22.042118] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:04.855 [2024-07-15 22:39:22.042126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.855 [2024-07-15 22:39:22.042135] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:04.855 [2024-07-15 22:39:22.042142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.855 [2024-07-15 22:39:22.042150] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:04.855 [2024-07-15 22:39:22.042185] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x195e540 (9): Bad file descriptor 00:23:04.855 [2024-07-15 22:39:22.045511] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:04.855 [2024-07-15 22:39:22.072975] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:23:04.855
00:23:04.856 Latency(us)
00:23:04.856 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:04.856 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:23:04.856 Verification LBA range: start 0x0 length 0x4000
00:23:04.856 NVMe0n1 : 15.01 10897.94 42.57 753.60 0.00 10963.59 648.24 21199.47
00:23:04.856 ===================================================================================================================
00:23:04.856 Total : 10897.94 42.57 753.60 0.00 10963.59 648.24 21199.47
00:23:04.856 Received shutdown signal, test time was about 15.000000 seconds
00:23:04.856
00:23:04.856 Latency(us)
00:23:04.856 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:04.856 ===================================================================================================================
00:23:04.856 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:23:04.856 22:39:28 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:23:04.856 22:39:28 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # count=3 00:23:04.856 22:39:28 nvmf_tcp.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:23:04.856 22:39:28 nvmf_tcp.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=100354 00:23:04.856 22:39:28 nvmf_tcp.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:23:04.856 22:39:28 nvmf_tcp.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 100354 /var/tmp/bdevperf.sock 00:23:04.856 22:39:28 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 100354 ']' 00:23:04.856 22:39:28 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:04.856 22:39:28 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:04.856 22:39:28 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:04.856 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:23:04.856 22:39:28 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:04.856 22:39:28 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:05.165 22:39:29 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:05.165 22:39:29 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:23:05.165 22:39:29 nvmf_tcp.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:23:05.423 [2024-07-15 22:39:29.198448] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:05.423 22:39:29 nvmf_tcp.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:23:05.423 [2024-07-15 22:39:29.382982] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:23:05.682 22:39:29 nvmf_tcp.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:05.941 NVMe0n1 00:23:05.941 22:39:29 nvmf_tcp.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:06.200 00:23:06.200 22:39:29 nvmf_tcp.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:06.459 00:23:06.459 22:39:30 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:06.459 22:39:30 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:23:06.459 22:39:30 nvmf_tcp.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:06.718 22:39:30 nvmf_tcp.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:23:10.011 22:39:33 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:10.011 22:39:33 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:23:10.011 22:39:33 nvmf_tcp.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:10.011 22:39:33 nvmf_tcp.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=101280 00:23:10.011 22:39:33 nvmf_tcp.nvmf_failover -- host/failover.sh@92 -- # wait 101280 00:23:10.947 0 00:23:10.947 22:39:34 nvmf_tcp.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:23:10.947 [2024-07-15 22:39:28.220357] Starting SPDK v24.09-pre git sha1 f8598a71f / DPDK 24.03.0 initialization... 
00:23:10.947 [2024-07-15 22:39:28.220406] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100354 ] 00:23:10.947 EAL: No free 2048 kB hugepages reported on node 1 00:23:10.948 [2024-07-15 22:39:28.273744] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:10.948 [2024-07-15 22:39:28.343947] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:10.948 [2024-07-15 22:39:30.559348] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:23:10.948 [2024-07-15 22:39:30.559404] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:10.948 [2024-07-15 22:39:30.559415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.948 [2024-07-15 22:39:30.559424] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:10.948 [2024-07-15 22:39:30.559431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.948 [2024-07-15 22:39:30.559439] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:10.948 [2024-07-15 22:39:30.559445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.948 [2024-07-15 22:39:30.559452] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:10.948 [2024-07-15 22:39:30.559458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.948 [2024-07-15 22:39:30.559465] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:10.948 [2024-07-15 22:39:30.559494] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20b2540 (9): Bad file descriptor 00:23:10.948 [2024-07-15 22:39:30.559508] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:10.948 [2024-07-15 22:39:30.563638] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:23:10.948 Running I/O for 1 seconds... 
00:23:10.948
00:23:10.948 Latency(us)
00:23:10.948 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:10.948 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:23:10.948 Verification LBA range: start 0x0 length 0x4000
00:23:10.948 NVMe0n1 : 1.00 10944.53 42.75 0.00 0.00 11644.03 794.27 11625.52
00:23:10.948 ===================================================================================================================
00:23:10.948 Total : 10944.53 42.75 0.00 0.00 11644.03 794.27 11625.52
00:23:10.948 22:39:34 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:10.948 22:39:34 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:23:11.206 22:39:35 nvmf_tcp.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:11.466 22:39:35 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:11.466 22:39:35 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:23:11.466 22:39:35 nvmf_tcp.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:11.725 22:39:35 nvmf_tcp.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:23:15.018 22:39:38 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:15.018 22:39:38 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:23:15.018 22:39:38 nvmf_tcp.nvmf_failover -- host/failover.sh@108 -- # killprocess 100354 00:23:15.018 22:39:38 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 100354 ']' 00:23:15.018 22:39:38 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 100354 00:23:15.018 22:39:38 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:23:15.018 22:39:38 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:15.018 22:39:38 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 100354 00:23:15.018 22:39:38 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:23:15.018 22:39:38 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:23:15.018 22:39:38 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 100354' 00:23:15.018 killing process with pid 100354 00:23:15.018 22:39:38 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 100354 00:23:15.018 22:39:38 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 100354 00:23:15.277 22:39:39 nvmf_tcp.nvmf_failover -- host/failover.sh@110 -- # sync 00:23:15.277 22:39:39 nvmf_tcp.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:15.277 22:39:39 nvmf_tcp.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:23:15.277 22:39:39
nvmf_tcp.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:23:15.277 22:39:39 nvmf_tcp.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:23:15.277 22:39:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:15.277 22:39:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@117 -- # sync 00:23:15.277 22:39:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:15.277 22:39:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@120 -- # set +e 00:23:15.277 22:39:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:15.277 22:39:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:15.277 rmmod nvme_tcp 00:23:15.277 rmmod nvme_fabrics 00:23:15.277 rmmod nvme_keyring 00:23:15.535 22:39:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:15.535 22:39:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@124 -- # set -e 00:23:15.535 22:39:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@125 -- # return 0 00:23:15.535 22:39:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@489 -- # '[' -n 97081 ']' 00:23:15.536 22:39:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@490 -- # killprocess 97081 00:23:15.536 22:39:39 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 97081 ']' 00:23:15.536 22:39:39 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 97081 00:23:15.536 22:39:39 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:23:15.536 22:39:39 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:15.536 22:39:39 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 97081 00:23:15.536 22:39:39 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:23:15.536 22:39:39 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:23:15.536 22:39:39 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 97081' 00:23:15.536 killing process with pid 97081 00:23:15.536 22:39:39 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 97081 00:23:15.536 22:39:39 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 97081 00:23:15.794 22:39:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:15.794 22:39:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:15.794 22:39:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:15.794 22:39:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:15.794 22:39:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:15.794 22:39:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:15.794 22:39:39 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:15.794 22:39:39 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:17.694 22:39:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:17.694 00:23:17.694 real 0m38.065s 00:23:17.694 user 2m2.519s 00:23:17.694 sys 0m7.384s 00:23:17.694 22:39:41 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:17.694 22:39:41 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:17.694 
************************************ 00:23:17.694 END TEST nvmf_failover 00:23:17.694 ************************************ 00:23:17.694 22:39:41 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:23:17.694 22:39:41 nvmf_tcp -- nvmf/nvmf.sh@101 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:23:17.694 22:39:41 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:23:17.694 22:39:41 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:17.694 22:39:41 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:17.694 ************************************ 00:23:17.694 START TEST nvmf_host_discovery 00:23:17.694 ************************************ 00:23:17.694 22:39:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:23:17.953 * Looking for test storage... 00:23:17.953 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:17.953 22:39:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:17.953 22:39:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:23:17.953 22:39:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:17.953 22:39:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:17.953 22:39:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:17.953 22:39:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:17.953 22:39:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:17.953 22:39:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:17.953 22:39:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:17.953 22:39:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:17.953 22:39:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:17.953 22:39:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:17.953 22:39:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:23:17.953 22:39:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:23:17.953 22:39:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:17.953 22:39:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:17.953 22:39:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:17.953 22:39:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:17.953 22:39:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:17.953 22:39:41 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:17.953 22:39:41 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:17.953 22:39:41 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@517 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:23:17.953 [22:39:41 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@2-@6: repeated PATH prepends of /opt/golangci/1.54.2/bin, /opt/protoc/21.7/bin and /opt/go/1.21.1/bin, export PATH, and echo of the resulting PATH omitted] 00:23:17.954 22:39:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@47 -- # : 0 00:23:17.954 22:39:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:17.954 22:39:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:17.954 22:39:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:17.954 22:39:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:17.954 22:39:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:17.954 22:39:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:17.954 22:39:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:17.954 22:39:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:17.954 22:39:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:23:17.954 22:39:41
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:23:17.954 22:39:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:23:17.954 22:39:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:23:17.954 22:39:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:23:17.954 22:39:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:23:17.954 22:39:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:23:17.954 22:39:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:17.954 22:39:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:17.954 22:39:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:17.954 22:39:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:17.954 22:39:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:17.954 22:39:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:17.954 22:39:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:17.954 22:39:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:17.954 22:39:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:17.954 22:39:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:17.954 22:39:41 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:23:17.954 22:39:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:23.228 22:39:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:23.228 22:39:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:23:23.228 22:39:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:23.228 22:39:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:23.228 22:39:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:23.228 22:39:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:23.228 22:39:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:23.228 22:39:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:23:23.228 22:39:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:23.228 22:39:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@296 -- # e810=() 00:23:23.228 22:39:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:23:23.228 22:39:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@297 -- # x722=() 00:23:23.228 22:39:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:23:23.228 22:39:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@298 -- # mlx=() 00:23:23.228 22:39:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:23:23.228 22:39:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:23.228 22:39:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:23.228 22:39:46 
nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:23.228 22:39:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:23.228 22:39:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:23.228 22:39:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:23.228 22:39:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:23.228 22:39:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:23.228 22:39:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:23.228 22:39:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:23.228 22:39:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:23.228 22:39:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:23.228 22:39:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:23.228 22:39:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:23.228 22:39:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:23.228 22:39:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:23.228 22:39:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:23.228 22:39:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:23.228 22:39:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:23:23.228 Found 0000:86:00.0 (0x8086 - 0x159b) 00:23:23.228 22:39:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:23.228 22:39:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:23.228 22:39:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:23.228 22:39:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:23.228 22:39:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:23.228 22:39:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:23.228 22:39:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:23:23.228 Found 0000:86:00.1 (0x8086 - 0x159b) 00:23:23.228 22:39:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:23.228 22:39:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:23.228 22:39:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:23.228 22:39:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:23.228 22:39:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:23.228 22:39:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:23.228 22:39:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:23.228 22:39:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:23.228 22:39:46 
nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:23.228 22:39:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:23.228 22:39:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:23.228 22:39:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:23.228 22:39:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:23.228 22:39:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:23.228 22:39:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:23.228 22:39:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:23:23.228 Found net devices under 0000:86:00.0: cvl_0_0 00:23:23.228 22:39:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:23.228 22:39:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:23.228 22:39:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:23.228 22:39:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:23.228 22:39:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:23.228 22:39:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:23.228 22:39:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:23.228 22:39:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:23.228 22:39:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:23:23.228 Found net devices under 0000:86:00.1: cvl_0_1 00:23:23.228 22:39:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:23.228 22:39:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:23.228 22:39:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:23:23.228 22:39:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:23.228 22:39:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:23.228 22:39:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:23.228 22:39:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:23.228 22:39:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:23.228 22:39:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:23.228 22:39:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:23.228 22:39:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:23.228 22:39:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:23.228 22:39:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:23.228 22:39:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:23.228 22:39:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:23.228 22:39:46 
nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:23.228 22:39:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:23.228 22:39:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:23.228 22:39:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:23.228 22:39:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:23.228 22:39:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:23.228 22:39:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:23.228 22:39:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:23.228 22:39:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:23.228 22:39:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:23.228 22:39:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:23.228 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:23.228 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.291 ms 00:23:23.228 00:23:23.228 --- 10.0.0.2 ping statistics --- 00:23:23.228 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:23.228 rtt min/avg/max/mdev = 0.291/0.291/0.291/0.000 ms 00:23:23.228 22:39:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:23.228 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:23.228 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.288 ms 00:23:23.228 00:23:23.228 --- 10.0.0.1 ping statistics --- 00:23:23.228 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:23.228 rtt min/avg/max/mdev = 0.288/0.288/0.288/0.000 ms 00:23:23.228 22:39:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:23.228 22:39:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@422 -- # return 0 00:23:23.228 22:39:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:23.228 22:39:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:23.228 22:39:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:23.228 22:39:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:23.228 22:39:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:23.228 22:39:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:23.228 22:39:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:23.228 22:39:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:23:23.228 22:39:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:23.228 22:39:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:23.229 22:39:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:23.229 22:39:47 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@481 -- # nvmfpid=105497 00:23:23.229 22:39:47 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@482 -- # waitforlisten 
105497 00:23:23.229 22:39:47 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:23.229 22:39:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@829 -- # '[' -z 105497 ']' 00:23:23.229 22:39:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:23.229 22:39:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:23.229 22:39:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:23.229 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:23.229 22:39:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:23.229 22:39:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:23.229 [2024-07-15 22:39:47.052917] Starting SPDK v24.09-pre git sha1 f8598a71f / DPDK 24.03.0 initialization... 00:23:23.229 [2024-07-15 22:39:47.052964] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:23.229 EAL: No free 2048 kB hugepages reported on node 1 00:23:23.229 [2024-07-15 22:39:47.112101] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:23.229 [2024-07-15 22:39:47.191567] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:23.229 [2024-07-15 22:39:47.191602] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:23.229 [2024-07-15 22:39:47.191609] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:23.229 [2024-07-15 22:39:47.191615] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:23.229 [2024-07-15 22:39:47.191621] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:23.229 [2024-07-15 22:39:47.191650] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:24.163 22:39:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:24.163 22:39:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@862 -- # return 0 00:23:24.163 22:39:47 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:24.163 22:39:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:24.163 22:39:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:24.163 22:39:47 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:24.163 22:39:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:24.163 22:39:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:24.163 22:39:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:24.163 [2024-07-15 22:39:47.894968] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:24.163 22:39:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:24.163 22:39:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:23:24.163 22:39:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:24.163 22:39:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:24.163 [2024-07-15 22:39:47.903082] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:23:24.163 22:39:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:24.163 22:39:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:23:24.163 22:39:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:24.163 22:39:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:24.163 null0 00:23:24.163 22:39:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:24.163 22:39:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:23:24.163 22:39:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:24.163 22:39:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:24.163 null1 00:23:24.163 22:39:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:24.163 22:39:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:23:24.163 22:39:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:24.163 22:39:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:24.163 22:39:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:24.163 22:39:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=105742 00:23:24.163 22:39:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:23:24.163 22:39:47 nvmf_tcp.nvmf_host_discovery -- 
host/discovery.sh@46 -- # waitforlisten 105742 /tmp/host.sock 00:23:24.163 22:39:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@829 -- # '[' -z 105742 ']' 00:23:24.163 22:39:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:23:24.163 22:39:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:24.163 22:39:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:23:24.163 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:23:24.163 22:39:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:24.163 22:39:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:24.163 [2024-07-15 22:39:47.979565] Starting SPDK v24.09-pre git sha1 f8598a71f / DPDK 24.03.0 initialization... 00:23:24.163 [2024-07-15 22:39:47.979606] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid105742 ] 00:23:24.163 EAL: No free 2048 kB hugepages reported on node 1 00:23:24.163 [2024-07-15 22:39:48.033302] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:24.163 [2024-07-15 22:39:48.112849] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:25.098 22:39:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:25.098 22:39:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@862 -- # return 0 00:23:25.098 22:39:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:25.098 22:39:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:23:25.098 22:39:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:25.098 22:39:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:25.098 22:39:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:25.098 22:39:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:23:25.098 22:39:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:25.098 22:39:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:25.098 22:39:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:25.098 22:39:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:23:25.098 22:39:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:23:25.098 22:39:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:25.098 22:39:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:25.098 22:39:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:25.098 22:39:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:25.098 22:39:48 nvmf_tcp.nvmf_host_discovery -- 
host/discovery.sh@59 -- # xargs 00:23:25.098 22:39:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:25.098 22:39:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:25.098 22:39:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:23:25.098 22:39:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:23:25.098 22:39:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:25.098 22:39:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:25.098 22:39:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:25.098 22:39:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:25.098 22:39:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:25.098 22:39:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:25.098 22:39:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:25.098 22:39:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:23:25.098 22:39:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:23:25.098 22:39:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:25.098 22:39:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:25.098 22:39:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:25.098 22:39:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:23:25.098 22:39:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:25.098 22:39:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:25.098 22:39:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:25.098 22:39:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:25.098 22:39:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:25.098 22:39:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:25.098 22:39:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:25.098 22:39:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:23:25.098 22:39:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:23:25.098 22:39:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:25.098 22:39:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:25.098 22:39:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:25.098 22:39:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:25.098 22:39:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:25.098 22:39:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:25.098 22:39:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:25.098 22:39:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:23:25.098 22:39:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 
null0 00:23:25.098 22:39:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:25.098 22:39:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:25.098 22:39:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:25.099 22:39:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:23:25.099 22:39:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:25.099 22:39:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:25.099 22:39:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:25.099 22:39:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:25.099 22:39:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:25.099 22:39:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:25.099 22:39:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:25.099 22:39:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:23:25.356 22:39:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:23:25.356 22:39:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:25.356 22:39:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:25.356 22:39:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:25.356 22:39:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:25.356 22:39:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:25.356 22:39:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:25.356 22:39:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:25.356 22:39:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:23:25.356 22:39:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:23:25.356 22:39:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:25.356 22:39:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:25.356 [2024-07-15 22:39:49.122316] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:25.356 22:39:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:25.356 22:39:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:23:25.356 22:39:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:25.356 22:39:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:25.356 22:39:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:25.356 22:39:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:25.356 22:39:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:25.356 22:39:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:25.357 22:39:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:25.357 22:39:49 nvmf_tcp.nvmf_host_discovery -- 
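The target now publishes a real I/O subsystem: cnode0 is created, null0 becomes its first namespace, and a data listener opens on 10.0.0.2:4420. The empty-string assertions around these steps are deliberate; the host's NQN has not been granted access yet, so the subsystem does not show up in its discovery log page, and nothing attaches until the nvmf_subsystem_add_host call a few steps below. The equivalent target-side sequence, sketched with the same RPC names and arguments as the trace:

  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  # Nothing attaches on the host side until its NQN is allowed in:
  rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test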
host/discovery.sh@97 -- # [[ '' == '' ]] 00:23:25.357 22:39:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:23:25.357 22:39:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:25.357 22:39:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:25.357 22:39:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:25.357 22:39:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:25.357 22:39:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:25.357 22:39:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:25.357 22:39:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:25.357 22:39:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:23:25.357 22:39:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:23:25.357 22:39:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:23:25.357 22:39:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:23:25.357 22:39:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:25.357 22:39:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:23:25.357 22:39:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:23:25.357 22:39:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:23:25.357 22:39:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:23:25.357 22:39:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:23:25.357 22:39:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:23:25.357 22:39:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:25.357 22:39:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:25.357 22:39:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:25.357 22:39:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:23:25.357 22:39:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:23:25.357 22:39:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:23:25.357 22:39:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:23:25.357 22:39:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:23:25.357 22:39:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:25.357 22:39:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:25.357 22:39:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:25.357 22:39:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:23:25.357 22:39:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:23:25.357 22:39:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:23:25.357 22:39:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:23:25.357 22:39:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:23:25.357 22:39:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:23:25.357 22:39:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:25.357 22:39:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:25.357 22:39:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:25.357 22:39:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:25.357 22:39:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:25.357 22:39:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:25.357 22:39:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:25.357 22:39:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == \n\v\m\e\0 ]] 00:23:25.357 22:39:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@918 -- # sleep 1 00:23:25.987 [2024-07-15 22:39:49.812619] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:23:25.987 [2024-07-15 22:39:49.812640] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:23:25.987 [2024-07-15 22:39:49.812657] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:26.246 [2024-07-15 22:39:49.939055] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:23:26.246 [2024-07-15 22:39:50.077433] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: 
Discovery[10.0.0.2:8009] attach nvme0 done 00:23:26.246 [2024-07-15 22:39:50.077454] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:23:26.505 22:39:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:23:26.505 22:39:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:23:26.505 22:39:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:23:26.505 22:39:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:26.505 22:39:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:26.505 22:39:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:26.505 22:39:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:26.505 22:39:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:26.505 22:39:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:26.505 22:39:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:26.505 22:39:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:26.505 22:39:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:23:26.505 22:39:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:23:26.505 22:39:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:23:26.505 22:39:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:23:26.505 22:39:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:23:26.505 22:39:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:23:26.505 22:39:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:23:26.505 22:39:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:26.505 22:39:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:26.505 22:39:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:26.505 22:39:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:26.505 22:39:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:26.505 22:39:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:26.505 22:39:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:26.505 22:39:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:23:26.505 22:39:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:23:26.505 22:39:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:23:26.505 22:39:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:23:26.505 22:39:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:23:26.505 22:39:50 
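Once the host NQN is allowed, the discovery poller attaches controller nvme0 and the namespace surfaces as bdev nvme0n1, which the checks above confirm. Those checks are built from a handful of helpers in host/discovery.sh and autotest_common.sh; reconstructed roughly from the trace (the exact bodies in the tree may differ), they look like this:

  notify_id=0                  # per host/discovery.sh@72, the notify cursor starts at 0
  get_subsystem_names() {      # names of attached NVMe controllers, space-separated
      rpc.py -s /tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name' | sort | xargs
  }
  get_bdev_list() {            # names of all bdevs on the host app
      rpc.py -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
  }
  get_notification_count() {   # bdev add/remove events newer than $notify_id
      notification_count=$(rpc.py -s /tmp/host.sock notify_get_notifications -i "$notify_id" \
          | jq '. | length')
      notify_id=$((notify_id + notification_count))    # consume what was just counted
  }
  waitforcondition() {         # retry a shell condition up to 10 times, 1 s apart
      local cond=$1 max=10
      while (( max-- )); do
          eval "$cond" && return 0
          sleep 1
      done
      return 1
  }
  waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]'
  waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]'

Attaching nvme0n1 is also what the next count check sees: notify_get_notifications -i 0 now returns one event and notify_id advances to 1.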
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:23:26.505 22:39:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:23:26.505 22:39:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:23:26.505 22:39:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:26.505 22:39:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:23:26.505 22:39:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:26.505 22:39:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:23:26.505 22:39:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:26.505 22:39:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:23:26.505 22:39:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:26.505 22:39:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 == \4\4\2\0 ]] 00:23:26.505 22:39:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:23:26.505 22:39:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:23:26.505 22:39:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:23:26.505 22:39:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:23:26.505 22:39:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:26.505 22:39:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:23:26.505 22:39:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:23:26.505 22:39:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:23:26.505 22:39:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:23:26.505 22:39:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:23:26.505 22:39:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:26.505 22:39:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:26.505 22:39:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:23:26.765 22:39:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:26.765 22:39:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:23:26.765 22:39:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:23:26.765 22:39:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:23:26.765 22:39:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:23:26.765 22:39:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:23:26.765 22:39:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:26.765 22:39:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:26.765 22:39:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:26.765 22:39:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:26.765 22:39:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:26.765 22:39:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:23:26.765 22:39:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:23:26.765 22:39:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:23:26.765 22:39:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:23:26.765 22:39:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:26.765 22:39:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:26.765 22:39:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:26.765 22:39:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:26.765 22:39:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:26.765 22:39:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:26.765 22:39:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:26.765 22:39:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:23:26.765 22:39:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:23:26.765 22:39:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:23:26.765 22:39:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:23:26.765 22:39:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:23:26.765 22:39:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:26.765 22:39:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:23:26.765 22:39:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:23:26.765 22:39:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 
'expected_count))' 00:23:26.765 22:39:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:23:26.765 22:39:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:23:26.765 22:39:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:23:26.765 22:39:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:26.765 22:39:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:26.765 22:39:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:26.765 22:39:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:23:26.765 22:39:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:23:26.765 22:39:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:23:26.765 22:39:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:23:26.765 22:39:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:23:26.765 22:39:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:26.765 22:39:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:26.765 [2024-07-15 22:39:50.614359] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:26.765 [2024-07-15 22:39:50.614924] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:23:26.765 [2024-07-15 22:39:50.614946] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:26.766 22:39:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:26.766 22:39:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:23:26.766 22:39:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:23:26.766 22:39:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:23:26.766 22:39:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:23:26.766 22:39:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:23:26.766 22:39:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:23:26.766 22:39:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:26.766 22:39:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:26.766 22:39:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:26.766 22:39:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:26.766 22:39:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:26.766 22:39:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:26.766 22:39:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:26.766 22:39:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:26.766 22:39:50 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:23:26.766 22:39:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:26.766 22:39:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:26.766 22:39:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:23:26.766 22:39:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:23:26.766 22:39:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:23:26.766 22:39:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:23:26.766 22:39:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:26.766 22:39:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:26.766 22:39:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:26.766 22:39:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:26.766 22:39:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:26.766 22:39:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:26.766 22:39:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:26.766 22:39:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:23:26.766 22:39:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:23:26.766 22:39:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:23:26.766 22:39:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:23:26.766 22:39:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:23:26.766 22:39:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:23:26.766 22:39:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:23:26.766 22:39:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:23:26.766 22:39:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:23:26.766 22:39:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:26.766 22:39:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:26.766 22:39:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:26.766 22:39:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:23:26.766 22:39:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:23:26.766 22:39:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:27.025 [2024-07-15 22:39:50.744328] bdev_nvme.c:6907:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:23:27.025 22:39:50 
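This stretch exercises hot-add while discovery is live: attaching null1 as a second namespace makes bdev nvme0n2 appear (one more notify event, moving notify_id to 2), and opening a second listener on 4421 raises an AER on the discovery controller, prompting the host to re-read the log page and record 4421 as an additional path to nvme0. Paths do not create bdevs, which is why the count check right after the path change expects 0. Target-side sketch plus the path helper as reconstructed from the trace:

  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1    # -> host bdev nvme0n2
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421

  get_subsystem_paths() {      # trsvcid of every connected path for one controller
      rpc.py -s /tmp/host.sock bdev_nvme_get_controllers -n "$1" \
          | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
  }
  waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "4420 4421" ]]'   # AER-driven path add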
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:23:27.025 22:39:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@918 -- # sleep 1 00:23:27.025 [2024-07-15 22:39:50.807885] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:23:27.025 [2024-07-15 22:39:50.807901] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:23:27.025 [2024-07-15 22:39:50.807906] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:23:27.959 22:39:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:23:27.959 22:39:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:23:27.959 22:39:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:23:27.959 22:39:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:23:27.959 22:39:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:23:27.959 22:39:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:27.959 22:39:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:27.959 22:39:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:23:27.959 22:39:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:27.959 22:39:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:27.959 22:39:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:23:27.959 22:39:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:23:27.959 22:39:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:23:27.959 22:39:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:23:27.959 22:39:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:23:27.959 22:39:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:27.959 22:39:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:23:27.959 22:39:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:23:27.959 22:39:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:23:27.959 22:39:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:23:27.959 22:39:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:23:27.959 22:39:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:27.959 22:39:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:27.959 22:39:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:23:27.959 22:39:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:27.959 22:39:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:23:27.959 22:39:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:23:27.959 22:39:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:23:27.959 22:39:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:23:27.959 22:39:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:23:27.959 22:39:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:27.959 22:39:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:27.959 [2024-07-15 22:39:51.873968] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:23:27.959 [2024-07-15 22:39:51.873989] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:27.959 [2024-07-15 22:39:51.878411] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:27.959 [2024-07-15 22:39:51.878428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:27.959 [2024-07-15 22:39:51.878437] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:27.959 [2024-07-15 22:39:51.878444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:27.959 [2024-07-15 22:39:51.878452] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:27.960 [2024-07-15 22:39:51.878459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:27.960 [2024-07-15 22:39:51.878466] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:27.960 [2024-07-15 22:39:51.878472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:27.960 [2024-07-15 22:39:51.878479] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2445f10 is same with the state(5) to be set 00:23:27.960 22:39:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:27.960 22:39:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:23:27.960 22:39:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:23:27.960 22:39:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:23:27.960 22:39:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:23:27.960 22:39:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:23:27.960 22:39:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # 
get_subsystem_names 00:23:27.960 22:39:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:27.960 22:39:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:27.960 22:39:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:27.960 22:39:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:27.960 22:39:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:27.960 22:39:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:27.960 [2024-07-15 22:39:51.888426] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2445f10 (9): Bad file descriptor 00:23:27.960 [2024-07-15 22:39:51.898464] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:27.960 [2024-07-15 22:39:51.898696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:27.960 [2024-07-15 22:39:51.898711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2445f10 with addr=10.0.0.2, port=4420 00:23:27.960 [2024-07-15 22:39:51.898720] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2445f10 is same with the state(5) to be set 00:23:27.960 [2024-07-15 22:39:51.898732] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2445f10 (9): Bad file descriptor 00:23:27.960 [2024-07-15 22:39:51.898742] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:27.960 [2024-07-15 22:39:51.898749] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:27.960 [2024-07-15 22:39:51.898759] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:27.960 [2024-07-15 22:39:51.898769] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:27.960 22:39:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:27.960 [2024-07-15 22:39:51.908518] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:27.960 [2024-07-15 22:39:51.908808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:27.960 [2024-07-15 22:39:51.908821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2445f10 with addr=10.0.0.2, port=4420 00:23:27.960 [2024-07-15 22:39:51.908828] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2445f10 is same with the state(5) to be set 00:23:27.960 [2024-07-15 22:39:51.908838] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2445f10 (9): Bad file descriptor 00:23:27.960 [2024-07-15 22:39:51.908849] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:27.960 [2024-07-15 22:39:51.908855] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:27.960 [2024-07-15 22:39:51.908862] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:27.960 [2024-07-15 22:39:51.908872] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:27.960 [2024-07-15 22:39:51.918568] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:27.960 [2024-07-15 22:39:51.918880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:27.960 [2024-07-15 22:39:51.918894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2445f10 with addr=10.0.0.2, port=4420 00:23:27.960 [2024-07-15 22:39:51.918902] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2445f10 is same with the state(5) to be set 00:23:27.960 [2024-07-15 22:39:51.918912] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2445f10 (9): Bad file descriptor 00:23:27.960 [2024-07-15 22:39:51.918922] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:27.960 [2024-07-15 22:39:51.918928] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:27.960 [2024-07-15 22:39:51.918936] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:27.960 [2024-07-15 22:39:51.918945] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:27.960 [2024-07-15 22:39:51.928623] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:27.960 [2024-07-15 22:39:51.928812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:27.960 [2024-07-15 22:39:51.928825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2445f10 with addr=10.0.0.2, port=4420 00:23:27.960 [2024-07-15 22:39:51.928833] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2445f10 is same with the state(5) to be set 00:23:27.960 [2024-07-15 22:39:51.928843] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2445f10 (9): Bad file descriptor 00:23:27.960 [2024-07-15 22:39:51.928853] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:27.960 [2024-07-15 22:39:51.928860] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:27.960 [2024-07-15 22:39:51.928867] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:27.960 [2024-07-15 22:39:51.928876] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:28.219 22:39:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:28.219 22:39:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:23:28.219 22:39:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:28.219 22:39:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:28.219 22:39:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:23:28.219 22:39:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:23:28.219 22:39:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:23:28.219 22:39:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:23:28.219 22:39:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:28.219 22:39:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:28.219 22:39:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:28.219 22:39:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:28.219 22:39:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:28.219 22:39:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:28.219 [2024-07-15 22:39:51.939174] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:28.219 [2024-07-15 22:39:51.939502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.219 [2024-07-15 22:39:51.939518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2445f10 with addr=10.0.0.2, port=4420 00:23:28.219 [2024-07-15 22:39:51.939525] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2445f10 is same with the state(5) to be set 00:23:28.219 [2024-07-15 22:39:51.939537] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2445f10 (9): Bad file descriptor 00:23:28.219 [2024-07-15 22:39:51.939546] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:28.219 [2024-07-15 22:39:51.939553] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:28.219 [2024-07-15 22:39:51.939560] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:28.219 [2024-07-15 22:39:51.939569] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:28.219 [2024-07-15 22:39:51.949231] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:28.219 [2024-07-15 22:39:51.949527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.219 [2024-07-15 22:39:51.949540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2445f10 with addr=10.0.0.2, port=4420 00:23:28.219 [2024-07-15 22:39:51.949547] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2445f10 is same with the state(5) to be set 00:23:28.219 [2024-07-15 22:39:51.949557] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2445f10 (9): Bad file descriptor 00:23:28.219 [2024-07-15 22:39:51.949568] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:28.219 [2024-07-15 22:39:51.949575] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:28.219 [2024-07-15 22:39:51.949583] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:28.219 [2024-07-15 22:39:51.949592] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:28.219 [2024-07-15 22:39:51.959282] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:28.219 [2024-07-15 22:39:51.959403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.219 [2024-07-15 22:39:51.959417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2445f10 with addr=10.0.0.2, port=4420 00:23:28.219 [2024-07-15 22:39:51.959428] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2445f10 is same with the state(5) to be set 00:23:28.219 [2024-07-15 22:39:51.959438] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2445f10 (9): Bad file descriptor 00:23:28.219 [2024-07-15 22:39:51.959448] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:28.219 [2024-07-15 22:39:51.959455] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:28.219 [2024-07-15 22:39:51.959462] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:28.219 [2024-07-15 22:39:51.959472] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:28.219 [2024-07-15 22:39:51.961251] bdev_nvme.c:6770:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:23:28.219 [2024-07-15 22:39:51.961266] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:23:28.219 22:39:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:28.220 22:39:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:23:28.220 22:39:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:23:28.220 22:39:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:23:28.220 22:39:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:23:28.220 22:39:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:23:28.220 22:39:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:23:28.220 22:39:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:23:28.220 22:39:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:23:28.220 22:39:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:23:28.220 22:39:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:28.220 22:39:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:28.220 22:39:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:28.220 22:39:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:23:28.220 22:39:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:23:28.220 22:39:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:28.220 22:39:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4421 == \4\4\2\1 ]] 00:23:28.220 22:39:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:23:28.220 22:39:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:23:28.220 22:39:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:23:28.220 22:39:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:23:28.220 22:39:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:28.220 22:39:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:23:28.220 22:39:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:23:28.220 22:39:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:23:28.220 22:39:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:23:28.220 22:39:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd 
-s /tmp/host.sock notify_get_notifications -i 2 00:23:28.220 22:39:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:23:28.220 22:39:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:28.220 22:39:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:28.220 22:39:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:28.220 22:39:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:23:28.220 22:39:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:23:28.220 22:39:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:23:28.220 22:39:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:23:28.220 22:39:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:23:28.220 22:39:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:28.220 22:39:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:28.220 22:39:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:28.220 22:39:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:23:28.220 22:39:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:23:28.220 22:39:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:23:28.220 22:39:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:23:28.220 22:39:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:23:28.220 22:39:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:23:28.220 22:39:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:28.220 22:39:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:28.220 22:39:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:28.220 22:39:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:28.220 22:39:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:28.220 22:39:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:28.220 22:39:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:28.220 22:39:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == '' ]] 00:23:28.220 22:39:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:23:28.220 22:39:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:23:28.220 22:39:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:23:28.220 22:39:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:23:28.220 22:39:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:23:28.220 22:39:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:23:28.220 
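With the multipath checks done, the test tears discovery down: bdev_nvme_stop_discovery detaches controller nvme0, which deletes both namespace bdevs, and those two deletions are exactly the two events the following is_notification_count_eq 2 consumes (notify_id goes from 2 to 4). Sketch:

  rpc.py -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme
  waitforcondition '[[ "$(get_subsystem_names)" == "" ]]'   # controller detached
  waitforcondition '[[ "$(get_bdev_list)" == "" ]]'         # nvme0n1 and nvme0n2 deleted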
22:39:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:23:28.220 22:39:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:28.220 22:39:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:28.220 22:39:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:28.220 22:39:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:28.220 22:39:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:28.220 22:39:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:28.220 22:39:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:28.479 22:39:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == '' ]] 00:23:28.479 22:39:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:23:28.479 22:39:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:23:28.479 22:39:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:23:28.479 22:39:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:23:28.479 22:39:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:28.479 22:39:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:23:28.479 22:39:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:23:28.479 22:39:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:23:28.479 22:39:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:23:28.479 22:39:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:23:28.479 22:39:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:23:28.479 22:39:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:28.479 22:39:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:28.479 22:39:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:28.479 22:39:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:23:28.479 22:39:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:23:28.479 22:39:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:23:28.479 22:39:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:23:28.479 22:39:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:28.479 22:39:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:28.479 22:39:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:29.416 [2024-07-15 22:39:53.301399] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:23:29.416 [2024-07-15 22:39:53.301416] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:23:29.416 [2024-07-15 22:39:53.301428] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:29.674 [2024-07-15 22:39:53.387698] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:23:29.674 [2024-07-15 22:39:53.610443] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:23:29.674 [2024-07-15 22:39:53.610468] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:23:29.674 22:39:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:29.674 22:39:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:29.674 22:39:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:23:29.674 22:39:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:29.674 22:39:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:23:29.674 22:39:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:29.674 22:39:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:23:29.674 22:39:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:29.674 22:39:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:29.674 22:39:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:29.674 22:39:53 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:23:29.674 request: 00:23:29.674 { 00:23:29.674 "name": "nvme", 00:23:29.674 "trtype": "tcp", 00:23:29.674 "traddr": "10.0.0.2", 00:23:29.674 "adrfam": "ipv4", 00:23:29.674 "trsvcid": "8009", 00:23:29.674 "hostnqn": "nqn.2021-12.io.spdk:test", 00:23:29.674 "wait_for_attach": true, 00:23:29.674 "method": "bdev_nvme_start_discovery", 00:23:29.674 "req_id": 1 00:23:29.674 } 00:23:29.674 Got JSON-RPC error response 00:23:29.674 response: 00:23:29.674 { 00:23:29.674 "code": -17, 00:23:29.674 "message": "File exists" 00:23:29.674 } 00:23:29.674 22:39:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:23:29.674 22:39:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:23:29.674 22:39:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:29.674 22:39:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:29.674 22:39:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:29.674 22:39:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:23:29.674 22:39:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:23:29.674 22:39:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:23:29.674 22:39:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:29.674 22:39:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:29.674 22:39:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:23:29.674 22:39:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:23:29.934 22:39:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:29.934 22:39:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:23:29.934 22:39:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:23:29.934 22:39:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:29.934 22:39:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:29.934 22:39:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:29.934 22:39:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:29.934 22:39:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:29.934 22:39:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:29.934 22:39:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:29.934 22:39:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:23:29.934 22:39:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:29.934 22:39:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:23:29.934 22:39:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:29.934 22:39:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- 
# local arg=rpc_cmd 00:23:29.934 22:39:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:29.934 22:39:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:23:29.934 22:39:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:29.934 22:39:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:29.934 22:39:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:29.934 22:39:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:29.934 request: 00:23:29.934 { 00:23:29.934 "name": "nvme_second", 00:23:29.934 "trtype": "tcp", 00:23:29.934 "traddr": "10.0.0.2", 00:23:29.934 "adrfam": "ipv4", 00:23:29.934 "trsvcid": "8009", 00:23:29.934 "hostnqn": "nqn.2021-12.io.spdk:test", 00:23:29.934 "wait_for_attach": true, 00:23:29.934 "method": "bdev_nvme_start_discovery", 00:23:29.934 "req_id": 1 00:23:29.934 } 00:23:29.934 Got JSON-RPC error response 00:23:29.934 response: 00:23:29.934 { 00:23:29.934 "code": -17, 00:23:29.934 "message": "File exists" 00:23:29.934 } 00:23:29.934 22:39:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:23:29.934 22:39:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:23:29.934 22:39:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:29.934 22:39:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:29.934 22:39:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:29.934 22:39:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:23:29.934 22:39:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:23:29.934 22:39:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:23:29.934 22:39:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:29.934 22:39:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:29.934 22:39:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:23:29.934 22:39:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:23:29.934 22:39:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:29.934 22:39:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:23:29.934 22:39:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:23:29.934 22:39:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:29.934 22:39:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:29.934 22:39:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:29.934 22:39:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:29.934 22:39:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:29.934 22:39:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:29.934 22:39:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:29.934 22:39:53 
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:23:29.934 22:39:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:23:29.934 22:39:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:23:29.934 22:39:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:23:29.934 22:39:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:23:29.934 22:39:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:29.934 22:39:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:23:29.934 22:39:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:29.934 22:39:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:23:29.934 22:39:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:29.934 22:39:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:31.311 [2024-07-15 22:39:54.849926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:31.311 [2024-07-15 22:39:54.849953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2482a00 with addr=10.0.0.2, port=8010 00:23:31.311 [2024-07-15 22:39:54.849965] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:23:31.311 [2024-07-15 22:39:54.849971] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:23:31.311 [2024-07-15 22:39:54.849977] bdev_nvme.c:7045:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:23:32.247 [2024-07-15 22:39:55.852334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:32.247 [2024-07-15 22:39:55.852360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2482a00 with addr=10.0.0.2, port=8010 00:23:32.247 [2024-07-15 22:39:55.852371] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:23:32.247 [2024-07-15 22:39:55.852377] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:23:32.247 [2024-07-15 22:39:55.852384] bdev_nvme.c:7045:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:23:33.183 [2024-07-15 22:39:56.854531] bdev_nvme.c:7026:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:23:33.183 request: 00:23:33.183 { 00:23:33.183 "name": "nvme_second", 00:23:33.183 "trtype": "tcp", 00:23:33.183 "traddr": "10.0.0.2", 00:23:33.183 "adrfam": "ipv4", 00:23:33.183 "trsvcid": "8010", 00:23:33.183 "hostnqn": "nqn.2021-12.io.spdk:test", 00:23:33.183 "wait_for_attach": false, 00:23:33.183 "attach_timeout_ms": 3000, 00:23:33.183 "method": "bdev_nvme_start_discovery", 00:23:33.183 "req_id": 1 00:23:33.183 } 00:23:33.183 Got JSON-RPC error response 00:23:33.183 response: 00:23:33.183 { 00:23:33.183 "code": -110, 
00:23:33.183 "message": "Connection timed out" 00:23:33.183 } 00:23:33.183 22:39:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:23:33.183 22:39:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:23:33.183 22:39:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:33.183 22:39:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:33.183 22:39:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:33.183 22:39:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:23:33.183 22:39:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:23:33.183 22:39:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:23:33.183 22:39:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:33.183 22:39:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:23:33.183 22:39:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:33.183 22:39:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:23:33.183 22:39:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:33.183 22:39:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:23:33.183 22:39:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:23:33.183 22:39:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 105742 00:23:33.183 22:39:56 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:23:33.183 22:39:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:33.183 22:39:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@117 -- # sync 00:23:33.183 22:39:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:33.183 22:39:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@120 -- # set +e 00:23:33.183 22:39:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:33.183 22:39:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:33.183 rmmod nvme_tcp 00:23:33.183 rmmod nvme_fabrics 00:23:33.183 rmmod nvme_keyring 00:23:33.183 22:39:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:33.184 22:39:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@124 -- # set -e 00:23:33.184 22:39:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@125 -- # return 0 00:23:33.184 22:39:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@489 -- # '[' -n 105497 ']' 00:23:33.184 22:39:56 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@490 -- # killprocess 105497 00:23:33.184 22:39:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@948 -- # '[' -z 105497 ']' 00:23:33.184 22:39:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@952 -- # kill -0 105497 00:23:33.184 22:39:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@953 -- # uname 00:23:33.184 22:39:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:33.184 22:39:56 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 105497 00:23:33.184 22:39:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:23:33.184 
22:39:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:23:33.184 22:39:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@966 -- # echo 'killing process with pid 105497' 00:23:33.184 killing process with pid 105497 00:23:33.184 22:39:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@967 -- # kill 105497 00:23:33.184 22:39:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@972 -- # wait 105497 00:23:33.442 22:39:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:33.442 22:39:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:33.442 22:39:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:33.442 22:39:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:33.442 22:39:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:33.442 22:39:57 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:33.442 22:39:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:33.442 22:39:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:35.388 22:39:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:35.388 00:23:35.388 real 0m17.640s 00:23:35.388 user 0m22.400s 00:23:35.388 sys 0m5.274s 00:23:35.388 22:39:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:35.388 22:39:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:35.388 ************************************ 00:23:35.388 END TEST nvmf_host_discovery 00:23:35.388 ************************************ 00:23:35.388 22:39:59 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:23:35.388 22:39:59 nvmf_tcp -- nvmf/nvmf.sh@102 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:23:35.388 22:39:59 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:23:35.388 22:39:59 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:35.388 22:39:59 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:35.388 ************************************ 00:23:35.388 START TEST nvmf_host_multipath_status 00:23:35.388 ************************************ 00:23:35.388 22:39:59 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:23:35.647 * Looking for test storage... 
00:23:35.647 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:35.647 22:39:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:35.647 22:39:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:23:35.647 22:39:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:35.647 22:39:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:35.647 22:39:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:35.647 22:39:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:35.647 22:39:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:35.647 22:39:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:35.647 22:39:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:35.647 22:39:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:35.647 22:39:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:35.647 22:39:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:35.647 22:39:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:23:35.647 22:39:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:23:35.647 22:39:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:35.647 22:39:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:35.647 22:39:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:35.647 22:39:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:35.647 22:39:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:35.647 22:39:59 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:35.647 22:39:59 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:35.647 22:39:59 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:35.648 22:39:59 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:35.648 22:39:59 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:35.648 22:39:59 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:35.648 22:39:59 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:23:35.648 22:39:59 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:35.648 22:39:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@47 -- # : 0 00:23:35.648 22:39:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:35.648 22:39:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:35.648 22:39:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:35.648 22:39:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:35.648 22:39:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:35.648 22:39:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:35.648 22:39:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:35.648 22:39:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:35.648 22:39:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:23:35.648 22:39:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:23:35.648 22:39:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:35.648 22:39:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:23:35.648 22:39:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:35.648 22:39:59 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:23:35.648 22:39:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:23:35.648 22:39:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:35.648 22:39:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:35.648 22:39:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:35.648 22:39:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:35.648 22:39:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:35.648 22:39:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:35.648 22:39:59 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:35.648 22:39:59 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:35.648 22:39:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:35.648 22:39:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:35.648 22:39:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@285 -- # xtrace_disable 00:23:35.648 22:39:59 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:23:41.062 22:40:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:41.062 22:40:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # pci_devs=() 00:23:41.062 22:40:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:41.062 22:40:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:41.062 22:40:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:41.062 22:40:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:41.062 22:40:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:41.062 22:40:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # net_devs=() 00:23:41.062 22:40:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:41.062 22:40:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # e810=() 00:23:41.062 22:40:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # local -ga e810 00:23:41.062 22:40:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # x722=() 00:23:41.062 22:40:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # local -ga x722 00:23:41.062 22:40:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # mlx=() 00:23:41.062 22:40:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # local -ga mlx 00:23:41.062 22:40:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:41.062 22:40:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:41.062 22:40:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:41.062 22:40:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:41.062 22:40:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:41.062 22:40:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:41.062 22:40:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:41.062 22:40:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:41.062 22:40:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:41.062 22:40:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:41.062 22:40:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:41.062 22:40:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:41.062 22:40:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:41.062 22:40:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:41.062 22:40:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:41.062 22:40:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:41.062 22:40:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:41.062 22:40:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:41.062 22:40:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:23:41.062 Found 0000:86:00.0 (0x8086 - 0x159b) 00:23:41.062 22:40:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:41.062 22:40:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:41.062 22:40:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:41.062 22:40:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:41.062 22:40:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:41.062 22:40:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:41.062 22:40:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:23:41.062 Found 0000:86:00.1 (0x8086 - 0x159b) 00:23:41.062 22:40:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:41.062 22:40:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:41.062 22:40:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:41.062 22:40:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:41.062 22:40:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:41.062 22:40:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:41.062 22:40:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:41.062 22:40:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 
00:23:41.062 22:40:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:41.062 22:40:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:41.062 22:40:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:41.062 22:40:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:41.062 22:40:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:41.062 22:40:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:41.062 22:40:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:41.062 22:40:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:23:41.062 Found net devices under 0000:86:00.0: cvl_0_0 00:23:41.062 22:40:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:41.062 22:40:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:41.062 22:40:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:41.062 22:40:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:41.062 22:40:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:41.062 22:40:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:41.062 22:40:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:41.062 22:40:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:41.062 22:40:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:23:41.062 Found net devices under 0000:86:00.1: cvl_0_1 00:23:41.062 22:40:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:41.062 22:40:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:41.062 22:40:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # is_hw=yes 00:23:41.062 22:40:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:41.062 22:40:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:41.063 22:40:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:41.063 22:40:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:41.063 22:40:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:41.063 22:40:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:41.063 22:40:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:41.063 22:40:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:41.063 22:40:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:41.063 22:40:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:41.063 22:40:04 
nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:41.063 22:40:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:41.063 22:40:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:41.063 22:40:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:41.063 22:40:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:41.063 22:40:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:41.063 22:40:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:41.063 22:40:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:41.063 22:40:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:41.063 22:40:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:41.063 22:40:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:41.063 22:40:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:41.063 22:40:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:41.063 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:41.063 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.189 ms 00:23:41.063 00:23:41.063 --- 10.0.0.2 ping statistics --- 00:23:41.063 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:41.063 rtt min/avg/max/mdev = 0.189/0.189/0.189/0.000 ms 00:23:41.063 22:40:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:41.063 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:41.063 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.248 ms 00:23:41.063 00:23:41.063 --- 10.0.0.1 ping statistics --- 00:23:41.063 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:41.063 rtt min/avg/max/mdev = 0.248/0.248/0.248/0.000 ms 00:23:41.063 22:40:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:41.063 22:40:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # return 0 00:23:41.063 22:40:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:41.063 22:40:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:41.063 22:40:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:41.063 22:40:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:41.063 22:40:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:41.063 22:40:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:41.063 22:40:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:41.063 22:40:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:23:41.063 22:40:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:41.063 22:40:04 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:41.063 22:40:04 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:23:41.063 22:40:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # nvmfpid=110816 00:23:41.063 22:40:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # waitforlisten 110816 00:23:41.063 22:40:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:23:41.063 22:40:04 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@829 -- # '[' -z 110816 ']' 00:23:41.063 22:40:04 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:41.063 22:40:04 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:41.063 22:40:04 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:41.063 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:41.063 22:40:04 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:41.063 22:40:04 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:23:41.063 [2024-07-15 22:40:04.702630] Starting SPDK v24.09-pre git sha1 f8598a71f / DPDK 24.03.0 initialization... 
00:23:41.063 [2024-07-15 22:40:04.702673] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:41.063 EAL: No free 2048 kB hugepages reported on node 1 00:23:41.063 [2024-07-15 22:40:04.760839] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:23:41.063 [2024-07-15 22:40:04.842474] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:41.063 [2024-07-15 22:40:04.842506] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:41.063 [2024-07-15 22:40:04.842514] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:41.063 [2024-07-15 22:40:04.842519] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:41.063 [2024-07-15 22:40:04.842524] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:41.063 [2024-07-15 22:40:04.842570] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:41.063 [2024-07-15 22:40:04.842573] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:41.630 22:40:05 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:41.630 22:40:05 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # return 0 00:23:41.630 22:40:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:41.630 22:40:05 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:41.630 22:40:05 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:23:41.630 22:40:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:41.630 22:40:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=110816 00:23:41.630 22:40:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:23:41.888 [2024-07-15 22:40:05.703176] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:41.888 22:40:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:23:42.147 Malloc0 00:23:42.147 22:40:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:23:42.147 22:40:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:42.407 22:40:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:42.666 [2024-07-15 22:40:06.420917] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:42.666 22:40:06 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:23:42.666 [2024-07-15 22:40:06.609427] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:42.926 22:40:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=111078 00:23:42.926 22:40:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:42.926 22:40:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 111078 /var/tmp/bdevperf.sock 00:23:42.926 22:40:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:23:42.926 22:40:06 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@829 -- # '[' -z 111078 ']' 00:23:42.926 22:40:06 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:42.926 22:40:06 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:42.926 22:40:06 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:42.926 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:42.926 22:40:06 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:42.926 22:40:06 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:23:43.863 22:40:07 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:43.863 22:40:07 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # return 0 00:23:43.863 22:40:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:23:43.863 22:40:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:23:44.122 Nvme0n1 00:23:44.122 22:40:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:23:44.381 Nvme0n1 00:23:44.381 22:40:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:23:44.381 22:40:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:23:46.915 22:40:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:23:46.915 22:40:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:23:46.915 22:40:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:23:46.915 22:40:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:23:47.851 22:40:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:23:47.851 22:40:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:23:47.851 22:40:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:47.851 22:40:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:48.110 22:40:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:48.110 22:40:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:23:48.110 22:40:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:48.110 22:40:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:48.110 22:40:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:48.110 22:40:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:48.110 22:40:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:48.110 22:40:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:48.369 22:40:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:48.369 22:40:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:48.369 22:40:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:48.369 22:40:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:48.628 22:40:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:48.628 22:40:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:48.628 22:40:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:48.628 22:40:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r 
'.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:48.628 22:40:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:48.628 22:40:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:23:48.628 22:40:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:48.628 22:40:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:48.888 22:40:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:48.888 22:40:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:23:48.888 22:40:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:23:49.147 22:40:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:23:49.405 22:40:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:23:50.340 22:40:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:23:50.340 22:40:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:23:50.340 22:40:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:50.340 22:40:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:50.599 22:40:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:50.599 22:40:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:23:50.599 22:40:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:50.599 22:40:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:50.599 22:40:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:50.599 22:40:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:50.599 22:40:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:50.599 22:40:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:50.858 22:40:14 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:50.858 22:40:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:50.858 22:40:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:50.858 22:40:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:51.117 22:40:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:51.117 22:40:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:51.117 22:40:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:51.117 22:40:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:51.376 22:40:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:51.376 22:40:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:23:51.376 22:40:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:51.376 22:40:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:51.376 22:40:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:51.376 22:40:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:23:51.376 22:40:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:23:51.635 22:40:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:23:51.894 22:40:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:23:52.828 22:40:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:23:52.828 22:40:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:23:52.828 22:40:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:52.828 22:40:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:53.087 22:40:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:53.087 22:40:16 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@69 -- # port_status 4421 current false 00:23:53.087 22:40:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:53.087 22:40:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:53.087 22:40:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:53.087 22:40:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:53.087 22:40:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:53.087 22:40:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:53.345 22:40:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:53.345 22:40:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:53.345 22:40:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:53.345 22:40:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:53.604 22:40:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:53.604 22:40:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:53.604 22:40:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:53.604 22:40:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:53.861 22:40:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:53.861 22:40:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:23:53.861 22:40:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:53.861 22:40:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:53.861 22:40:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:53.861 22:40:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:23:53.861 22:40:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:23:54.119 22:40:17 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:23:54.377 22:40:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:23:55.313 22:40:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:23:55.313 22:40:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:23:55.313 22:40:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:55.314 22:40:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:55.572 22:40:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:55.572 22:40:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:23:55.572 22:40:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:55.572 22:40:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:55.831 22:40:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:55.831 22:40:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:55.831 22:40:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:55.831 22:40:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:55.831 22:40:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:55.831 22:40:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:55.831 22:40:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:55.831 22:40:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:56.089 22:40:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:56.089 22:40:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:56.089 22:40:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:56.089 22:40:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:56.347 22:40:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 
-- # [[ true == \t\r\u\e ]] 00:23:56.347 22:40:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:23:56.347 22:40:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:56.347 22:40:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:56.347 22:40:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:56.347 22:40:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:23:56.347 22:40:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:23:56.605 22:40:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:23:56.863 22:40:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:23:57.852 22:40:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:23:57.852 22:40:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:23:57.852 22:40:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:57.852 22:40:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:58.110 22:40:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:58.110 22:40:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:23:58.110 22:40:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:58.110 22:40:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:58.110 22:40:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:58.110 22:40:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:58.110 22:40:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:58.110 22:40:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:58.367 22:40:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:58.367 22:40:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # 
port_status 4421 connected true 00:23:58.367 22:40:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:58.367 22:40:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:58.625 22:40:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:58.625 22:40:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:23:58.625 22:40:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:58.625 22:40:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:58.625 22:40:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:58.625 22:40:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:23:58.625 22:40:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:58.625 22:40:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:58.883 22:40:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:58.883 22:40:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:23:58.883 22:40:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:23:59.140 22:40:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:23:59.140 22:40:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:24:00.539 22:40:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:24:00.539 22:40:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:24:00.539 22:40:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:00.539 22:40:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:00.539 22:40:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:00.539 22:40:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:24:00.539 22:40:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:00.539 22:40:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:00.539 22:40:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:00.539 22:40:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:00.539 22:40:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:00.539 22:40:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:00.799 22:40:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:00.799 22:40:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:00.799 22:40:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:00.799 22:40:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:01.058 22:40:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:01.058 22:40:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:24:01.058 22:40:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:01.058 22:40:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:01.058 22:40:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:01.058 22:40:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:01.058 22:40:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:01.058 22:40:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:01.316 22:40:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:01.316 22:40:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:24:01.575 22:40:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:24:01.575 22:40:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n 
optimized 00:24:01.833 22:40:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:24:01.833 22:40:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:24:03.209 22:40:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:24:03.209 22:40:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:24:03.209 22:40:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:03.209 22:40:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:03.209 22:40:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:03.209 22:40:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:24:03.209 22:40:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:03.209 22:40:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:03.209 22:40:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:03.209 22:40:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:03.209 22:40:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:03.209 22:40:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:03.468 22:40:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:03.468 22:40:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:03.468 22:40:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:03.468 22:40:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:03.726 22:40:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:03.726 22:40:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:03.726 22:40:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:03.726 22:40:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:03.726 22:40:27 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:03.726 22:40:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:03.726 22:40:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:03.726 22:40:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:03.985 22:40:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:03.985 22:40:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:24:03.985 22:40:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:04.243 22:40:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:24:04.502 22:40:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:24:05.439 22:40:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:24:05.439 22:40:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:24:05.439 22:40:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:05.439 22:40:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:05.721 22:40:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:05.721 22:40:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:24:05.721 22:40:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:05.721 22:40:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:05.721 22:40:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:05.721 22:40:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:05.721 22:40:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:05.721 22:40:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:06.014 22:40:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:06.014 22:40:29 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:06.014 22:40:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:06.014 22:40:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:06.274 22:40:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:06.274 22:40:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:06.274 22:40:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:06.274 22:40:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:06.534 22:40:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:06.534 22:40:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:06.534 22:40:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:06.534 22:40:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:06.534 22:40:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:06.534 22:40:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:24:06.534 22:40:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:06.793 22:40:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:24:07.052 22:40:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:24:07.988 22:40:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:24:07.988 22:40:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:24:07.989 22:40:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:07.989 22:40:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:08.248 22:40:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:08.248 22:40:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:24:08.248 22:40:32 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:08.248 22:40:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:08.248 22:40:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:08.248 22:40:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:08.248 22:40:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:08.248 22:40:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:08.531 22:40:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:08.531 22:40:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:08.531 22:40:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:08.531 22:40:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:08.790 22:40:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:08.790 22:40:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:08.790 22:40:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:08.790 22:40:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:08.790 22:40:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:08.790 22:40:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:08.790 22:40:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:08.790 22:40:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:09.049 22:40:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:09.049 22:40:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:24:09.049 22:40:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:09.308 22:40:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:24:09.567 22:40:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:24:10.502 22:40:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:24:10.502 22:40:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:24:10.502 22:40:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:10.502 22:40:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:10.761 22:40:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:10.761 22:40:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:24:10.761 22:40:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:10.761 22:40:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:10.761 22:40:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:10.761 22:40:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:10.761 22:40:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:10.761 22:40:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:11.019 22:40:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:11.019 22:40:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:11.019 22:40:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:11.020 22:40:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:11.279 22:40:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:11.279 22:40:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:11.279 22:40:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:11.279 22:40:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:11.279 22:40:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:11.279 22:40:35 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@73 -- # port_status 4421 accessible false
00:24:11.279 22:40:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:24:11.279 22:40:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:24:11.538 22:40:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:24:11.538 22:40:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 111078
00:24:11.538 22:40:35 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@948 -- # '[' -z 111078 ']'
00:24:11.538 22:40:35 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # kill -0 111078
00:24:11.538 22:40:35 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # uname
00:24:11.538 22:40:35 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:24:11.538 22:40:35 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 111078
00:24:11.538 22:40:35 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # process_name=reactor_2
00:24:11.538 22:40:35 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']'
00:24:11.538 22:40:35 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # echo 'killing process with pid 111078'
00:24:11.538 killing process with pid 111078
00:24:11.538 22:40:35 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # kill 111078
00:24:11.538 22:40:35 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # wait 111078
00:24:11.820 Connection closed with partial response:
00:24:11.820
00:24:11.820
00:24:11.820 22:40:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 111078
00:24:11.821 22:40:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:24:11.821 [2024-07-15 22:40:06.684235] Starting SPDK v24.09-pre git sha1 f8598a71f / DPDK 24.03.0 initialization...
00:24:11.821 [2024-07-15 22:40:06.684287] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid111078 ]
00:24:11.821 EAL: No free 2048 kB hugepages reported on node 1
00:24:11.821 [2024-07-15 22:40:06.733976] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:24:11.821 [2024-07-15 22:40:06.812802] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:24:11.821 Running I/O for 90 seconds...
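
The trace above drives the same three helpers over and over: set_ANA_state flips the ANA state of the two listeners (10.0.0.2:4420 and :4421) on the target, port_status reads one field of the matching io_path out of bdev_nvme_get_io_paths on the bdevperf RPC socket, and check_status asserts current/connected/accessible for both ports after each change. The sketch below reconstructs that pattern from the xtrace alone; the RPC names, NQN, addresses, ports and jq filter are taken from the log, but the function bodies and variable names are an approximation, not the authoritative test/nvmf/host/multipath_status.sh.

    #!/usr/bin/env bash
    # Reconstructed sketch of the helpers exercised in the trace above.
    # Paths and variable names are illustrative; run under set -e so a
    # failed assertion aborts, as the autotest harness does.
    set -e

    rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    rpc_py=$rootdir/scripts/rpc.py
    bdevperf_rpc_sock=/var/tmp/bdevperf.sock
    NQN=nqn.2016-06.io.spdk:cnode1

    # Flip the ANA state of the 4420 and 4421 listeners on the target side.
    set_ANA_state() {
    	$rpc_py nvmf_subsystem_listener_set_ana_state $NQN -t tcp -a 10.0.0.2 -s 4420 -n $1
    	$rpc_py nvmf_subsystem_listener_set_ana_state $NQN -t tcp -a 10.0.0.2 -s 4421 -n $2
    }

    # Assert one attribute (current/connected/accessible) of the io_path
    # whose trsvcid matches the given port, as reported by bdevperf.
    port_status() {
    	local port=$1 attr=$2 expected=$3 status
    	status=$($rpc_py -s $bdevperf_rpc_sock bdev_nvme_get_io_paths |
    		jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$port\").$attr")
    	[[ $status == "$expected" ]]
    }

    # Assert the full host-side view of both paths in one call.
    check_status() {
    	port_status 4420 current $1
    	port_status 4421 current $2
    	port_status 4420 connected $3
    	port_status 4421 connected $4
    	port_status 4420 accessible $5
    	port_status 4421 accessible $6
    }

    # One cycle from the trace: demote 4420, give the host a second to
    # re-read the ANA state, then expect I/O to be current on 4421 only.
    set_ANA_state non_optimized optimized
    sleep 1
    check_status false true true true true true

Midway through, the trace also switches the bdev to active/active with bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active, after which the optimized/optimized cycle expects current to be true on both ports at once rather than on a single selected path.
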
00:24:11.821 [2024-07-15 22:40:20.469027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:33504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.821 [2024-07-15 22:40:20.469064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:24:11.821 [2024-07-15 22:40:20.469101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:33512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.821 [2024-07-15 22:40:20.469109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:24:11.821 [2024-07-15 22:40:20.469122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:33520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.821 [2024-07-15 22:40:20.469130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:11.821 [2024-07-15 22:40:20.469143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:33528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.821 [2024-07-15 22:40:20.469150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:24:11.821 [2024-07-15 22:40:20.469162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:33536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.821 [2024-07-15 22:40:20.469170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:11.821 [2024-07-15 22:40:20.469182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:33544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.821 [2024-07-15 22:40:20.469189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:11.821 [2024-07-15 22:40:20.469201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:33552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.821 [2024-07-15 22:40:20.469208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:24:11.821 [2024-07-15 22:40:20.469220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:33560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.821 [2024-07-15 22:40:20.469232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:24:11.821 [2024-07-15 22:40:20.469245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:33568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.821 [2024-07-15 22:40:20.469252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:24:11.821 [2024-07-15 22:40:20.469264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:33576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.821 [2024-07-15 22:40:20.469271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:104 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:11.821 [2024-07-15 22:40:20.469284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:33584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.821 [2024-07-15 22:40:20.469296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:24:11.821 [2024-07-15 22:40:20.469309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:33592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.821 [2024-07-15 22:40:20.469316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:11.821 [2024-07-15 22:40:20.469328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:33600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.821 [2024-07-15 22:40:20.469335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:24:11.821 [2024-07-15 22:40:20.469347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:33608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.821 [2024-07-15 22:40:20.469353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:11.821 [2024-07-15 22:40:20.469366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:33616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.821 [2024-07-15 22:40:20.469373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:24:11.821 [2024-07-15 22:40:20.469386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:33624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.821 [2024-07-15 22:40:20.469392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:24:11.821 [2024-07-15 22:40:20.469404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:33632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.821 [2024-07-15 22:40:20.469411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:24:11.821 [2024-07-15 22:40:20.469424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:33640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.821 [2024-07-15 22:40:20.469431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:24:11.821 [2024-07-15 22:40:20.469444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:33648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.821 [2024-07-15 22:40:20.469451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:24:11.821 [2024-07-15 22:40:20.469463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:33656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.821 [2024-07-15 22:40:20.469471] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:24:11.821 [2024-07-15 22:40:20.469483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:33664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.821 [2024-07-15 22:40:20.469490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:24:11.821 [2024-07-15 22:40:20.469502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:33672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.821 [2024-07-15 22:40:20.469508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:24:11.821 [2024-07-15 22:40:20.469521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:33680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.821 [2024-07-15 22:40:20.469527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:11.821 [2024-07-15 22:40:20.469541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:33688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.821 [2024-07-15 22:40:20.469548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:24:11.821 [2024-07-15 22:40:20.469561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:33696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.821 [2024-07-15 22:40:20.469568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:11.821 [2024-07-15 22:40:20.469580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:33704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.821 [2024-07-15 22:40:20.469586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:24:11.821 [2024-07-15 22:40:20.469599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:33712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.821 [2024-07-15 22:40:20.469605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:24:11.821 [2024-07-15 22:40:20.469618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:33720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.821 [2024-07-15 22:40:20.469625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:24:11.821 [2024-07-15 22:40:20.469637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:33728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.821 [2024-07-15 22:40:20.469644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:11.821 [2024-07-15 22:40:20.469656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:33736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
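
Each NOTICE pair in this dump is the SPDK NVMe driver printing an in-flight WRITE on I/O qpair 1 together with its completion. The "(03/02)" in the completion is status code type 0x3 (path-related) / status code 0x2, Asymmetric Access Inaccessible: the expected outcome for commands caught in flight when a listener is flipped to inaccessible, after which the bdev multipath layer retries them on the other path. A quick, illustrative way to triage such a dump (the path is the try.txt the trace cats above; the grep patterns are assumptions about this log format):

    # Count commands that completed with the ANA-inaccessible path status.
    log=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
    grep -c 'ASYMMETRIC ACCESS INACCESSIBLE (03/02)' "$log"

    # List the affected LBAs; the records above step by 8 blocks per WRITE
    # (lba:33504, 33512, ...), a cheap way to eyeball which stripe of the
    # workload was in flight when the path went away.
    grep -o 'lba:[0-9]*' "$log" | cut -d: -f2 | sort -n | uniq | head
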
00:24:11.821 [2024-07-15 22:40:20.469663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:24:11.821 [2024-07-15 22:40:20.469675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:33744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:11.821 [2024-07-15 22:40:20.469681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:007b p:0 m:0 dnr:0
00:24:11.821 [2024-07-15 22:40:20.469694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:33752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:11.821 [2024-07-15 22:40:20.469700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:007c p:0 m:0 dnr:0
[... several hundred further nvme_io_qpair_print_command / spdk_nvme_print_completion *NOTICE* pairs from 22:40:20.470434 through 22:40:20.487030 elided: WRITE (SGL DATA BLOCK OFFSET 0x0 len:0x1000) and READ (SGL TRANSPORT DATA BLOCK TRANSPORT 0x0) commands on sqid:1 nsid:1, lba:33376-34392, len:8, each completing with ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 p:0 m:0 dnr:0, sqhd advancing and wrapping from 007d to 0048 ...]
00:24:11.827 [2024-07-15 22:40:20.487023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:34360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:11.827 [2024-07-15 22:40:20.487030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0048 p:0 m:0 dnr:0
00:24:11.827 [2024-07-15
22:40:20.487042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:34368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.827 [2024-07-15 22:40:20.487049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:24:11.827 [2024-07-15 22:40:20.487061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:34376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.827 [2024-07-15 22:40:20.487067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:24:11.827 [2024-07-15 22:40:20.487079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:34384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.827 [2024-07-15 22:40:20.487086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:11.827 [2024-07-15 22:40:20.487098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:33376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.827 [2024-07-15 22:40:20.487104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:24:11.827 [2024-07-15 22:40:20.487117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:33384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.827 [2024-07-15 22:40:20.487123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:24:11.827 [2024-07-15 22:40:20.487135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:33392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.827 [2024-07-15 22:40:20.487143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:24:11.827 [2024-07-15 22:40:20.487155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:33400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.827 [2024-07-15 22:40:20.487161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:24:11.827 [2024-07-15 22:40:20.487173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:33408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.827 [2024-07-15 22:40:20.487182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:24:11.827 [2024-07-15 22:40:20.487194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:33416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.827 [2024-07-15 22:40:20.487201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:24:11.827 [2024-07-15 22:40:20.487213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:33424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.827 [2024-07-15 22:40:20.487220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:46 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:24:11.827 [2024-07-15 22:40:20.487237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:33432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.827 [2024-07-15 22:40:20.487244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:24:11.827 [2024-07-15 22:40:20.487256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:33440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.827 [2024-07-15 22:40:20.487262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:24:11.827 [2024-07-15 22:40:20.487275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:33448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.827 [2024-07-15 22:40:20.487281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:24:11.827 [2024-07-15 22:40:20.487293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:33456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.827 [2024-07-15 22:40:20.487300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:24:11.827 [2024-07-15 22:40:20.487312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:33464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.827 [2024-07-15 22:40:20.487319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:24:11.827 [2024-07-15 22:40:20.487331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:33472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.827 [2024-07-15 22:40:20.487338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:24:11.827 [2024-07-15 22:40:20.487350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:33480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.827 [2024-07-15 22:40:20.487356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:24:11.827 [2024-07-15 22:40:20.487368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:33488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.827 [2024-07-15 22:40:20.487375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:11.827 [2024-07-15 22:40:20.487387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:34392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.827 [2024-07-15 22:40:20.487394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:11.827 [2024-07-15 22:40:20.487407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:33496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.827 [2024-07-15 22:40:20.487415] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:24:11.827 [2024-07-15 22:40:20.487427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:33504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.827 [2024-07-15 22:40:20.487434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:24:11.827 [2024-07-15 22:40:20.487447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:33512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.827 [2024-07-15 22:40:20.487454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:24:11.827 [2024-07-15 22:40:20.487466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:33520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.827 [2024-07-15 22:40:20.487473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:11.827 [2024-07-15 22:40:20.487485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:33528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.827 [2024-07-15 22:40:20.487491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:24:11.827 [2024-07-15 22:40:20.487504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:33536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.827 [2024-07-15 22:40:20.487510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:11.827 [2024-07-15 22:40:20.487522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:33544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.827 [2024-07-15 22:40:20.487529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:11.827 [2024-07-15 22:40:20.487541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:33552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.827 [2024-07-15 22:40:20.487548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:24:11.827 [2024-07-15 22:40:20.487560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:33560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.827 [2024-07-15 22:40:20.487566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:24:11.827 [2024-07-15 22:40:20.487578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:33568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.827 [2024-07-15 22:40:20.487585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:24:11.827 [2024-07-15 22:40:20.487597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:33576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.827 [2024-07-15 22:40:20.487603] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:11.827 [2024-07-15 22:40:20.487618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:33584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.827 [2024-07-15 22:40:20.487624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:24:11.827 [2024-07-15 22:40:20.487636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:33592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.827 [2024-07-15 22:40:20.487643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:11.827 [2024-07-15 22:40:20.487657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:33600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.827 [2024-07-15 22:40:20.487663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:24:11.827 [2024-07-15 22:40:20.487675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:33608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.827 [2024-07-15 22:40:20.487682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:11.827 [2024-07-15 22:40:20.487694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:33616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.827 [2024-07-15 22:40:20.487700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:24:11.827 [2024-07-15 22:40:20.487714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:33624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.827 [2024-07-15 22:40:20.487721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:24:11.827 [2024-07-15 22:40:20.487733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:33632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.827 [2024-07-15 22:40:20.487739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:24:11.827 [2024-07-15 22:40:20.487753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:33640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.827 [2024-07-15 22:40:20.487760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:24:11.827 [2024-07-15 22:40:20.487773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:33648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.827 [2024-07-15 22:40:20.487779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:24:11.827 [2024-07-15 22:40:20.487791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:33656 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:24:11.827 [2024-07-15 22:40:20.487798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:24:11.827 [2024-07-15 22:40:20.487810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:33664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.828 [2024-07-15 22:40:20.487817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:24:11.828 [2024-07-15 22:40:20.487830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:33672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.828 [2024-07-15 22:40:20.487836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:24:11.828 [2024-07-15 22:40:20.488506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:33680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.828 [2024-07-15 22:40:20.488521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:11.828 [2024-07-15 22:40:20.488537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:33688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.828 [2024-07-15 22:40:20.488544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:24:11.828 [2024-07-15 22:40:20.488560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:33696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.828 [2024-07-15 22:40:20.488566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:11.828 [2024-07-15 22:40:20.488579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:33704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.828 [2024-07-15 22:40:20.488586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:24:11.828 [2024-07-15 22:40:20.488598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:33712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.828 [2024-07-15 22:40:20.488605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:24:11.828 [2024-07-15 22:40:20.488617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:33720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.828 [2024-07-15 22:40:20.488624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:24:11.828 [2024-07-15 22:40:20.488636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:33728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.828 [2024-07-15 22:40:20.488642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:11.828 [2024-07-15 22:40:20.488655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:79 nsid:1 lba:33736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.828 [2024-07-15 22:40:20.488662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:11.828 [2024-07-15 22:40:20.488676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:33744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.828 [2024-07-15 22:40:20.488684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:11.828 [2024-07-15 22:40:20.488697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:33752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.828 [2024-07-15 22:40:20.488704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:11.828 [2024-07-15 22:40:20.488717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:33760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.828 [2024-07-15 22:40:20.488727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:11.828 [2024-07-15 22:40:20.488742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:33768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.828 [2024-07-15 22:40:20.488749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:11.828 [2024-07-15 22:40:20.488761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:33776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.828 [2024-07-15 22:40:20.488768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:11.828 [2024-07-15 22:40:20.488780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:33784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.828 [2024-07-15 22:40:20.488787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.828 [2024-07-15 22:40:20.488799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:33792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.828 [2024-07-15 22:40:20.488808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:11.828 [2024-07-15 22:40:20.488820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:33800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.828 [2024-07-15 22:40:20.488827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:11.828 [2024-07-15 22:40:20.488839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:33808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.828 [2024-07-15 22:40:20.488846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:24:11.828 [2024-07-15 22:40:20.488858] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:33816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.828 [2024-07-15 22:40:20.488865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:24:11.828 [2024-07-15 22:40:20.488878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:33824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.828 [2024-07-15 22:40:20.488885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:24:11.828 [2024-07-15 22:40:20.488897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:33832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.828 [2024-07-15 22:40:20.488903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:11.828 [2024-07-15 22:40:20.488916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:33840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.828 [2024-07-15 22:40:20.488923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:24:11.828 [2024-07-15 22:40:20.488935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:33848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.828 [2024-07-15 22:40:20.488941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:24:11.828 [2024-07-15 22:40:20.488954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:33856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.828 [2024-07-15 22:40:20.488960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:24:11.828 [2024-07-15 22:40:20.488972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:33864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.828 [2024-07-15 22:40:20.488979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:24:11.828 [2024-07-15 22:40:20.488991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:33872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.828 [2024-07-15 22:40:20.488998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:24:11.828 [2024-07-15 22:40:20.489010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:33880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.828 [2024-07-15 22:40:20.489017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:24:11.828 [2024-07-15 22:40:20.489029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:33888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.828 [2024-07-15 22:40:20.489037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:000d p:0 m:0 dnr:0 
00:24:11.828 [2024-07-15 22:40:20.489051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:33896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.828 [2024-07-15 22:40:20.489059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:24:11.828 [2024-07-15 22:40:20.489071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:33904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.828 [2024-07-15 22:40:20.489078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:11.828 [2024-07-15 22:40:20.489091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:33912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.828 [2024-07-15 22:40:20.489098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:24:11.828 [2024-07-15 22:40:20.489110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:33920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.828 [2024-07-15 22:40:20.489117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:24:11.828 [2024-07-15 22:40:20.489129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:33928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.828 [2024-07-15 22:40:20.489136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:24:11.828 [2024-07-15 22:40:20.489148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:33936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.828 [2024-07-15 22:40:20.489155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:24:11.828 [2024-07-15 22:40:20.489167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:33944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.828 [2024-07-15 22:40:20.489173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:24:11.828 [2024-07-15 22:40:20.489187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:33952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.828 [2024-07-15 22:40:20.489193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:24:11.828 [2024-07-15 22:40:20.489205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:33960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.828 [2024-07-15 22:40:20.489212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:24:11.828 [2024-07-15 22:40:20.489230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:33968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.828 [2024-07-15 22:40:20.489237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:87 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:24:11.828 [2024-07-15 22:40:20.489250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:33976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.828 [2024-07-15 22:40:20.489256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:24:11.828 [2024-07-15 22:40:20.489269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:33984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.828 [2024-07-15 22:40:20.489275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:24:11.828 [2024-07-15 22:40:20.489290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:33992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.828 [2024-07-15 22:40:20.489297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:24:11.829 [2024-07-15 22:40:20.489309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:34000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.829 [2024-07-15 22:40:20.489316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:24:11.829 [2024-07-15 22:40:20.489328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:34008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.829 [2024-07-15 22:40:20.489335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:24:11.829 [2024-07-15 22:40:20.489347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:34016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.829 [2024-07-15 22:40:20.489354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:24:11.829 [2024-07-15 22:40:20.489369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:34024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.829 [2024-07-15 22:40:20.489376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:24:11.829 [2024-07-15 22:40:20.489388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:34032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.829 [2024-07-15 22:40:20.489395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:24:11.829 [2024-07-15 22:40:20.489408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:34040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.829 [2024-07-15 22:40:20.489416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:24:11.829 [2024-07-15 22:40:20.489430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:34048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.829 [2024-07-15 22:40:20.489440] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:11.829 [2024-07-15 22:40:20.489454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:34056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.829 [2024-07-15 22:40:20.489462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:11.829 [2024-07-15 22:40:20.490015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:34064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.829 [2024-07-15 22:40:20.490027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:24:11.829 [2024-07-15 22:40:20.490042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:34072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.829 [2024-07-15 22:40:20.490049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:24:11.829 [2024-07-15 22:40:20.490062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:34080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.829 [2024-07-15 22:40:20.490069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:24:11.829 [2024-07-15 22:40:20.490084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:34088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.829 [2024-07-15 22:40:20.490091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:11.829 [2024-07-15 22:40:20.490103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:34096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.829 [2024-07-15 22:40:20.490110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:24:11.829 [2024-07-15 22:40:20.490122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:34104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.829 [2024-07-15 22:40:20.490129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:24:11.829 [2024-07-15 22:40:20.490142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:34112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.829 [2024-07-15 22:40:20.490148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:24:11.829 [2024-07-15 22:40:20.490160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:34120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.829 [2024-07-15 22:40:20.490167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:24:11.829 [2024-07-15 22:40:20.490180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:34128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:24:11.829 [2024-07-15 22:40:20.490187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:24:11.829 [2024-07-15 22:40:20.490199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:34136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.829 [2024-07-15 22:40:20.490206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:24:11.829 [2024-07-15 22:40:20.490218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:34144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.829 [2024-07-15 22:40:20.490229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:11.829 [2024-07-15 22:40:20.490243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:34152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.829 [2024-07-15 22:40:20.490250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:11.829 [2024-07-15 22:40:20.490262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:34160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.829 [2024-07-15 22:40:20.490269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:24:11.829 [2024-07-15 22:40:20.490281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:34168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.829 [2024-07-15 22:40:20.490288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:24:11.829 [2024-07-15 22:40:20.490300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:34176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.829 [2024-07-15 22:40:20.490307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:24:11.829 [2024-07-15 22:40:20.490319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:34184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.829 [2024-07-15 22:40:20.490327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:24:11.829 [2024-07-15 22:40:20.490339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:34192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.829 [2024-07-15 22:40:20.490346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:24:11.829 [2024-07-15 22:40:20.490509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:34200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.829 [2024-07-15 22:40:20.490518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:24:11.829 [2024-07-15 22:40:20.490531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 
lba:34208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.829 [2024-07-15 22:40:20.490538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:24:11.829 [2024-07-15 22:40:20.490550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:34216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.829 [2024-07-15 22:40:20.490557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:24:11.829 [2024-07-15 22:40:20.490569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:34224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.829 [2024-07-15 22:40:20.490576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:24:11.829 [2024-07-15 22:40:20.490588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:34232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.829 [2024-07-15 22:40:20.490595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:24:11.829 [2024-07-15 22:40:20.490607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:34240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.829 [2024-07-15 22:40:20.490614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:24:11.829 [2024-07-15 22:40:20.490626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:34248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.829 [2024-07-15 22:40:20.490633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:24:11.829 [2024-07-15 22:40:20.490645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:34256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.829 [2024-07-15 22:40:20.490652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:11.829 [2024-07-15 22:40:20.490664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:34264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.829 [2024-07-15 22:40:20.490671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:24:11.829 [2024-07-15 22:40:20.490683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:34272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.829 [2024-07-15 22:40:20.490689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:24:11.829 [2024-07-15 22:40:20.490702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:34280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.829 [2024-07-15 22:40:20.490710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:24:11.830 [2024-07-15 22:40:20.490723] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:34288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.830 [2024-07-15 22:40:20.490729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:24:11.830 [2024-07-15 22:40:20.490742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:34296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.830 [2024-07-15 22:40:20.490748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:24:11.830 [2024-07-15 22:40:20.490760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:34304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.830 [2024-07-15 22:40:20.490767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:11.830 [2024-07-15 22:40:20.490779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:34312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.830 [2024-07-15 22:40:20.490786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:11.830 [2024-07-15 22:40:20.490798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:34320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.830 [2024-07-15 22:40:20.490805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:24:11.830 [2024-07-15 22:40:20.490817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:34328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.830 [2024-07-15 22:40:20.490824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:24:11.830 [2024-07-15 22:40:20.490836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:34336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.830 [2024-07-15 22:40:20.490843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:24:11.830 [2024-07-15 22:40:20.490855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:34344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.830 [2024-07-15 22:40:20.490862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:11.830 [2024-07-15 22:40:20.490874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:34352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.830 [2024-07-15 22:40:20.490881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:24:11.830 [2024-07-15 22:40:20.490893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:34360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.830 [2024-07-15 22:40:20.490900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 
00:24:11.830 [2024-07-15 22:40:20.490912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:34368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:11.830 [2024-07-15 22:40:20.490919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0049 p:0 m:0 dnr:0
00:24:11.830 [2024-07-15 22:40:20.496796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:34376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:11.830 [2024-07-15 22:40:20.496805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:004a p:0 m:0 dnr:0
00:24:11.830 [2024-07-15 22:40:20.496821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:34384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:11.830 [2024-07-15 22:40:20.496827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:004b p:0 m:0 dnr:0
[... the same nvme_io_qpair_print_command / spdk_nvme_print_completion NOTICE pair repeats for every outstanding I/O on qid:1 — READs (lba 33376-33496, SGL TRANSPORT DATA BLOCK) and WRITEs (lba 33504-34392, SGL DATA BLOCK OFFSET), len:8 each; every completion reports ASYMMETRIC ACCESS INACCESSIBLE (03/02) with p:0 m:0 dnr:0, sqhd advancing through 0x007f, wrapping to 0x0000, and a second sweep of the same LBA range following under new cids (elapsed 00:24:11.830-00:24:11.835, 2024-07-15 22:40:20.496840 through 22:40:20.502682) ...]
00:24:11.835 [2024-07-15 22:40:20.502694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:33960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:11.835 [2024-07-15 22:40:20.502701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0016 p:0 m:0 dnr:0
00:24:11.835 [2024-07-15 22:40:20.502713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:33968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:11.835 [2024-07-15 22:40:20.502720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:24:11.835 [2024-07-15 22:40:20.502732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:33976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.835 [2024-07-15 22:40:20.502739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:24:11.835 [2024-07-15 22:40:20.502751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:33984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.835 [2024-07-15 22:40:20.502758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:24:11.835 [2024-07-15 22:40:20.502770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:33992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.835 [2024-07-15 22:40:20.502777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:24:11.835 [2024-07-15 22:40:20.502791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:34000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.835 [2024-07-15 22:40:20.502798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:24:11.835 [2024-07-15 22:40:20.502810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:34008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.835 [2024-07-15 22:40:20.502817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:24:11.835 [2024-07-15 22:40:20.502830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:34016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.835 [2024-07-15 22:40:20.502837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:24:11.835 [2024-07-15 22:40:20.502851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:34024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.835 [2024-07-15 22:40:20.502858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:24:11.835 [2024-07-15 22:40:20.502870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:34032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.835 [2024-07-15 22:40:20.502877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:24:11.835 [2024-07-15 22:40:20.502889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:34040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.835 [2024-07-15 22:40:20.502896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:24:11.835 [2024-07-15 22:40:20.503305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 
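Aside from its volume, the trace above is regular: SPDK logs each failed I/O as an nvme_io_qpair_print_command line followed by an spdk_nvme_print_completion line carrying the same cid. A minimal sketch of an offline aggregator for a flood like this (Python; the two regexes assume only the *NOTICE* formats visible here, and summarize() is an illustrative helper, not part of the SPDK test suite):

    import re
    from collections import Counter

    # Command trace, e.g. "... 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:33656 len:8 ..."
    CMD_RE = re.compile(r"nvme_io_qpair_print_command: \*NOTICE\*: (READ|WRITE) sqid:(\d+) cid:(\d+) nsid:(\d+) lba:(\d+) len:(\d+)")
    # Completion trace, e.g. "... 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 ..."
    CPL_RE = re.compile(r"spdk_nvme_print_completion: \*NOTICE\*: (.+?) \(([0-9a-f]{2})/([0-9a-f]{2})\)")

    def summarize(lines):
        """Count completions per status string and track the LBA span touched per opcode."""
        status = Counter()
        span = {}
        for line in lines:
            m = CMD_RE.search(line)
            if m:
                op, lba = m.group(1), int(m.group(5))
                lo, hi = span.get(op, (lba, lba))
                span[op] = (min(lo, lba), max(hi, lba))
            m = CPL_RE.search(line)
            if m:
                status[f"{m.group(1)} ({m.group(2)}/{m.group(3)})"] += 1
        return status, span

Fed this section, status would hold a single key, ASYMMETRIC ACCESS INACCESSIBLE (03/02), and span would show WRITE covering roughly lba 33504-34392 and READ 33376-33496.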
00:24:11.835 [2024-07-15 22:40:20.503305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:34048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:11.835 [2024-07-15 22:40:20.503317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
[... 42 further WRITE command/completion pairs: sqid:1, lba 34056-34384 in len:8 steps, sqhd 0022-004b, every completion ASYMMETRIC ACCESS INACCESSIBLE (03/02) p:0 m:0 dnr:0 ...]
00:24:11.836 [2024-07-15 22:40:20.504149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:33376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:11.836 [2024-07-15 22:40:20.504156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:004c p:0 m:0 dnr:0
[... 6 further READ command/completion pairs: sqid:1, lba 33384-33424 in len:8 steps, sqhd 004d-0052, every completion ASYMMETRIC ACCESS INACCESSIBLE (03/02) p:0 m:0 dnr:0 ...]
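On the status itself: SPDK prints NVMe completion status as (SCT/SC) in hex, so the (03/02) repeated above is Status Code Type 0x3 (Path Related Status) with Status Code 0x2, which the NVMe base specification defines as Asymmetric Access Inaccessible: the ANA group serving this namespace is not reachable over this path. dnr:0 means Do Not Retry is clear, consistent with the same LBA range reappearing below as the host retries. A small decoding sketch (the table is a subset of the spec's Path Related Status codes, and decode() is an illustrative helper; verify against the spec revision in use):

    # SPDK prints NVMe completion status as (SCT/SC) in hex.
    # Subset of Status Code Type 0x3, "Path Related Status", from the NVMe base spec:
    PATH_RELATED_SC = {
        0x00: "Internal Path Error",
        0x01: "Asymmetric Access Persistent Loss",
        0x02: "Asymmetric Access Inaccessible",
        0x03: "Asymmetric Access Transition",
    }

    def decode(sct: int, sc: int) -> str:
        """Decode an (SCT/SC) pair as printed by spdk_nvme_print_completion."""
        if sct == 0x3:
            return PATH_RELATED_SC.get(sc, f"unknown path-related status {sc:#04x}")
        return f"SCT {sct:#x} / SC {sc:#x}"

    print(decode(0x3, 0x2))  # -> Asymmetric Access Inaccessible, i.e. the "(03/02)" above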
00:24:11.836 [2024-07-15 22:40:20.504287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:33432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:11.836 [2024-07-15 22:40:20.504294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0053 p:0 m:0 dnr:0
[... 7 further READ command/completion pairs: sqid:1, lba 33440-33488 in len:8 steps, sqhd 0054-005a, every completion ASYMMETRIC ACCESS INACCESSIBLE (03/02) p:0 m:0 dnr:0 ...]
00:24:11.837 [2024-07-15 22:40:20.504865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:34392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:11.837 [2024-07-15 22:40:20.504871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:005b p:0 m:0 dnr:0
00:24:11.837 [2024-07-15 22:40:20.504884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:33496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:11.837 [2024-07-15 22:40:20.504891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:005c p:0 m:0 dnr:0
[... 97 further WRITE command/completion pairs: sqid:1, lba 33504-34272 in len:8 steps, overlapping the range already retried above, sqhd 005d-003d, every completion ASYMMETRIC ACCESS INACCESSIBLE (03/02) p:0 m:0 dnr:0 ...]
00:24:11.839 [2024-07-15 22:40:20.511066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:34280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:11.839 [2024-07-15 22:40:20.511072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02)
qid:1 cid:19 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:24:11.839 [2024-07-15 22:40:20.511084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:34288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.839 [2024-07-15 22:40:20.511091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:24:11.839 [2024-07-15 22:40:20.511103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:34296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.839 [2024-07-15 22:40:20.511110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:24:11.839 [2024-07-15 22:40:20.511122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:34304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.839 [2024-07-15 22:40:20.511129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:11.839 [2024-07-15 22:40:20.511141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:34312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.839 [2024-07-15 22:40:20.511148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:11.839 [2024-07-15 22:40:20.511160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:34320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.839 [2024-07-15 22:40:20.511167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:24:11.839 [2024-07-15 22:40:20.511179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:34328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.839 [2024-07-15 22:40:20.511186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:24:11.839 [2024-07-15 22:40:20.511199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:34336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.839 [2024-07-15 22:40:20.511205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:24:11.839 [2024-07-15 22:40:20.511218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:34344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.839 [2024-07-15 22:40:20.511231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:11.839 [2024-07-15 22:40:20.511243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:34352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.839 [2024-07-15 22:40:20.511250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:24:11.839 [2024-07-15 22:40:20.511263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:34360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.839 [2024-07-15 22:40:20.511270] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:24:11.839 [2024-07-15 22:40:20.511283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:34368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.839 [2024-07-15 22:40:20.511289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:24:11.839 [2024-07-15 22:40:20.511301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:34376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.839 [2024-07-15 22:40:20.511308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:24:11.839 [2024-07-15 22:40:20.511321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:34384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.839 [2024-07-15 22:40:20.511328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:11.839 [2024-07-15 22:40:20.511340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:33376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.839 [2024-07-15 22:40:20.511346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:24:11.839 [2024-07-15 22:40:20.511359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:33384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.839 [2024-07-15 22:40:20.511366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:24:11.839 [2024-07-15 22:40:20.511378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:33392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.839 [2024-07-15 22:40:20.511386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:24:11.839 [2024-07-15 22:40:20.511398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:33400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.839 [2024-07-15 22:40:20.511406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:24:11.839 [2024-07-15 22:40:20.511418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:33408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.840 [2024-07-15 22:40:20.511425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:24:11.840 [2024-07-15 22:40:20.511437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:33416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.840 [2024-07-15 22:40:20.511445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:24:11.840 [2024-07-15 22:40:20.511459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:33424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.840 [2024-07-15 
22:40:20.511468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:24:11.840 [2024-07-15 22:40:20.511480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:33432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.840 [2024-07-15 22:40:20.511487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:24:11.840 [2024-07-15 22:40:20.511499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:33440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.840 [2024-07-15 22:40:20.511506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:24:11.840 [2024-07-15 22:40:20.511518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:33448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.840 [2024-07-15 22:40:20.511524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:24:11.840 [2024-07-15 22:40:20.511537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:33456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.840 [2024-07-15 22:40:20.511544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:24:11.840 [2024-07-15 22:40:20.511556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:33464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.840 [2024-07-15 22:40:20.511562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:24:11.840 [2024-07-15 22:40:20.511575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:33472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.840 [2024-07-15 22:40:20.511581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:24:11.840 [2024-07-15 22:40:20.511594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:33480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.840 [2024-07-15 22:40:20.511600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:24:11.840 [2024-07-15 22:40:20.511612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:33488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.840 [2024-07-15 22:40:20.511619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:11.840 [2024-07-15 22:40:20.511633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:34392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.840 [2024-07-15 22:40:20.511640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:11.840 [2024-07-15 22:40:20.511653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:33496 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.840 [2024-07-15 22:40:20.511661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:24:11.840 [2024-07-15 22:40:20.511674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:33504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.840 [2024-07-15 22:40:20.511680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:24:11.840 [2024-07-15 22:40:20.511693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:33512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.840 [2024-07-15 22:40:20.511701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:24:11.840 [2024-07-15 22:40:20.511715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:33520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.840 [2024-07-15 22:40:20.511722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:11.840 [2024-07-15 22:40:20.511734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:33528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.840 [2024-07-15 22:40:20.511741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:24:11.840 [2024-07-15 22:40:20.511753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:33536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.840 [2024-07-15 22:40:20.511760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:11.840 [2024-07-15 22:40:20.511772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:33544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.840 [2024-07-15 22:40:20.511779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:11.840 [2024-07-15 22:40:20.511791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:33552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.840 [2024-07-15 22:40:20.511797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:24:11.840 [2024-07-15 22:40:20.511810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:33560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.840 [2024-07-15 22:40:20.511816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:24:11.840 [2024-07-15 22:40:20.511829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:33568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.840 [2024-07-15 22:40:20.511835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:24:11.840 [2024-07-15 22:40:20.511847] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:33576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.840 [2024-07-15 22:40:20.511854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:11.840 [2024-07-15 22:40:20.511866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:33584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.840 [2024-07-15 22:40:20.511872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:24:11.840 [2024-07-15 22:40:20.511884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:33592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.840 [2024-07-15 22:40:20.511891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:11.840 [2024-07-15 22:40:20.511903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:33600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.840 [2024-07-15 22:40:20.511910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:24:11.840 [2024-07-15 22:40:20.511922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:33608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.840 [2024-07-15 22:40:20.511929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:11.840 [2024-07-15 22:40:20.511943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:33616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.840 [2024-07-15 22:40:20.511949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:24:11.840 [2024-07-15 22:40:20.511962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:33624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.840 [2024-07-15 22:40:20.511968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:24:11.840 [2024-07-15 22:40:20.511980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:33632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.840 [2024-07-15 22:40:20.511987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:24:11.840 [2024-07-15 22:40:20.511999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:33640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.840 [2024-07-15 22:40:20.512006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:24:11.840 [2024-07-15 22:40:20.512019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:33648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.840 [2024-07-15 22:40:20.512025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:24:11.840 [2024-07-15 22:40:20.512669] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:33656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.840 [2024-07-15 22:40:20.512682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:24:11.840 [2024-07-15 22:40:20.512697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:33664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.840 [2024-07-15 22:40:20.512704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:24:11.840 [2024-07-15 22:40:20.512716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:33672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.840 [2024-07-15 22:40:20.512723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:24:11.840 [2024-07-15 22:40:20.512736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:33680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.840 [2024-07-15 22:40:20.512743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:11.840 [2024-07-15 22:40:20.512755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:33688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.840 [2024-07-15 22:40:20.512762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:24:11.840 [2024-07-15 22:40:20.512775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:33696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.840 [2024-07-15 22:40:20.512781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:11.840 [2024-07-15 22:40:20.512794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:33704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.840 [2024-07-15 22:40:20.512801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:24:11.841 [2024-07-15 22:40:20.512814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:33712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.841 [2024-07-15 22:40:20.512824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:24:11.841 [2024-07-15 22:40:20.512837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:33720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.841 [2024-07-15 22:40:20.512844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:24:11.841 [2024-07-15 22:40:20.512856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:33728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.841 [2024-07-15 22:40:20.512864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0079 
p:0 m:0 dnr:0 00:24:11.841 [2024-07-15 22:40:20.512877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:33736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.841 [2024-07-15 22:40:20.512884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:11.841 [2024-07-15 22:40:20.512897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:33744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.841 [2024-07-15 22:40:20.512904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:11.841 [2024-07-15 22:40:20.512916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:33752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.841 [2024-07-15 22:40:20.512923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:11.841 [2024-07-15 22:40:20.512935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:33760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.841 [2024-07-15 22:40:20.512942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:11.841 [2024-07-15 22:40:20.512955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:33768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.841 [2024-07-15 22:40:20.512961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:11.841 [2024-07-15 22:40:20.512973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:33776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.841 [2024-07-15 22:40:20.512980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:11.841 [2024-07-15 22:40:20.512992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:33784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.841 [2024-07-15 22:40:20.512999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.841 [2024-07-15 22:40:20.513011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:33792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.841 [2024-07-15 22:40:20.513018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:11.841 [2024-07-15 22:40:20.513030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:33800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.841 [2024-07-15 22:40:20.513037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:11.841 [2024-07-15 22:40:20.513049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:33808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.841 [2024-07-15 22:40:20.513057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:24:11.841 [2024-07-15 22:40:20.513070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:33816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.841 [2024-07-15 22:40:20.513077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:24:11.841 [2024-07-15 22:40:20.513089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:33824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.841 [2024-07-15 22:40:20.513096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:24:11.841 [2024-07-15 22:40:20.513108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:33832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.841 [2024-07-15 22:40:20.513115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:11.841 [2024-07-15 22:40:20.513127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:33840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.841 [2024-07-15 22:40:20.513134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:24:11.841 [2024-07-15 22:40:20.513146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:33848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.841 [2024-07-15 22:40:20.513153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:24:11.841 [2024-07-15 22:40:20.513165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:33856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.841 [2024-07-15 22:40:20.513172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:24:11.841 [2024-07-15 22:40:20.513184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:33864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.841 [2024-07-15 22:40:20.513190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:24:11.841 [2024-07-15 22:40:20.513202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:33872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.841 [2024-07-15 22:40:20.513209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:24:11.841 [2024-07-15 22:40:20.513221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:33880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.841 [2024-07-15 22:40:20.513235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:24:11.841 [2024-07-15 22:40:20.513247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:33888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.841 [2024-07-15 22:40:20.513254] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:24:11.841 [2024-07-15 22:40:20.513266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:33896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.841 [2024-07-15 22:40:20.513273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:24:11.841 [2024-07-15 22:40:20.513285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:33904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.841 [2024-07-15 22:40:20.513292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:11.841 [2024-07-15 22:40:20.513305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:33912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.841 [2024-07-15 22:40:20.513312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:24:11.841 [2024-07-15 22:40:20.513324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:33920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.841 [2024-07-15 22:40:20.513331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:24:11.841 [2024-07-15 22:40:20.513343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:33928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.841 [2024-07-15 22:40:20.513350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:24:11.841 [2024-07-15 22:40:20.513362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:33936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.841 [2024-07-15 22:40:20.513369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:24:11.841 [2024-07-15 22:40:20.513381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:33944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.841 [2024-07-15 22:40:20.513388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:24:11.841 [2024-07-15 22:40:20.513401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:33952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.841 [2024-07-15 22:40:20.513408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:24:11.841 [2024-07-15 22:40:20.513420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:33960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.841 [2024-07-15 22:40:20.513427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:24:11.841 [2024-07-15 22:40:20.513439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:33968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:24:11.841 [2024-07-15 22:40:20.513447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:24:11.841 [2024-07-15 22:40:20.513459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:33976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.841 [2024-07-15 22:40:20.513466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:24:11.841 [2024-07-15 22:40:20.513479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:33984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.841 [2024-07-15 22:40:20.513486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:24:11.841 [2024-07-15 22:40:20.513499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:33992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.841 [2024-07-15 22:40:20.513506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:24:11.841 [2024-07-15 22:40:20.513518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:34000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.841 [2024-07-15 22:40:20.513525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:24:11.841 [2024-07-15 22:40:20.513538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:34008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.841 [2024-07-15 22:40:20.513545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:24:11.841 [2024-07-15 22:40:20.513557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:34016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.841 [2024-07-15 22:40:20.513564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:24:11.841 [2024-07-15 22:40:20.513576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:34024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.841 [2024-07-15 22:40:20.513583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:24:11.842 [2024-07-15 22:40:20.513595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:34032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.842 [2024-07-15 22:40:20.513603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:24:11.842 [2024-07-15 22:40:20.514125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:34040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.842 [2024-07-15 22:40:20.514138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:24:11.842 [2024-07-15 22:40:20.514151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 
lba:34048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.842 [2024-07-15 22:40:20.514158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:11.842 [2024-07-15 22:40:20.514171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:34056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.842 [2024-07-15 22:40:20.514178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:11.842 [2024-07-15 22:40:20.514191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:34064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.842 [2024-07-15 22:40:20.514198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:24:11.842 [2024-07-15 22:40:20.514210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:34072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.842 [2024-07-15 22:40:20.514217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:24:11.842 [2024-07-15 22:40:20.514234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:34080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.842 [2024-07-15 22:40:20.514241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:24:11.842 [2024-07-15 22:40:20.514253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:34088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.842 [2024-07-15 22:40:20.514260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:11.842 [2024-07-15 22:40:20.514272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:34096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.842 [2024-07-15 22:40:20.514279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:24:11.842 [2024-07-15 22:40:20.514291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:34104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.842 [2024-07-15 22:40:20.514300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:24:11.842 [2024-07-15 22:40:20.514312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:34112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.842 [2024-07-15 22:40:20.514319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:24:11.842 [2024-07-15 22:40:20.514331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:34120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.842 [2024-07-15 22:40:20.514338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:24:11.842 [2024-07-15 22:40:20.514350] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:34128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.842 [2024-07-15 22:40:20.514357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:24:11.842 [2024-07-15 22:40:20.514369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:34136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.842 [2024-07-15 22:40:20.514376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:24:11.842 [2024-07-15 22:40:20.514388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:34144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.842 [2024-07-15 22:40:20.514394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:11.842 [2024-07-15 22:40:20.514407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:34152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.842 [2024-07-15 22:40:20.514414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:11.842 [2024-07-15 22:40:20.514426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:34160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.842 [2024-07-15 22:40:20.514433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:24:11.842 [2024-07-15 22:40:20.514445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:34168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.842 [2024-07-15 22:40:20.514452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:24:11.842 [2024-07-15 22:40:20.514623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:34176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.842 [2024-07-15 22:40:20.514632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:24:11.842 [2024-07-15 22:40:20.514645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:34184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.842 [2024-07-15 22:40:20.514653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:24:11.842 [2024-07-15 22:40:20.514664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:34192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.842 [2024-07-15 22:40:20.514672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:24:11.842 [2024-07-15 22:40:20.514685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:34200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.842 [2024-07-15 22:40:20.514694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 
00:24:11.842 [2024-07-15 22:40:20.514707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:34208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.842 [2024-07-15 22:40:20.514714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:24:11.842 [2024-07-15 22:40:20.514726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:34216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.842 [2024-07-15 22:40:20.514733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:24:11.842 [2024-07-15 22:40:20.514745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:34224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.842 [2024-07-15 22:40:20.514753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:24:11.842 [2024-07-15 22:40:20.514765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:34232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.842 [2024-07-15 22:40:20.514772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:24:11.842 [2024-07-15 22:40:20.514784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.842 [2024-07-15 22:40:20.514790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:24:11.842 [2024-07-15 22:40:20.514803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:34248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.842 [2024-07-15 22:40:20.514809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:24:11.842 [2024-07-15 22:40:20.514821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:34256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.842 [2024-07-15 22:40:20.514828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:11.842 [2024-07-15 22:40:20.514840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:34264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.842 [2024-07-15 22:40:20.514847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:24:11.842 [2024-07-15 22:40:20.514859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:34272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.842 [2024-07-15 22:40:20.514866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:24:11.842 [2024-07-15 22:40:20.514878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:34280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.842 [2024-07-15 22:40:20.514885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:126 cdw0:0 sqhd:003e p:0 m:0 dnr:0
00:24:11.843 [2024-07-15 22:40:20.514897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:34288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:11.843 [2024-07-15 22:40:20.514904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:003f p:0 m:0 dnr:0
[several hundred further command/completion NOTICE pairs elided: WRITE commands (len:8, SGL DATA BLOCK OFFSET 0x0 len:0x1000) and READ commands (len:8, SGL TRANSPORT DATA BLOCK TRANSPORT 0x0) on sqid:1 nsid:1, lba 33376-34392, each completed by spdk_nvme_print_completion with ASYMMETRIC ACCESS INACCESSIBLE (03/02), cdw0:0 p:0 m:0 dnr:0, sqhd advancing 0x0040 through 0x007f and wrapping through 0x0000; log offsets 00:24:11.843-00:24:11.848, timestamps 2024-07-15 22:40:20.514916 through 22:40:20.520653]
00:24:11.848 [2024-07-15 22:40:20.520653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:33888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.848 [2024-07-15 22:40:20.520660]
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:24:11.848 [2024-07-15 22:40:20.520675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:33896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.848 [2024-07-15 22:40:20.520682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:24:11.848 [2024-07-15 22:40:20.520695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:33904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.848 [2024-07-15 22:40:20.520702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:11.848 [2024-07-15 22:40:20.520714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:33912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.849 [2024-07-15 22:40:20.520721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:24:11.849 [2024-07-15 22:40:20.520735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:33920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.849 [2024-07-15 22:40:20.520742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:24:11.849 [2024-07-15 22:40:20.520755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:33928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.849 [2024-07-15 22:40:20.520762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:24:11.849 [2024-07-15 22:40:20.520775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:33936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.849 [2024-07-15 22:40:20.520782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:24:11.849 [2024-07-15 22:40:20.520795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:33944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.849 [2024-07-15 22:40:20.520803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:24:11.849 [2024-07-15 22:40:20.520816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:33952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.849 [2024-07-15 22:40:20.520823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:24:11.849 [2024-07-15 22:40:20.520835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:33960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.849 [2024-07-15 22:40:20.520842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:24:11.849 [2024-07-15 22:40:20.520855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:33968 len:8 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:24:11.849 [2024-07-15 22:40:20.520862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:24:11.849 [2024-07-15 22:40:20.520874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:33976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.849 [2024-07-15 22:40:20.520881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:24:11.849 [2024-07-15 22:40:20.520893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:33984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.849 [2024-07-15 22:40:20.520900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:24:11.849 [2024-07-15 22:40:20.520912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:33992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.849 [2024-07-15 22:40:20.520919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:24:11.849 [2024-07-15 22:40:20.520932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:34000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.849 [2024-07-15 22:40:20.520939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:24:11.849 [2024-07-15 22:40:20.520951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:34008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.849 [2024-07-15 22:40:20.520958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:24:11.849 [2024-07-15 22:40:20.520970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:34016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.849 [2024-07-15 22:40:20.520978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:24:11.849 [2024-07-15 22:40:20.520991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:34024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.849 [2024-07-15 22:40:20.520998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:24:11.849 [2024-07-15 22:40:20.521010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:34032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.849 [2024-07-15 22:40:20.521017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:24:11.849 [2024-07-15 22:40:20.521588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:34040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.849 [2024-07-15 22:40:20.521599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:24:11.849 [2024-07-15 22:40:20.521614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:49 nsid:1 lba:34048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.849 [2024-07-15 22:40:20.521621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:11.849 [2024-07-15 22:40:20.521634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:34056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.849 [2024-07-15 22:40:20.521641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:11.849 [2024-07-15 22:40:20.521653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:34064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.849 [2024-07-15 22:40:20.521659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:24:11.849 [2024-07-15 22:40:20.521671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:34072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.849 [2024-07-15 22:40:20.521679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:24:11.849 [2024-07-15 22:40:20.521691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:34080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.849 [2024-07-15 22:40:20.521698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:24:11.849 [2024-07-15 22:40:20.521710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:34088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.849 [2024-07-15 22:40:20.521717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:11.849 [2024-07-15 22:40:20.521729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:34096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.849 [2024-07-15 22:40:20.521736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:24:11.849 [2024-07-15 22:40:20.521748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:34104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.849 [2024-07-15 22:40:20.521755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:24:11.849 [2024-07-15 22:40:20.521767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:34112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.849 [2024-07-15 22:40:20.521776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:24:11.849 [2024-07-15 22:40:20.521789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:34120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.849 [2024-07-15 22:40:20.521795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:24:11.849 [2024-07-15 22:40:20.521808] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:34128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.849 [2024-07-15 22:40:20.521815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:24:11.849 [2024-07-15 22:40:20.521827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:34136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.849 [2024-07-15 22:40:20.521834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:24:11.849 [2024-07-15 22:40:20.521846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:34144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.849 [2024-07-15 22:40:20.521853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:11.849 [2024-07-15 22:40:20.521865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:34152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.849 [2024-07-15 22:40:20.521872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:11.849 [2024-07-15 22:40:20.521884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:34160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.849 [2024-07-15 22:40:20.521891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:24:11.849 [2024-07-15 22:40:20.521903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:34168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.849 [2024-07-15 22:40:20.521910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:24:11.849 [2024-07-15 22:40:20.522079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:34176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.849 [2024-07-15 22:40:20.522088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:24:11.849 [2024-07-15 22:40:20.522101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:34184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.849 [2024-07-15 22:40:20.522108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:24:11.849 [2024-07-15 22:40:20.522120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:34192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.849 [2024-07-15 22:40:20.522127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:24:11.849 [2024-07-15 22:40:20.522139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:34200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.849 [2024-07-15 22:40:20.522146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 
00:24:11.849 [2024-07-15 22:40:20.522158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:34208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.849 [2024-07-15 22:40:20.522165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:24:11.849 [2024-07-15 22:40:20.522179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:34216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.849 [2024-07-15 22:40:20.522186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:24:11.849 [2024-07-15 22:40:20.522198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:34224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.849 [2024-07-15 22:40:20.522204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:24:11.849 [2024-07-15 22:40:20.522217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:34232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.849 [2024-07-15 22:40:20.522228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:24:11.849 [2024-07-15 22:40:20.522241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.850 [2024-07-15 22:40:20.522248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:24:11.850 [2024-07-15 22:40:20.522260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:34248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.850 [2024-07-15 22:40:20.522266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:24:11.850 [2024-07-15 22:40:20.522279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:34256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.850 [2024-07-15 22:40:20.522285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:11.850 [2024-07-15 22:40:20.522298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:34264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.850 [2024-07-15 22:40:20.522304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:24:11.850 [2024-07-15 22:40:20.522316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:34272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.850 [2024-07-15 22:40:20.522323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:24:11.850 [2024-07-15 22:40:20.522335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:34280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.850 [2024-07-15 22:40:20.522342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:126 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:24:11.850 [2024-07-15 22:40:20.522355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:34288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.850 [2024-07-15 22:40:20.522361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:24:11.850 [2024-07-15 22:40:20.522373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:34296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.850 [2024-07-15 22:40:20.522381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:24:11.850 [2024-07-15 22:40:20.522393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:34304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.850 [2024-07-15 22:40:20.522400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:11.850 [2024-07-15 22:40:20.522414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:34312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.850 [2024-07-15 22:40:20.522421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:11.850 [2024-07-15 22:40:20.522433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:34320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.850 [2024-07-15 22:40:20.522439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:24:11.850 [2024-07-15 22:40:20.522452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:34328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.850 [2024-07-15 22:40:20.522458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:24:11.850 [2024-07-15 22:40:20.522470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:34336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.850 [2024-07-15 22:40:20.522477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:24:11.850 [2024-07-15 22:40:20.522489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:34344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.850 [2024-07-15 22:40:20.522496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:11.850 [2024-07-15 22:40:20.522508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:34352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.850 [2024-07-15 22:40:20.522515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:24:11.850 [2024-07-15 22:40:20.522527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:34360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.850 [2024-07-15 22:40:20.522534] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:24:11.850 [2024-07-15 22:40:20.522546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:34368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.850 [2024-07-15 22:40:20.522553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:24:11.850 [2024-07-15 22:40:20.522565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:34376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.850 [2024-07-15 22:40:20.522571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:24:11.850 [2024-07-15 22:40:20.522583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:34384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.850 [2024-07-15 22:40:20.522590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:11.850 [2024-07-15 22:40:20.522602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:33376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.850 [2024-07-15 22:40:20.522609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:24:11.850 [2024-07-15 22:40:20.522621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:33384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.850 [2024-07-15 22:40:20.522628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:24:11.850 [2024-07-15 22:40:20.522640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:33392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.850 [2024-07-15 22:40:20.522649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:24:11.850 [2024-07-15 22:40:20.522661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:33400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.850 [2024-07-15 22:40:20.522668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:24:11.850 [2024-07-15 22:40:20.522680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:33408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.850 [2024-07-15 22:40:20.522687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:24:11.850 [2024-07-15 22:40:20.522699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:33416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.850 [2024-07-15 22:40:20.522706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:24:11.850 [2024-07-15 22:40:20.522718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:33424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.850 [2024-07-15 
22:40:20.522725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:24:11.850 [2024-07-15 22:40:20.522738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:33432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.850 [2024-07-15 22:40:20.522744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:24:11.850 [2024-07-15 22:40:20.522757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:33440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.850 [2024-07-15 22:40:20.522764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:24:11.850 [2024-07-15 22:40:20.522776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:33448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.850 [2024-07-15 22:40:20.522783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:24:11.850 [2024-07-15 22:40:20.522795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:33456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.850 [2024-07-15 22:40:20.522802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:24:11.850 [2024-07-15 22:40:20.522814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:33464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.850 [2024-07-15 22:40:20.522820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:24:11.850 [2024-07-15 22:40:20.522833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:33472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.850 [2024-07-15 22:40:20.522839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:24:11.850 [2024-07-15 22:40:20.522852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:33480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.850 [2024-07-15 22:40:20.522858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:24:11.850 [2024-07-15 22:40:20.522871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:33488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.850 [2024-07-15 22:40:20.522879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:11.850 [2024-07-15 22:40:20.522891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:34392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.850 [2024-07-15 22:40:20.522898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:11.850 [2024-07-15 22:40:20.522910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:33496 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.851 [2024-07-15 22:40:20.522917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:24:11.851 [2024-07-15 22:40:20.522929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:33504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.851 [2024-07-15 22:40:20.522936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:24:11.851 [2024-07-15 22:40:20.522950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:33512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.851 [2024-07-15 22:40:20.522957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:24:11.851 [2024-07-15 22:40:20.523342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:33520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.851 [2024-07-15 22:40:20.523355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:11.851 [2024-07-15 22:40:20.523369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:33528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.851 [2024-07-15 22:40:20.523376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:24:11.851 [2024-07-15 22:40:20.523389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:33536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.851 [2024-07-15 22:40:20.523396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:11.851 [2024-07-15 22:40:20.523409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:33544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.851 [2024-07-15 22:40:20.523416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:11.851 [2024-07-15 22:40:20.523428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:33552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.851 [2024-07-15 22:40:20.523435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:24:11.851 [2024-07-15 22:40:20.523447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:33560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.851 [2024-07-15 22:40:20.523454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:24:11.851 [2024-07-15 22:40:20.523466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:33568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.851 [2024-07-15 22:40:20.523473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:24:11.851 [2024-07-15 22:40:20.523485] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:33576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.851 [2024-07-15 22:40:20.523491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:11.851 [2024-07-15 22:40:20.523506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:33584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.851 [2024-07-15 22:40:20.523514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:24:11.851 [2024-07-15 22:40:20.523526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:33592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.851 [2024-07-15 22:40:20.523532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:11.851 [2024-07-15 22:40:20.523545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:33600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.851 [2024-07-15 22:40:20.523551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:24:11.851 [2024-07-15 22:40:20.523564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:33608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.851 [2024-07-15 22:40:20.523570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:11.851 [2024-07-15 22:40:20.523582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:33616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.851 [2024-07-15 22:40:20.523589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:24:11.851 [2024-07-15 22:40:20.523601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:33624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.851 [2024-07-15 22:40:20.523608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:24:11.851 [2024-07-15 22:40:20.523620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:33632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.851 [2024-07-15 22:40:20.523627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:24:11.851 [2024-07-15 22:40:20.523639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:33640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.851 [2024-07-15 22:40:20.523646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:24:11.851 [2024-07-15 22:40:20.523658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:33648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.851 [2024-07-15 22:40:20.523666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:24:11.851 [2024-07-15 
22:40:20.523829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:33656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.851 [2024-07-15 22:40:20.523838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:24:11.851 [2024-07-15 22:40:20.523851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:33664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.851 [2024-07-15 22:40:20.523859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:24:11.851 [2024-07-15 22:40:20.523871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:33672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.851 [2024-07-15 22:40:20.523878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:24:11.851 [2024-07-15 22:40:20.523892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:33680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.851 [2024-07-15 22:40:20.523899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:11.851 [2024-07-15 22:40:20.523911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:33688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.851 [2024-07-15 22:40:20.523918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:24:11.851 [2024-07-15 22:40:20.523930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:33696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.851 [2024-07-15 22:40:20.523937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:11.851 [2024-07-15 22:40:20.523948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:33704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.851 [2024-07-15 22:40:20.523955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:24:11.851 [2024-07-15 22:40:20.523968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:33712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.851 [2024-07-15 22:40:20.523974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:24:11.851 [2024-07-15 22:40:20.523986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:33720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.851 [2024-07-15 22:40:20.523993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:24:11.851 [2024-07-15 22:40:20.524005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:33728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.851 [2024-07-15 22:40:20.524012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 
cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:11.851 [2024-07-15 22:40:20.524024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:33736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.851 [2024-07-15 22:40:20.524031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:11.851 [2024-07-15 22:40:20.524043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:33744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.851 [2024-07-15 22:40:20.524050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:11.852 [2024-07-15 22:40:20.524062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:33752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.852 [2024-07-15 22:40:20.524069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:11.852 [2024-07-15 22:40:20.524081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:33760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.852 [2024-07-15 22:40:20.524088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:11.852 [2024-07-15 22:40:20.524100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:33768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.852 [2024-07-15 22:40:20.524107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:11.852 [2024-07-15 22:40:20.524119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:33776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.852 [2024-07-15 22:40:20.524128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:11.852 [2024-07-15 22:40:20.524140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:33784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.852 [2024-07-15 22:40:20.524147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.852 [2024-07-15 22:40:20.524159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:33792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.852 [2024-07-15 22:40:20.524166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:11.852 [2024-07-15 22:40:20.524178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:33800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.852 [2024-07-15 22:40:20.524185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:11.852 [2024-07-15 22:40:20.524197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:33808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.852 [2024-07-15 22:40:20.524204] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:24:11.852 [2024-07-15 22:40:20.524216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:33816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.852 [2024-07-15 22:40:20.524223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:24:11.852 [2024-07-15 22:40:20.524240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:33824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.852 [2024-07-15 22:40:20.524246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:24:11.852 [2024-07-15 22:40:20.524258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:33832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.852 [2024-07-15 22:40:20.524265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:11.852 [2024-07-15 22:40:20.524278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:33840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.852 [2024-07-15 22:40:20.524284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:24:11.852 [2024-07-15 22:40:20.524296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:33848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.852 [2024-07-15 22:40:20.524303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:24:11.852 [2024-07-15 22:40:20.524315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:33856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.852 [2024-07-15 22:40:20.524322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:24:11.852 [2024-07-15 22:40:20.524334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:33864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.852 [2024-07-15 22:40:20.524341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:24:11.852 [2024-07-15 22:40:20.524353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:33872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.852 [2024-07-15 22:40:20.524361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:24:11.852 [2024-07-15 22:40:20.524373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:33880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.852 [2024-07-15 22:40:20.524380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:24:11.852 [2024-07-15 22:40:20.524392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:33888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.852 [2024-07-15 
22:40:20.524399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:24:11.852 [2024-07-15 22:40:20.524412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:33896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.852 [2024-07-15 22:40:20.524419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:24:11.852 [2024-07-15 22:40:20.524431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:33904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.852 [2024-07-15 22:40:20.524438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:11.852 [2024-07-15 22:40:20.524450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:33912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.852 [2024-07-15 22:40:20.524457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:24:11.852 [2024-07-15 22:40:20.524469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:33920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.852 [2024-07-15 22:40:20.524476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:24:11.852 [2024-07-15 22:40:20.524488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:33928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.852 [2024-07-15 22:40:20.524494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:24:11.852 [2024-07-15 22:40:20.524507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:33936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.852 [2024-07-15 22:40:20.524513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:24:11.852 [2024-07-15 22:40:20.524526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:33944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.852 [2024-07-15 22:40:20.524532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:24:11.852 [2024-07-15 22:40:20.524544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:33952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.852 [2024-07-15 22:40:20.524551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:24:11.852 [2024-07-15 22:40:20.524563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:33960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.852 [2024-07-15 22:40:20.524570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:24:11.852 [2024-07-15 22:40:20.524582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:33968 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:24:11.852 [2024-07-15 22:40:20.524589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:24:11.852 [2024-07-15 22:40:20.524603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:33976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.852 [2024-07-15 22:40:20.524610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:24:11.852 [2024-07-15 22:40:20.524622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:33984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.852 [2024-07-15 22:40:20.524629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:24:11.852 [2024-07-15 22:40:20.524641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:33992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.852 [2024-07-15 22:40:20.524647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:24:11.852 [2024-07-15 22:40:20.524659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:34000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.852 [2024-07-15 22:40:20.524666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:24:11.852 [2024-07-15 22:40:20.524678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:34008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.852 [2024-07-15 22:40:20.524685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:24:11.852 [2024-07-15 22:40:20.524697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:34016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.852 [2024-07-15 22:40:20.524703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:24:11.852 [2024-07-15 22:40:20.524716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:34024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.852 [2024-07-15 22:40:20.524723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:24:11.852 [2024-07-15 22:40:20.524735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:34032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.852 [2024-07-15 22:40:20.524744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:24:11.852 [2024-07-15 22:40:20.525367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:34040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.852 [2024-07-15 22:40:20.525378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:24:11.852 [2024-07-15 22:40:20.525392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:21 nsid:1 lba:34048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.852 [2024-07-15 22:40:20.525399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:11.852 [2024-07-15 22:40:20.525412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:34056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.853 [2024-07-15 22:40:20.525419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:11.853 [2024-07-15 22:40:20.525431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:34064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.853 [2024-07-15 22:40:20.525438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:24:11.853 [2024-07-15 22:40:20.525452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:34072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.853 [2024-07-15 22:40:20.525459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:24:11.853 [2024-07-15 22:40:20.525471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:34080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.853 [2024-07-15 22:40:20.525478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:24:11.853 [2024-07-15 22:40:20.525490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:34088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.853 [2024-07-15 22:40:20.525497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:11.853 [2024-07-15 22:40:20.525509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:34096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.853 [2024-07-15 22:40:20.525516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:24:11.853 [2024-07-15 22:40:20.525528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:34104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.853 [2024-07-15 22:40:20.525535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:24:11.853 [2024-07-15 22:40:20.525547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:34112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.853 [2024-07-15 22:40:20.525554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:24:11.853 [2024-07-15 22:40:20.525566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:34120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.853 [2024-07-15 22:40:20.525573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:24:11.853 [2024-07-15 22:40:20.525585] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:34128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.853 [2024-07-15 22:40:20.525592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:24:11.853 [2024-07-15 22:40:20.525604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:34136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.853 [2024-07-15 22:40:20.525611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:24:11.853 [2024-07-15 22:40:20.525623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:34144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.853 [2024-07-15 22:40:20.525630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:11.853 [2024-07-15 22:40:20.525643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:34152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.853 [2024-07-15 22:40:20.525649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:11.853 [2024-07-15 22:40:20.525661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:34160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.853 [2024-07-15 22:40:20.525669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:24:11.853 [2024-07-15 22:40:20.525681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:34168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.853 [2024-07-15 22:40:20.525691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:24:11.853 [2024-07-15 22:40:20.525853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:34176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.853 [2024-07-15 22:40:20.525862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:24:11.853 [2024-07-15 22:40:20.525875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:34184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.853 [2024-07-15 22:40:20.525882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:24:11.853 [2024-07-15 22:40:20.525894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:34192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.853 [2024-07-15 22:40:20.525901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:24:11.853 [2024-07-15 22:40:20.525913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:34200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.853 [2024-07-15 22:40:20.525921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0034 p:0 m:0 
dnr:0 00:24:11.853 [2024-07-15 22:40:20.525934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:34208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.853 [2024-07-15 22:40:20.525941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:24:11.853 [2024-07-15 22:40:20.525953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:34216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.853 [2024-07-15 22:40:20.525960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:24:11.853 [2024-07-15 22:40:20.525972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:34224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.853 [2024-07-15 22:40:20.525979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:24:11.853 [2024-07-15 22:40:20.525991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:34232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.853 [2024-07-15 22:40:20.525998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:24:11.853 [2024-07-15 22:40:20.526011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:34240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.853 [2024-07-15 22:40:20.526017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:24:11.853 [2024-07-15 22:40:20.526030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:34248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.853 [2024-07-15 22:40:20.526036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:24:11.853 [2024-07-15 22:40:20.526049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:34256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.853 [2024-07-15 22:40:20.526055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:11.853 [2024-07-15 22:40:20.526068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:34264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.853 [2024-07-15 22:40:20.526076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:24:11.853 [2024-07-15 22:40:20.526089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:34272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.853 [2024-07-15 22:40:20.526095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:24:11.853 [2024-07-15 22:40:20.526108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:34280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.853 [2024-07-15 22:40:20.526115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:1 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:24:11.853 [2024-07-15 22:40:20.526127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:34288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.853 [2024-07-15 22:40:20.526133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:24:11.853 [2024-07-15 22:40:20.526145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:34296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.853 [2024-07-15 22:40:20.526152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:24:11.853 [2024-07-15 22:40:20.526165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:34304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.853 [2024-07-15 22:40:20.526172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:11.853 [2024-07-15 22:40:20.526184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:34312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.853 [2024-07-15 22:40:20.526191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:11.853 [2024-07-15 22:40:20.526203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:34320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.853 [2024-07-15 22:40:20.526210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:24:11.853 [2024-07-15 22:40:20.526222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:34328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.854 [2024-07-15 22:40:20.526234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:24:11.854 [2024-07-15 22:40:20.526246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:34336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.854 [2024-07-15 22:40:20.526253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:24:11.854 [2024-07-15 22:40:20.526265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:34344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.854 [2024-07-15 22:40:20.526272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:11.854 [2024-07-15 22:40:20.526284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:34352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.854 [2024-07-15 22:40:20.526291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:24:11.854 [2024-07-15 22:40:20.526303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:34360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.854 [2024-07-15 22:40:20.526310] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:24:11.854 [2024-07-15 22:40:20.526324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:34368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.854 [2024-07-15 22:40:20.526330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:24:11.854 [2024-07-15 22:40:20.526343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:34376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.854 [2024-07-15 22:40:20.526350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:24:11.854 [2024-07-15 22:40:20.526362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:34384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.854 [2024-07-15 22:40:20.526369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:11.854 [2024-07-15 22:40:20.526381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:33376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.854 [2024-07-15 22:40:20.526388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:24:11.854 [2024-07-15 22:40:20.526400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:33384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.854 [2024-07-15 22:40:20.526407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:24:11.854 [2024-07-15 22:40:20.526419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:33392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.854 [2024-07-15 22:40:20.526426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:24:11.854 [2024-07-15 22:40:20.526438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:33400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.854 [2024-07-15 22:40:20.526445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:24:11.854 [2024-07-15 22:40:20.526457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:33408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.854 [2024-07-15 22:40:20.526464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:24:11.854 [2024-07-15 22:40:20.526476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:33416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.854 [2024-07-15 22:40:20.526483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:24:11.854 [2024-07-15 22:40:20.526496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:33424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:24:11.854 [2024-07-15 22:40:20.526503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:24:11.854 [2024-07-15 22:40:20.526515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:33432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.854 [2024-07-15 22:40:20.526522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:24:11.854 [2024-07-15 22:40:20.526534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:33440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.854 [2024-07-15 22:40:20.526541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:24:11.854 [2024-07-15 22:40:20.526555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:33448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.854 [2024-07-15 22:40:20.526562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:24:11.854 [2024-07-15 22:40:20.526574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:33456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.854 [2024-07-15 22:40:20.526581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:24:11.854 [2024-07-15 22:40:20.526593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:33464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.854 [2024-07-15 22:40:20.526600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:24:11.854 [2024-07-15 22:40:20.526612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:33472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.854 [2024-07-15 22:40:20.526619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:24:11.854 [2024-07-15 22:40:20.526631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:33480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.854 [2024-07-15 22:40:20.526638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:24:11.854 [2024-07-15 22:40:20.526651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:33488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.854 [2024-07-15 22:40:20.526658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:11.854 [2024-07-15 22:40:20.526670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:34392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.854 [2024-07-15 22:40:20.526677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:11.854 [2024-07-15 22:40:20.526689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 
nsid:1 lba:33496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.854 [2024-07-15 22:40:20.526696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:24:11.854 [2024-07-15 22:40:20.526708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:33504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.854 [2024-07-15 22:40:20.526715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:24:11.854 [2024-07-15 22:40:20.526727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:33512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.854 [2024-07-15 22:40:20.526734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:24:11.854 [2024-07-15 22:40:20.527107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:33520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.854 [2024-07-15 22:40:20.527116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:11.854 [2024-07-15 22:40:20.527130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:33528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.854 [2024-07-15 22:40:20.527136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:24:11.854 [2024-07-15 22:40:20.527149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:33536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.854 [2024-07-15 22:40:20.527158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:11.854 [2024-07-15 22:40:20.527170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:33544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.854 [2024-07-15 22:40:20.527177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:11.854 [2024-07-15 22:40:20.527189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:33552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.854 [2024-07-15 22:40:20.527196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:24:11.854 [2024-07-15 22:40:20.527208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:33560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.854 [2024-07-15 22:40:20.527215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:24:11.854 [2024-07-15 22:40:20.527232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:33568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.854 [2024-07-15 22:40:20.527239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:24:11.854 [2024-07-15 22:40:20.527251] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:33576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.854 [2024-07-15 22:40:20.527258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:11.854 [2024-07-15 22:40:20.527270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:33584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.854 [2024-07-15 22:40:20.527278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:24:11.854 [2024-07-15 22:40:20.527290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:33592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.854 [2024-07-15 22:40:20.527297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:11.854 [2024-07-15 22:40:20.527309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:33600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.854 [2024-07-15 22:40:20.527315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:24:11.854 [2024-07-15 22:40:20.527328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:33608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.854 [2024-07-15 22:40:20.527334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:11.854 [2024-07-15 22:40:20.527346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:33616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.854 [2024-07-15 22:40:20.527353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:24:11.854 [2024-07-15 22:40:20.527365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:33624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.854 [2024-07-15 22:40:20.527372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:24:11.854 [2024-07-15 22:40:20.527384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:33632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.854 [2024-07-15 22:40:20.527392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:24:11.855 [2024-07-15 22:40:20.527406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:33640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.855 [2024-07-15 22:40:20.527412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:24:11.855 [2024-07-15 22:40:20.527425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:33648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.855 [2024-07-15 22:40:20.527432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:006f p:0 m:0 dnr:0 
00:24:11.855 [2024-07-15 22:40:20.527604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:33656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.855 [2024-07-15 22:40:20.527613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:24:11.855 [2024-07-15 22:40:20.527627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:33664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.855 [2024-07-15 22:40:20.527634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:24:11.855 [2024-07-15 22:40:20.527646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:33672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.855 [2024-07-15 22:40:20.527653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:24:11.855 [2024-07-15 22:40:20.527665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:33680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.855 [2024-07-15 22:40:20.527672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:11.855 [2024-07-15 22:40:20.527684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:33688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.855 [2024-07-15 22:40:20.527691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:24:11.855 [2024-07-15 22:40:20.527704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:33696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.855 [2024-07-15 22:40:20.527710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:11.855 [2024-07-15 22:40:20.527723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:33704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.855 [2024-07-15 22:40:20.527729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:24:11.855 [2024-07-15 22:40:20.527741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:33712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.855 [2024-07-15 22:40:20.527748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:24:11.855 [2024-07-15 22:40:20.527761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:33720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.855 [2024-07-15 22:40:20.527767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:24:11.855 [2024-07-15 22:40:20.527780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:33728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.855 [2024-07-15 22:40:20.527786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:75 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:11.855 [2024-07-15 22:40:20.527801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:33736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.855 [2024-07-15 22:40:20.527807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:11.855 [2024-07-15 22:40:20.527819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:33744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.855 [2024-07-15 22:40:20.527826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:11.855 [2024-07-15 22:40:20.527838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:33752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.855 [2024-07-15 22:40:20.527845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:11.855 [2024-07-15 22:40:20.527857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:33760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.855 [2024-07-15 22:40:20.527864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:11.855 [2024-07-15 22:40:20.527876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:33768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.855 [2024-07-15 22:40:20.527883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:11.855 [2024-07-15 22:40:20.527895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:33776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.855 [2024-07-15 22:40:20.527901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:11.855 [2024-07-15 22:40:20.527913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:33784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.855 [2024-07-15 22:40:20.527920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.855 [2024-07-15 22:40:20.527933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:33792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.855 [2024-07-15 22:40:20.527939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:11.855 [2024-07-15 22:40:20.527951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:33800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.855 [2024-07-15 22:40:20.527958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:11.855 [2024-07-15 22:40:20.527970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:33808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.855 [2024-07-15 22:40:20.527977] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:24:11.855 [2024-07-15 22:40:20.527989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:33816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.855 [2024-07-15 22:40:20.527996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:24:11.855 [2024-07-15 22:40:20.528008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:33824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.855 [2024-07-15 22:40:20.528015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:24:11.855 [2024-07-15 22:40:20.528029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:33832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.855 [2024-07-15 22:40:20.528035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:11.855 [2024-07-15 22:40:20.528047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:33840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.855 [2024-07-15 22:40:20.528054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:24:11.855 [2024-07-15 22:40:20.528066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:33848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.855 [2024-07-15 22:40:20.528073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:24:11.855 [2024-07-15 22:40:20.528085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:33856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.855 [2024-07-15 22:40:20.528092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:24:11.855 [2024-07-15 22:40:20.528104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:33864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.855 [2024-07-15 22:40:20.528110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:24:11.855 [2024-07-15 22:40:20.528122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:33872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.855 [2024-07-15 22:40:20.528129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:24:11.855 [2024-07-15 22:40:20.528141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:33880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.855 [2024-07-15 22:40:20.528148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:24:11.855 [2024-07-15 22:40:20.528160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:33888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:24:11.855 [2024-07-15 22:40:20.528167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:24:11.855 [2024-07-15 22:40:20.528180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:33896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.855 [2024-07-15 22:40:20.528186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:24:11.855 [2024-07-15 22:40:20.528198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:33904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.855 [2024-07-15 22:40:20.528205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:11.855 [2024-07-15 22:40:20.528217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:33912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.855 [2024-07-15 22:40:20.528229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:24:11.855 [2024-07-15 22:40:20.528242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:33920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.855 [2024-07-15 22:40:20.528248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:24:11.855 [2024-07-15 22:40:20.528260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:33928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.855 [2024-07-15 22:40:20.528269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:24:11.855 [2024-07-15 22:40:20.528281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:33936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.855 [2024-07-15 22:40:20.528288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:24:11.855 [2024-07-15 22:40:20.531717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:33944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.855 [2024-07-15 22:40:20.531726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:24:11.855 [2024-07-15 22:40:20.531739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:33952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.855 [2024-07-15 22:40:20.531746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:24:11.855 [2024-07-15 22:40:20.531758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:33960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.855 [2024-07-15 22:40:20.531765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:24:11.856 [2024-07-15 22:40:20.531777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 
nsid:1 lba:33968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.856 [2024-07-15 22:40:20.531784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:24:11.856 [2024-07-15 22:40:20.531796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:33976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.856 [2024-07-15 22:40:20.531803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:24:11.856 [2024-07-15 22:40:20.531815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:33984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.856 [2024-07-15 22:40:20.531822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:24:11.856 [2024-07-15 22:40:20.531834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:33992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.856 [2024-07-15 22:40:20.531841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:24:11.856 [2024-07-15 22:40:20.531853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:34000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.856 [2024-07-15 22:40:20.531860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:24:11.856 [2024-07-15 22:40:20.531872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:34008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.856 [2024-07-15 22:40:20.531879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:24:11.856 [2024-07-15 22:40:20.531891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:34016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.856 [2024-07-15 22:40:20.531898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:24:11.856 [2024-07-15 22:40:20.531910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:34024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.856 [2024-07-15 22:40:20.531918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:24:11.856 [2024-07-15 22:40:20.531931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:34032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.856 [2024-07-15 22:40:20.531937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:24:11.856 [2024-07-15 22:40:20.532093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:34040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.856 [2024-07-15 22:40:20.532103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:24:11.856 [2024-07-15 22:40:20.532130] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:34048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.856 [2024-07-15 22:40:20.532137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:11.856 [2024-07-15 22:40:20.532153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:34056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.856 [2024-07-15 22:40:20.532160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:11.856 [2024-07-15 22:40:20.532175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:34064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.856 [2024-07-15 22:40:20.532182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:24:11.856 [2024-07-15 22:40:20.532197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:34072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.856 [2024-07-15 22:40:20.532204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:24:11.856 [2024-07-15 22:40:20.532219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:34080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.856 [2024-07-15 22:40:20.532232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:24:11.856 [2024-07-15 22:40:20.532248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:34088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.856 [2024-07-15 22:40:20.532255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:11.856 [2024-07-15 22:40:20.532270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:34096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.856 [2024-07-15 22:40:20.532277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:24:11.856 [2024-07-15 22:40:20.532292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:34104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.856 [2024-07-15 22:40:20.532299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:24:11.856 [2024-07-15 22:40:20.532315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:34112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.856 [2024-07-15 22:40:20.532322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:24:11.856 [2024-07-15 22:40:20.532337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:34120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.856 [2024-07-15 22:40:20.532344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:002a p:0 m:0 dnr:0 
00:24:11.856 [2024-07-15 22:40:20.532362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:34128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.856 [2024-07-15 22:40:20.532369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:24:11.856 [2024-07-15 22:40:20.532384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:34136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.856 [2024-07-15 22:40:20.532391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:24:11.856 [2024-07-15 22:40:20.532406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:34144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.856 [2024-07-15 22:40:20.532413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:11.856 [2024-07-15 22:40:20.532429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:34152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.856 [2024-07-15 22:40:20.532436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:11.856 [2024-07-15 22:40:20.532451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:34160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.856 [2024-07-15 22:40:20.532458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:24:11.856 [2024-07-15 22:40:20.532473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:34168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.856 [2024-07-15 22:40:20.532480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:24:11.856 [2024-07-15 22:40:20.532495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:34176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.856 [2024-07-15 22:40:20.532502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:24:11.856 [2024-07-15 22:40:20.532517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:34184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.856 [2024-07-15 22:40:20.532524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:24:11.856 [2024-07-15 22:40:20.532539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:34192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.856 [2024-07-15 22:40:20.532546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:24:11.856 [2024-07-15 22:40:20.532561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:34200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.856 [2024-07-15 22:40:20.532568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:27 cdw0:0 sqhd:0034 p:0 m:0 dnr:0
00:24:11.856 [2024-07-15 22:40:20.532583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:34208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:11.856 [2024-07-15 22:40:20.532590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0035 p:0 m:0 dnr:0
00:24:11.856 [... several hundred further qid:1 WRITE/READ command/completion pairs from 22:40:20 and 22:40:33 trimmed; each queued I/O completes with ASYMMETRIC ACCESS INACCESSIBLE (03/02), the expected status while the test holds the active path's ANA state inaccessible ...]
00:24:11.859 Received shutdown signal, test time was about 27.048295 seconds
00:24:11.859
00:24:11.859 Latency(us)
00:24:11.859 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:11.859 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:24:11.859 Verification LBA range: start 0x0 length 0x4000
00:24:11.859 Nvme0n1 : 27.05 10336.50 40.38 0.00 0.00 12362.19 983.04 3078254.41
00:24:11.859 ===================================================================================================================
00:24:11.859 Total : 10336.50 40.38 0.00 0.00 12362.19 983.04 3078254.41
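Note on the result above: the long run of (03/02) completions is the host-side symptom of the target flipping the listener's ANA state while verify-mode I/O is in flight, and the Fail/s column shows the test still finished clean. As a rough sketch of how such a flip is driven against a running SPDK target (the nvmf_subsystem_listener_set_ana_state RPC exists in SPDK's scripts/rpc.py, but the option spellings below are from memory and worth confirming with --help on the revision under test; address, port, and NQN are the ones used earlier in this log):

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  # Take the path away: queued and new I/O should now complete with
  # ASYMMETRIC ACCESS INACCESSIBLE (03/02), as seen above.
  $rpc nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
      -t tcp -a 10.0.0.2 -s 4420 -n inaccessible
  sleep 5
  # Give it back; completions return to success and I/O drains normally.
  $rpc nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
      -t tcp -a 10.0.0.2 -s 4420 -n optimized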
00:24:11.859 22:40:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:24:12.119 22:40:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT
00:24:12.119 22:40:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:24:12.119 22:40:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini
00:24:12.119 22:40:35 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@488 -- # nvmfcleanup
00:24:12.119 22:40:35 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # sync
00:24:12.119 22:40:35 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:24:12.119 22:40:35 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@120 -- # set +e
00:24:12.119 22:40:35 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # for i in {1..20}
00:24:12.119 22:40:35 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:24:12.119 rmmod nvme_tcp
00:24:12.119 rmmod nvme_fabrics
00:24:12.119 rmmod nvme_keyring
00:24:12.119 22:40:35 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:24:12.119 22:40:35 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set -e
00:24:12.119 22:40:35 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # return 0
00:24:12.119 22:40:35 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@489 -- # '[' -n 110816 ']'
00:24:12.119 22:40:35 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@490 -- # killprocess 110816
00:24:12.119 22:40:35 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@948 -- # '[' -z 110816 ']'
00:24:12.119 22:40:35 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # kill -0 110816
00:24:12.119 22:40:35 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # uname
00:24:12.119 22:40:35 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:24:12.119 22:40:35 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 110816
00:24:12.119 22:40:35 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:24:12.119 22:40:35 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:24:12.119 22:40:35 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # echo 'killing process with pid 110816'
00:24:12.119 killing process with pid 110816
00:24:12.119 22:40:35 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # kill 110816
00:24:12.119 22:40:35 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # wait 110816
00:24:12.379 22:40:36 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:24:12.379 22:40:36 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:24:12.379 22:40:36 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:24:12.379 22:40:36 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:24:12.379 22:40:36 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # remove_spdk_ns
00:24:12.379 22:40:36 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:24:12.379 22:40:36 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:24:12.379 22:40:36 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:24:14.309 22:40:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:24:14.309
00:24:14.309 real 0m38.882s
00:24:14.309 user 1m45.883s
00:24:14.309 sys 0m10.299s
00:24:14.309 22:40:38 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1124 -- # xtrace_disable
00:24:14.309 22:40:38 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x
00:24:14.309 ************************************
00:24:14.309 END TEST nvmf_host_multipath_status
00:24:14.309 ************************************
00:24:14.309 22:40:38 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0
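The killprocess trace just above is a small, reusable shell pattern: probe the pid with kill -0, refuse to signal a sudo wrapper, then kill and reap. A minimal standalone sketch of the same idea (illustrative, not SPDK's exact helper):

  killprocess() {
      local pid=$1
      kill -0 "$pid" 2>/dev/null || return 0          # already gone, nothing to do
      [ "$(ps --no-headers -o comm= "$pid")" = sudo ] && return 1   # never kill the wrapper
      echo "killing process with pid $pid"
      kill "$pid" && wait "$pid"                      # wait only reaps our own children
  }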
00:24:14.309 22:40:38 nvmf_tcp -- nvmf/nvmf.sh@103 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp
00:24:14.309 22:40:38 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']'
00:24:14.309 22:40:38 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable
00:24:14.309 22:40:38 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:24:14.566 ************************************
00:24:14.566 START TEST nvmf_discovery_remove_ifc
00:24:14.566 ************************************
00:24:14.566 22:40:38 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp
00:24:14.566 * Looking for test storage...
00:24:14.566 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host
00:24:14.566 22:40:38 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:24:14.566 22:40:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s
00:24:14.566 22:40:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:24:14.566 22:40:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:24:14.566 22:40:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:24:14.566 22:40:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:24:14.566 22:40:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:24:14.566 22:40:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:24:14.566 22:40:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:24:14.566 22:40:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:24:14.566 22:40:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:24:14.566 22:40:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:24:14.566 22:40:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
00:24:14.566 22:40:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562
00:24:14.566 22:40:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:24:14.566 22:40:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:24:14.566 22:40:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:24:14.566 22:40:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
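The nvme gen-hostnqn call above mints the UUID-based host NQN that later steps pass around via NVME_HOST. For reference, this is roughly how those values get consumed once a target is listening; the flags are standard nvme-cli, the address and port match this log, and the subsystem NQN is a hypothetical example following the nqn.2016-06.io.spdk:cnode prefix configured below (an illustration, not the test script itself):

  NVME_HOSTNQN=$(nvme gen-hostnqn)    # e.g. nqn.2014-08.org.nvmexpress:uuid:...
  nvme connect -t tcp -a 10.0.0.2 -s 4420 \
      -n nqn.2016-06.io.spdk:cnode1 --hostnqn="$NVME_HOSTNQN"
  nvme list-subsys                    # confirm the new controller shows up
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1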
00:24:14.566 22:40:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:24:14.566 22:40:38 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]]
00:24:14.566 22:40:38 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:24:14.566 22:40:38 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:24:14.566 22:40:38 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[... same golangci/protoc/go toolchain triple repeated ...]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:24:14.567 22:40:38 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:[... toolchain triple prepended again ahead of the value above ...]
00:24:14.567 22:40:38 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:[... toolchain triple prepended again ahead of the value above ...]
00:24:14.567 22:40:38 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH
00:24:14.567 22:40:38 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:[... full PATH as accumulated above ...]
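As the paths/export.sh trace shows, every sourced environment prepends the same toolchain directories again, so PATH accumulates duplicates. That is harmless, but when it matters an order-preserving dedup is a one-liner; a sketch:

  # Split PATH on ':', keep the first occurrence of each entry, reassemble.
  PATH=$(printf '%s' "$PATH" | awk -v RS=: -v ORS=: '!seen[$0]++' | sed 's/:$//')
  export PATH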
00:24:14.567 22:40:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@47 -- # : 0
00:24:14.567 22:40:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:24:14.567 22:40:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:24:14.567 22:40:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:24:14.567 22:40:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:24:14.567 22:40:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:24:14.567 22:40:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' -n '' ']'
00:24:14.567 22:40:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']'
00:24:14.567 22:40:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # have_pci_nics=0
00:24:14.567 22:40:38 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']'
00:24:14.567 22:40:38 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009
00:24:14.567 22:40:38 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery
00:24:14.567 22:40:38 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode
00:24:14.567 22:40:38 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test
00:24:14.567 22:40:38 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock
00:24:14.567 22:40:38 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit
00:24:14.567 22:40:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@441 -- # '[' -z tcp ']'
00:24:14.567 22:40:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:24:14.567 22:40:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@448 -- # prepare_net_devs
00:24:14.567 22:40:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # local -g is_hw=no
00:24:14.567 22:40:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@412 -- # remove_spdk_ns
00:24:14.567 22:40:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:24:14.567 22:40:38 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:24:14.567 22:40:38 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:24:14.567 22:40:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ phy != virt ]]
00:24:14.567 22:40:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs
00:24:14.567 22:40:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@285 -- # xtrace_disable
00:24:14.567 22:40:38 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x
00:24:19.833 22:40:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:24:19.833 22:40:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # pci_devs=()
00:24:19.833 22:40:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # local -a pci_devs
00:24:19.833 22:40:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # pci_net_devs=()
00:24:19.833 22:40:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # local -a pci_net_devs
00:24:19.833 22:40:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # pci_drivers=()
00:24:19.833 22:40:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # local -A pci_drivers
00:24:19.833 22:40:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # net_devs=()
00:24:19.833 22:40:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # local -ga net_devs
00:24:19.833 22:40:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # e810=()
00:24:19.833 22:40:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # local -ga e810
00:24:19.833 22:40:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # x722=()
00:24:19.833 22:40:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # local -ga x722
00:24:19.833 22:40:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # mlx=()
00:24:19.833 22:40:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # local -ga mlx
00:24:19.833 22:40:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:24:19.833 22:40:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:24:19.833 22:40:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:24:19.833 22:40:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:24:19.833 22:40:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:24:19.833 22:40:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:24:19.833 22:40:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:24:19.833 22:40:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:24:19.833 22:40:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:24:19.833 22:40:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:24:19.833 22:40:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:24:19.833 22:40:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}")
00:24:19.833 22:40:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]]
00:24:19.833 22:40:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]]
00:24:19.833 22:40:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]]
00:24:19.833 22:40:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}")
00:24:19.833 22:40:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@335 -- # (( 2 == 0 ))
00:24:19.833 22:40:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:24:19.833 22:40:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)'
00:24:19.833 Found 0000:86:00.0 (0x8086 - 0x159b)
00:24:19.833 22:40:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:24:19.833 22:40:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:24:19.833 22:40:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:24:19.833 22:40:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:24:19.833 22:40:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:24:19.833 22:40:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:24:19.833 22:40:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)'
00:24:19.833 Found 0000:86:00.1 (0x8086 - 0x159b)
00:24:19.833 22:40:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:24:19.833 22:40:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:24:19.833 22:40:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:24:19.834 22:40:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:24:19.834 22:40:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:24:19.834 22:40:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # (( 0 > 0 ))
00:24:19.834 22:40:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]]
00:24:19.834 22:40:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]]
00:24:19.834 22:40:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:24:19.834 22:40:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:24:19.834 22:40:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]]
00:24:19.834 22:40:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}"
00:24:19.834 22:40:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]]
00:24:19.834 22:40:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 ))
00:24:19.834 22:40:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:24:19.834 22:40:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0'
00:24:19.834 Found net devices under 0000:86:00.0: cvl_0_0
00:24:19.834 22:40:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
00:24:19.834 22:40:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:24:19.834 22:40:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:24:19.834 22:40:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]]
00:24:19.834 22:40:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}"
00:24:19.834 22:40:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]]
00:24:19.834 22:40:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 ))
00:24:19.834 22:40:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:24:19.834 22:40:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1'
00:24:19.834 Found net devices under 0000:86:00.1: cvl_0_1
00:24:19.834 22:40:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
00:24:19.834 22:40:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@404 -- # (( 2 == 0 ))
00:24:19.834 22:40:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # is_hw=yes
00:24:19.834 22:40:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ yes == yes ]]
00:24:19.834 22:40:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]]
00:24:19.834 22:40:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # nvmf_tcp_init
00:24:19.834 22:40:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1
00:24:19.834 22:40:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:24:19.834 22:40:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
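gather_supported_nvmf_pci_devs resolves each NIC's PCI address to its netdev name purely through sysfs, as the pci_net_devs glob above shows. The same lookup works interactively; a sketch using the first port found in this log (standard Linux sysfs paths):

  ls /sys/bus/pci/devices/0000:86:00.0/net/                         # -> cvl_0_0
  basename "$(readlink /sys/bus/pci/devices/0000:86:00.0/driver)"   # -> ice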
nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:19.834 22:40:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:19.834 22:40:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:19.834 22:40:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:19.834 22:40:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:19.834 22:40:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:19.834 22:40:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:19.834 22:40:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:19.834 22:40:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:19.834 22:40:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:19.834 22:40:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:19.834 22:40:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:19.834 22:40:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:19.834 22:40:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:19.834 22:40:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:19.834 22:40:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:19.834 22:40:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:19.834 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:19.834 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.162 ms 00:24:19.834 00:24:19.834 --- 10.0.0.2 ping statistics --- 00:24:19.834 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:19.834 rtt min/avg/max/mdev = 0.162/0.162/0.162/0.000 ms 00:24:19.834 22:40:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:19.834 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:19.834 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.205 ms 00:24:19.834 00:24:19.834 --- 10.0.0.1 ping statistics --- 00:24:19.834 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:19.834 rtt min/avg/max/mdev = 0.205/0.205/0.205/0.000 ms 00:24:19.834 22:40:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:19.834 22:40:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # return 0 00:24:19.834 22:40:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:19.834 22:40:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:19.834 22:40:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:19.834 22:40:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:19.834 22:40:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:19.834 22:40:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:19.834 22:40:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:19.834 22:40:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:24:19.834 22:40:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:19.834 22:40:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:19.834 22:40:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:19.834 22:40:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@481 -- # nvmfpid=119406 00:24:19.834 22:40:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:24:19.834 22:40:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # waitforlisten 119406 00:24:19.834 22:40:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@829 -- # '[' -z 119406 ']' 00:24:19.834 22:40:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:19.834 22:40:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:19.834 22:40:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:19.834 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:19.834 22:40:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:19.834 22:40:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:20.093 [2024-07-15 22:40:43.844516] Starting SPDK v24.09-pre git sha1 f8598a71f / DPDK 24.03.0 initialization... 
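The nvmf_tgt instance starting here (pid 119406, core mask 0x2) runs inside the network namespace that nvmf_tcp_init assembled in the trace above: the first e810 port, cvl_0_0, was moved into cvl_0_0_ns_spdk and addressed as the target side (10.0.0.2), while cvl_0_1 stayed in the default namespace as the initiator side (10.0.0.1), with both directions ping-verified. A minimal sketch of that topology, condensed from the commands in the trace (the cvl_* names are this rig's renamed ice netdevs):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                               # target port into its own ns
    ip addr add 10.0.0.1/24 dev cvl_0_1                                     # initiator side, default ns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0       # target side, inside the ns
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT            # let NVMe/TCP traffic in
    ping -c 1 10.0.0.2                                                      # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                        # target -> initiator

Because the target lives in the namespace, every target-side command from here on is wrapped in NVMF_TARGET_NS_CMD, i.e. ip netns exec cvl_0_0_ns_spdk.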
00:24:20.093 [2024-07-15 22:40:43.844562] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:20.093 EAL: No free 2048 kB hugepages reported on node 1 00:24:20.093 [2024-07-15 22:40:43.901751] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:20.093 [2024-07-15 22:40:43.981223] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:20.093 [2024-07-15 22:40:43.981261] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:20.093 [2024-07-15 22:40:43.981268] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:20.093 [2024-07-15 22:40:43.981274] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:20.093 [2024-07-15 22:40:43.981279] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:20.093 [2024-07-15 22:40:43.981296] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:21.030 22:40:44 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:21.030 22:40:44 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # return 0 00:24:21.030 22:40:44 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:21.030 22:40:44 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:21.030 22:40:44 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:21.030 22:40:44 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:21.030 22:40:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:24:21.030 22:40:44 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:21.030 22:40:44 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:21.030 [2024-07-15 22:40:44.692918] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:21.030 [2024-07-15 22:40:44.701059] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:24:21.030 null0 00:24:21.030 [2024-07-15 22:40:44.733056] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:21.030 22:40:44 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:21.030 22:40:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=119638 00:24:21.030 22:40:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:24:21.030 22:40:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 119638 /tmp/host.sock 00:24:21.030 22:40:44 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@829 -- # '[' -z 119638 ']' 00:24:21.030 22:40:44 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:24:21.030 22:40:44 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # local 
max_retries=100 00:24:21.030 22:40:44 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:24:21.030 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:24:21.030 22:40:44 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:21.030 22:40:44 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:21.030 [2024-07-15 22:40:44.800934] Starting SPDK v24.09-pre git sha1 f8598a71f / DPDK 24.03.0 initialization... 00:24:21.030 [2024-07-15 22:40:44.800973] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid119638 ] 00:24:21.030 EAL: No free 2048 kB hugepages reported on node 1 00:24:21.030 [2024-07-15 22:40:44.855506] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:21.030 [2024-07-15 22:40:44.935422] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:21.971 22:40:45 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:21.971 22:40:45 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # return 0 00:24:21.971 22:40:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:21.971 22:40:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:24:21.971 22:40:45 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:21.971 22:40:45 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:21.971 22:40:45 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:21.971 22:40:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:24:21.971 22:40:45 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:21.971 22:40:45 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:21.971 22:40:45 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:21.971 22:40:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:24:21.971 22:40:45 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:21.971 22:40:45 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:23.006 [2024-07-15 22:40:46.761839] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:24:23.006 [2024-07-15 22:40:46.761859] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:24:23.006 [2024-07-15 22:40:46.761873] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:24:23.006 [2024-07-15 22:40:46.890287] 
bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:24:23.266 [2024-07-15 22:40:47.035523] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:24:23.266 [2024-07-15 22:40:47.035568] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:24:23.266 [2024-07-15 22:40:47.035587] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:24:23.266 [2024-07-15 22:40:47.035599] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:24:23.266 [2024-07-15 22:40:47.035621] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:24:23.266 22:40:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:23.266 22:40:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:24:23.266 22:40:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:23.266 [2024-07-15 22:40:47.040560] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0xd10e30 was disconnected and freed. delete nvme_qpair. 00:24:23.266 22:40:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:23.266 22:40:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:23.266 22:40:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:23.266 22:40:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:23.266 22:40:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:23.266 22:40:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:23.266 22:40:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:23.266 22:40:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:24:23.266 22:40:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:24:23.266 22:40:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:24:23.266 22:40:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:24:23.266 22:40:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:23.266 22:40:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:23.266 22:40:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:23.266 22:40:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:23.266 22:40:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:23.266 22:40:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:23.266 22:40:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:23.266 22:40:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:23.266 22:40:47 
nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:23.266 22:40:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:24.645 22:40:48 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:24.645 22:40:48 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:24.645 22:40:48 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:24.645 22:40:48 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:24.645 22:40:48 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:24.645 22:40:48 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:24.645 22:40:48 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:24.645 22:40:48 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:24.645 22:40:48 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:24.645 22:40:48 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:25.581 22:40:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:25.581 22:40:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:25.581 22:40:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:25.581 22:40:49 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:25.581 22:40:49 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:25.581 22:40:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:25.581 22:40:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:25.581 22:40:49 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:25.581 22:40:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:25.581 22:40:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:26.518 22:40:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:26.518 22:40:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:26.518 22:40:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:26.518 22:40:50 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:26.518 22:40:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:26.518 22:40:50 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:26.518 22:40:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:26.518 22:40:50 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:26.518 22:40:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:26.518 22:40:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:27.457 22:40:51 
nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:27.457 22:40:51 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:27.457 22:40:51 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:27.457 22:40:51 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:27.457 22:40:51 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:27.457 22:40:51 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:27.457 22:40:51 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:27.457 22:40:51 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:27.457 22:40:51 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:27.457 22:40:51 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:28.836 22:40:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:28.836 22:40:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:28.836 22:40:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:28.836 22:40:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:28.836 22:40:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:28.836 22:40:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:28.836 22:40:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:28.836 22:40:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:28.836 22:40:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:28.836 22:40:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:28.836 [2024-07-15 22:40:52.486886] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:24:28.836 [2024-07-15 22:40:52.486926] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:28.836 [2024-07-15 22:40:52.486941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.836 [2024-07-15 22:40:52.486951] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:28.836 [2024-07-15 22:40:52.486957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.836 [2024-07-15 22:40:52.486964] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:28.836 [2024-07-15 22:40:52.486971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.836 [2024-07-15 22:40:52.486978] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: 
ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:28.836 [2024-07-15 22:40:52.486984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.836 [2024-07-15 22:40:52.486991] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:24:28.836 [2024-07-15 22:40:52.486997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.836 [2024-07-15 22:40:52.487004] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcd7690 is same with the state(5) to be set 00:24:28.836 [2024-07-15 22:40:52.496908] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcd7690 (9): Bad file descriptor 00:24:28.837 [2024-07-15 22:40:52.506947] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:24:29.784 22:40:53 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:29.784 22:40:53 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:29.784 22:40:53 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:29.784 22:40:53 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:29.784 22:40:53 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:29.784 22:40:53 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:29.784 22:40:53 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:29.784 [2024-07-15 22:40:53.538294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:24:29.784 [2024-07-15 22:40:53.538335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcd7690 with addr=10.0.0.2, port=4420 00:24:29.784 [2024-07-15 22:40:53.538349] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcd7690 is same with the state(5) to be set 00:24:29.784 [2024-07-15 22:40:53.538377] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcd7690 (9): Bad file descriptor 00:24:29.784 [2024-07-15 22:40:53.538777] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:29.784 [2024-07-15 22:40:53.538797] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:24:29.784 [2024-07-15 22:40:53.538806] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:24:29.784 [2024-07-15 22:40:53.538816] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:24:29.784 [2024-07-15 22:40:53.538833] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
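The errno 110 / "Bad file descriptor" churn above is the intended failure: discovery_remove_ifc.sh@75-76 deleted 10.0.0.2 from cvl_0_0 and downed the link, so the host's TCP qpair to port 4420 times out and bdev_nvme begins its reconnect cycle. While that runs, the script polls the host app's bdev list once per second. A minimal reconstruction of the @29/@33/@34 helpers seen in the trace (rpc_cmd wraps scripts/rpc.py against the given socket; the real helpers live in host/discovery_remove_ifc.sh and may differ in detail):

    get_bdev_list() {
        # Names of all bdevs the host app currently exposes, as one sorted line
        rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }

    wait_for_bdev() {
        # Poll until the bdev list equals the expected value ('' means "bdev gone")
        while [[ $(get_bdev_list) != "$1" ]]; do
            sleep 1
        done
    }

    wait_for_bdev ''    # @79: nvme0n1 must disappear once the interface is removed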
00:24:29.784 [2024-07-15 22:40:53.538843] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller
00:24:29.784 22:40:53 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:29.784 22:40:53 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]]
00:24:29.784 22:40:53 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1
00:24:30.717 [2024-07-15 22:40:54.541323] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state.
[2024-07-15 22:40:54.541346] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state
[2024-07-15 22:40:54.541353] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed
[2024-07-15 22:40:54.541360] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state
[2024-07-15 22:40:54.541371] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
[2024-07-15 22:40:54.541388] bdev_nvme.c:6734:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420
[2024-07-15 22:40:54.541409] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
[2024-07-15 22:40:54.541419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-07-15 22:40:54.541428] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
[2024-07-15 22:40:54.541435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-07-15 22:40:54.541442] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
[2024-07-15 22:40:54.541448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-07-15 22:40:54.541455] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
[2024-07-15 22:40:54.541461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-07-15 22:40:54.541469] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000
[2024-07-15 22:40:54.541476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-07-15 22:40:54.541482] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state.
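What ends the retry loop is the set of knobs passed to bdev_nvme_start_discovery back at @69: --reconnect-delay-sec 1 retries the connection every second, --fast-io-fail-timeout-sec 1 fails pending I/O after one second, and --ctrlr-loss-timeout-sec 2 deletes the controller outright once it has been unreachable for two seconds. That deletion is the remove_discovery_entry / "in failed state" sequence just logged, and it is the moment nvme0n1 vanishes from bdev_get_bdevs. The RPC as issued in this run (rpc_cmd targets the host app's /tmp/host.sock):

    rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp \
        -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test \
        --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 \
        --fast-io-fail-timeout-sec 1 --wait-for-attach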
00:24:30.717 [2024-07-15 22:40:54.541633] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcd6a80 (9): Bad file descriptor 00:24:30.717 [2024-07-15 22:40:54.542644] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:24:30.717 [2024-07-15 22:40:54.542653] nvme_ctrlr.c:1213:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:24:30.717 22:40:54 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:30.717 22:40:54 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:30.717 22:40:54 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:30.717 22:40:54 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:30.717 22:40:54 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:30.717 22:40:54 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:30.717 22:40:54 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:30.717 22:40:54 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:30.717 22:40:54 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:24:30.717 22:40:54 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:30.717 22:40:54 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:30.975 22:40:54 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:24:30.975 22:40:54 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:30.975 22:40:54 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:30.975 22:40:54 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:30.975 22:40:54 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:30.975 22:40:54 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:30.975 22:40:54 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:30.975 22:40:54 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:30.975 22:40:54 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:30.975 22:40:54 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:24:30.975 22:40:54 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:31.910 22:40:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:31.910 22:40:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:31.910 22:40:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:31.910 22:40:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:31.910 22:40:55 nvmf_tcp.nvmf_discovery_remove_ifc -- 
common/autotest_common.sh@10 -- # set +x 00:24:31.910 22:40:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:31.910 22:40:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:31.910 22:40:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:31.910 22:40:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:24:31.910 22:40:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:32.845 [2024-07-15 22:40:56.556106] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:24:32.845 [2024-07-15 22:40:56.556123] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:24:32.845 [2024-07-15 22:40:56.556135] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:24:32.845 [2024-07-15 22:40:56.644403] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:24:32.845 22:40:56 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:32.845 22:40:56 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:32.845 22:40:56 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:32.845 22:40:56 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:32.845 22:40:56 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:32.845 22:40:56 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:32.845 22:40:56 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:32.845 22:40:56 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:33.103 [2024-07-15 22:40:56.829047] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:24:33.103 [2024-07-15 22:40:56.829081] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:24:33.103 [2024-07-15 22:40:56.829100] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:24:33.103 [2024-07-15 22:40:56.829113] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:24:33.103 [2024-07-15 22:40:56.829123] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:24:33.103 [2024-07-15 22:40:56.835736] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0xced8d0 was disconnected and freed. delete nvme_qpair. 
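This is the recovery half of the test: discovery_remove_ifc.sh@82-83 put the address back and re-upped the link, the still-running discovery service reconnected to 10.0.0.2:8009, and the log-page attach produced a fresh controller, nvme1, whose namespace bdev nvme1n1 the @86 wait loop is now polling for. The restore step, as traced (reusing the wait helper sketched earlier; controller renumbering means the expected bdev name changes):

    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    wait_for_bdev nvme1n1    # @86: a new controller attach yields a new bdev name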
00:24:33.103 22:40:56 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:24:33.103 22:40:56 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:34.039 22:40:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:34.039 22:40:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:34.039 22:40:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:34.039 22:40:57 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:34.039 22:40:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:34.039 22:40:57 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:34.039 22:40:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:34.039 22:40:57 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:34.039 22:40:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:24:34.039 22:40:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:24:34.039 22:40:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 119638 00:24:34.039 22:40:57 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@948 -- # '[' -z 119638 ']' 00:24:34.039 22:40:57 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # kill -0 119638 00:24:34.039 22:40:57 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # uname 00:24:34.039 22:40:57 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:34.039 22:40:57 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 119638 00:24:34.039 22:40:57 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:24:34.039 22:40:57 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:24:34.039 22:40:57 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 119638' 00:24:34.039 killing process with pid 119638 00:24:34.039 22:40:57 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@967 -- # kill 119638 00:24:34.039 22:40:57 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # wait 119638 00:24:34.296 22:40:58 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:24:34.296 22:40:58 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:34.296 22:40:58 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@117 -- # sync 00:24:34.296 22:40:58 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:34.296 22:40:58 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@120 -- # set +e 00:24:34.296 22:40:58 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:34.296 22:40:58 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:34.296 rmmod nvme_tcp 00:24:34.296 rmmod nvme_fabrics 00:24:34.296 rmmod nvme_keyring 00:24:34.296 22:40:58 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 
-- # modprobe -v -r nvme-fabrics 00:24:34.296 22:40:58 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set -e 00:24:34.296 22:40:58 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # return 0 00:24:34.296 22:40:58 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@489 -- # '[' -n 119406 ']' 00:24:34.296 22:40:58 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@490 -- # killprocess 119406 00:24:34.296 22:40:58 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@948 -- # '[' -z 119406 ']' 00:24:34.296 22:40:58 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # kill -0 119406 00:24:34.296 22:40:58 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # uname 00:24:34.296 22:40:58 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:34.296 22:40:58 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 119406 00:24:34.296 22:40:58 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:24:34.296 22:40:58 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:24:34.296 22:40:58 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 119406' 00:24:34.296 killing process with pid 119406 00:24:34.296 22:40:58 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@967 -- # kill 119406 00:24:34.296 22:40:58 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # wait 119406 00:24:34.555 22:40:58 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:34.555 22:40:58 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:34.555 22:40:58 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:34.555 22:40:58 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:34.555 22:40:58 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:34.555 22:40:58 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:34.555 22:40:58 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:34.555 22:40:58 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:37.086 22:41:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:37.086 00:24:37.086 real 0m22.149s 00:24:37.086 user 0m28.853s 00:24:37.086 sys 0m5.395s 00:24:37.086 22:41:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:37.086 22:41:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:37.086 ************************************ 00:24:37.086 END TEST nvmf_discovery_remove_ifc 00:24:37.086 ************************************ 00:24:37.086 22:41:00 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:24:37.086 22:41:00 nvmf_tcp -- nvmf/nvmf.sh@104 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:24:37.086 22:41:00 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:24:37.086 22:41:00 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:37.086 22:41:00 
nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:37.086 ************************************ 00:24:37.086 START TEST nvmf_identify_kernel_target 00:24:37.086 ************************************ 00:24:37.086 22:41:00 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:24:37.086 * Looking for test storage... 00:24:37.086 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:37.086 22:41:00 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:37.086 22:41:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:24:37.086 22:41:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:37.086 22:41:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:37.086 22:41:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:37.086 22:41:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:37.086 22:41:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:37.086 22:41:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:37.086 22:41:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:37.086 22:41:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:37.086 22:41:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:37.086 22:41:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:37.086 22:41:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:24:37.086 22:41:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:24:37.086 22:41:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:37.086 22:41:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:37.086 22:41:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:37.086 22:41:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:37.086 22:41:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:37.086 22:41:00 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:37.086 22:41:00 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:37.086 22:41:00 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:37.086 22:41:00 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:37.086 22:41:00 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:37.086 22:41:00 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:37.086 22:41:00 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:24:37.086 22:41:00 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:37.086 22:41:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@47 -- # : 0 00:24:37.086 22:41:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:37.086 22:41:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:37.086 22:41:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:37.086 22:41:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:37.086 22:41:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:37.086 22:41:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:37.086 22:41:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:37.086 22:41:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:37.086 22:41:00 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:24:37.086 22:41:00 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:37.086 22:41:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:37.086 22:41:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:37.086 22:41:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:37.086 22:41:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:37.086 22:41:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:37.086 22:41:00 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:37.086 22:41:00 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:37.086 22:41:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:37.086 22:41:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:37.086 22:41:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@285 -- # xtrace_disable 00:24:37.087 22:41:00 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:24:42.363 22:41:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:42.363 22:41:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # pci_devs=() 00:24:42.363 22:41:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:42.363 22:41:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:42.364 22:41:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:42.364 22:41:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:42.364 22:41:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:42.364 22:41:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # net_devs=() 00:24:42.364 22:41:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:42.364 22:41:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # e810=() 00:24:42.364 22:41:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # local -ga e810 00:24:42.364 22:41:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # x722=() 00:24:42.364 22:41:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # local -ga x722 00:24:42.364 22:41:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # mlx=() 00:24:42.364 22:41:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # local -ga mlx 00:24:42.364 22:41:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:42.364 22:41:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:42.364 22:41:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:42.364 22:41:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:42.364 22:41:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:42.364 22:41:05 nvmf_tcp.nvmf_identify_kernel_target -- 
nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:42.364 22:41:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:42.364 22:41:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:42.364 22:41:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:42.364 22:41:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:42.364 22:41:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:42.364 22:41:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:42.364 22:41:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:42.364 22:41:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:42.364 22:41:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:42.364 22:41:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:42.364 22:41:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:42.364 22:41:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:42.364 22:41:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:24:42.364 Found 0000:86:00.0 (0x8086 - 0x159b) 00:24:42.364 22:41:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:42.364 22:41:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:42.364 22:41:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:42.364 22:41:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:42.364 22:41:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:42.364 22:41:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:42.364 22:41:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:24:42.364 Found 0000:86:00.1 (0x8086 - 0x159b) 00:24:42.364 22:41:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:42.364 22:41:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:42.364 22:41:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:42.364 22:41:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:42.364 22:41:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:42.364 22:41:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:42.364 22:41:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:42.364 22:41:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:42.364 22:41:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:42.364 22:41:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:42.364 22:41:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:42.364 22:41:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:42.364 22:41:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:42.364 22:41:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:42.364 22:41:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:42.364 22:41:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:24:42.364 Found net devices under 0000:86:00.0: cvl_0_0 00:24:42.364 22:41:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:42.364 22:41:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:42.364 22:41:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:42.364 22:41:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:42.364 22:41:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:42.364 22:41:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:42.364 22:41:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:42.364 22:41:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:42.364 22:41:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:24:42.364 Found net devices under 0000:86:00.1: cvl_0_1 00:24:42.364 22:41:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:42.364 22:41:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:42.364 22:41:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # is_hw=yes 00:24:42.364 22:41:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:42.364 22:41:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:42.364 22:41:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:42.364 22:41:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:42.364 22:41:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:42.364 22:41:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:42.364 22:41:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:42.364 22:41:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:42.364 22:41:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:42.364 22:41:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:42.364 22:41:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:42.364 22:41:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@243 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:42.364 22:41:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:42.364 22:41:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:42.364 22:41:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:42.364 22:41:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:42.364 22:41:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:42.364 22:41:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:42.364 22:41:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:42.364 22:41:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:42.364 22:41:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:42.364 22:41:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:42.364 22:41:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:42.364 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:42.364 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.197 ms 00:24:42.364 00:24:42.364 --- 10.0.0.2 ping statistics --- 00:24:42.364 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:42.364 rtt min/avg/max/mdev = 0.197/0.197/0.197/0.000 ms 00:24:42.364 22:41:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:42.364 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:42.364 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.244 ms 00:24:42.364 00:24:42.364 --- 10.0.0.1 ping statistics --- 00:24:42.364 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:42.364 rtt min/avg/max/mdev = 0.244/0.244/0.244/0.000 ms 00:24:42.364 22:41:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:42.364 22:41:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # return 0 00:24:42.364 22:41:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:42.364 22:41:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:42.364 22:41:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:42.364 22:41:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:42.364 22:41:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:42.364 22:41:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:42.364 22:41:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:42.364 22:41:05 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:24:42.364 22:41:05 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:24:42.364 22:41:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@741 -- # local ip 00:24:42.364 22:41:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:42.364 22:41:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:42.364 22:41:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:42.364 22:41:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:42.364 22:41:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:42.364 22:41:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:42.364 22:41:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:42.365 22:41:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:42.365 22:41:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:42.365 22:41:05 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:24:42.365 22:41:05 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:24:42.365 22:41:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:24:42.365 22:41:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:24:42.365 22:41:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:24:42.365 22:41:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:24:42.365 22:41:05 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:24:42.365 22:41:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@639 -- # local block nvme 00:24:42.365 22:41:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:24:42.365 22:41:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@642 -- # modprobe nvmet 00:24:42.365 22:41:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:24:42.365 22:41:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:24:43.744 Waiting for block devices as requested 00:24:44.003 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:24:44.004 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:24:44.004 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:24:44.004 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:24:44.262 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:24:44.262 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:24:44.262 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:24:44.262 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:24:44.521 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:24:44.521 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:24:44.521 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:24:44.521 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:24:44.781 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:24:44.781 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:24:44.781 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:24:45.040 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:24:45.040 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:24:45.040 22:41:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:24:45.040 22:41:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:24:45.040 22:41:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:24:45.040 22:41:08 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:24:45.040 22:41:08 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:24:45.040 22:41:08 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:24:45.040 22:41:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:24:45.040 22:41:08 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:24:45.040 22:41:08 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:24:45.040 No valid GPT data, bailing 00:24:45.040 22:41:08 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:24:45.040 22:41:08 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:24:45.040 22:41:08 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:24:45.040 22:41:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:24:45.040 22:41:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:24:45.040 22:41:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # mkdir 
/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:24:45.040 22:41:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:24:45.040 22:41:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:24:45.040 22:41:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:24:45.040 22:41:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # echo 1 00:24:45.040 22:41:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:24:45.040 22:41:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # echo 1 00:24:45.040 22:41:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:24:45.040 22:41:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@672 -- # echo tcp 00:24:45.040 22:41:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # echo 4420 00:24:45.040 22:41:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # echo ipv4 00:24:45.040 22:41:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:24:45.301 22:41:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:24:45.301 00:24:45.301 Discovery Log Number of Records 2, Generation counter 2 00:24:45.301 =====Discovery Log Entry 0====== 00:24:45.301 trtype: tcp 00:24:45.301 adrfam: ipv4 00:24:45.301 subtype: current discovery subsystem 00:24:45.301 treq: not specified, sq flow control disable supported 00:24:45.301 portid: 1 00:24:45.301 trsvcid: 4420 00:24:45.301 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:24:45.301 traddr: 10.0.0.1 00:24:45.301 eflags: none 00:24:45.301 sectype: none 00:24:45.301 =====Discovery Log Entry 1====== 00:24:45.301 trtype: tcp 00:24:45.301 adrfam: ipv4 00:24:45.301 subtype: nvme subsystem 00:24:45.301 treq: not specified, sq flow control disable supported 00:24:45.301 portid: 1 00:24:45.301 trsvcid: 4420 00:24:45.301 subnqn: nqn.2016-06.io.spdk:testnqn 00:24:45.301 traddr: 10.0.0.1 00:24:45.301 eflags: none 00:24:45.301 sectype: none 00:24:45.301 22:41:09 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:24:45.301 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:24:45.301 EAL: No free 2048 kB hugepages reported on node 1 00:24:45.301 ===================================================== 00:24:45.301 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:24:45.301 ===================================================== 00:24:45.301 Controller Capabilities/Features 00:24:45.301 ================================ 00:24:45.301 Vendor ID: 0000 00:24:45.301 Subsystem Vendor ID: 0000 00:24:45.301 Serial Number: 6d2f11105ca2dfd94769 00:24:45.301 Model Number: Linux 00:24:45.301 Firmware Version: 6.7.0-68 00:24:45.301 Recommended Arb Burst: 0 00:24:45.301 IEEE OUI Identifier: 00 00 00 00:24:45.301 Multi-path I/O 00:24:45.301 May have multiple subsystem ports: No 00:24:45.301 May have multiple 
controllers: No 00:24:45.301 Associated with SR-IOV VF: No 00:24:45.301 Max Data Transfer Size: Unlimited 00:24:45.301 Max Number of Namespaces: 0 00:24:45.301 Max Number of I/O Queues: 1024 00:24:45.301 NVMe Specification Version (VS): 1.3 00:24:45.301 NVMe Specification Version (Identify): 1.3 00:24:45.301 Maximum Queue Entries: 1024 00:24:45.301 Contiguous Queues Required: No 00:24:45.301 Arbitration Mechanisms Supported 00:24:45.301 Weighted Round Robin: Not Supported 00:24:45.301 Vendor Specific: Not Supported 00:24:45.301 Reset Timeout: 7500 ms 00:24:45.301 Doorbell Stride: 4 bytes 00:24:45.301 NVM Subsystem Reset: Not Supported 00:24:45.301 Command Sets Supported 00:24:45.301 NVM Command Set: Supported 00:24:45.301 Boot Partition: Not Supported 00:24:45.301 Memory Page Size Minimum: 4096 bytes 00:24:45.301 Memory Page Size Maximum: 4096 bytes 00:24:45.301 Persistent Memory Region: Not Supported 00:24:45.301 Optional Asynchronous Events Supported 00:24:45.301 Namespace Attribute Notices: Not Supported 00:24:45.301 Firmware Activation Notices: Not Supported 00:24:45.301 ANA Change Notices: Not Supported 00:24:45.301 PLE Aggregate Log Change Notices: Not Supported 00:24:45.301 LBA Status Info Alert Notices: Not Supported 00:24:45.301 EGE Aggregate Log Change Notices: Not Supported 00:24:45.301 Normal NVM Subsystem Shutdown event: Not Supported 00:24:45.301 Zone Descriptor Change Notices: Not Supported 00:24:45.301 Discovery Log Change Notices: Supported 00:24:45.301 Controller Attributes 00:24:45.301 128-bit Host Identifier: Not Supported 00:24:45.301 Non-Operational Permissive Mode: Not Supported 00:24:45.301 NVM Sets: Not Supported 00:24:45.301 Read Recovery Levels: Not Supported 00:24:45.301 Endurance Groups: Not Supported 00:24:45.301 Predictable Latency Mode: Not Supported 00:24:45.301 Traffic Based Keep ALive: Not Supported 00:24:45.301 Namespace Granularity: Not Supported 00:24:45.301 SQ Associations: Not Supported 00:24:45.301 UUID List: Not Supported 00:24:45.301 Multi-Domain Subsystem: Not Supported 00:24:45.301 Fixed Capacity Management: Not Supported 00:24:45.301 Variable Capacity Management: Not Supported 00:24:45.301 Delete Endurance Group: Not Supported 00:24:45.301 Delete NVM Set: Not Supported 00:24:45.301 Extended LBA Formats Supported: Not Supported 00:24:45.301 Flexible Data Placement Supported: Not Supported 00:24:45.301 00:24:45.301 Controller Memory Buffer Support 00:24:45.301 ================================ 00:24:45.301 Supported: No 00:24:45.301 00:24:45.301 Persistent Memory Region Support 00:24:45.301 ================================ 00:24:45.301 Supported: No 00:24:45.301 00:24:45.301 Admin Command Set Attributes 00:24:45.301 ============================ 00:24:45.301 Security Send/Receive: Not Supported 00:24:45.301 Format NVM: Not Supported 00:24:45.301 Firmware Activate/Download: Not Supported 00:24:45.301 Namespace Management: Not Supported 00:24:45.301 Device Self-Test: Not Supported 00:24:45.301 Directives: Not Supported 00:24:45.301 NVMe-MI: Not Supported 00:24:45.301 Virtualization Management: Not Supported 00:24:45.301 Doorbell Buffer Config: Not Supported 00:24:45.301 Get LBA Status Capability: Not Supported 00:24:45.301 Command & Feature Lockdown Capability: Not Supported 00:24:45.301 Abort Command Limit: 1 00:24:45.301 Async Event Request Limit: 1 00:24:45.301 Number of Firmware Slots: N/A 00:24:45.301 Firmware Slot 1 Read-Only: N/A 00:24:45.301 Firmware Activation Without Reset: N/A 00:24:45.301 Multiple Update Detection Support: N/A 
00:24:45.301 Firmware Update Granularity: No Information Provided 00:24:45.301 Per-Namespace SMART Log: No 00:24:45.301 Asymmetric Namespace Access Log Page: Not Supported 00:24:45.301 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:24:45.301 Command Effects Log Page: Not Supported 00:24:45.301 Get Log Page Extended Data: Supported 00:24:45.301 Telemetry Log Pages: Not Supported 00:24:45.301 Persistent Event Log Pages: Not Supported 00:24:45.301 Supported Log Pages Log Page: May Support 00:24:45.302 Commands Supported & Effects Log Page: Not Supported 00:24:45.302 Feature Identifiers & Effects Log Page:May Support 00:24:45.302 NVMe-MI Commands & Effects Log Page: May Support 00:24:45.302 Data Area 4 for Telemetry Log: Not Supported 00:24:45.302 Error Log Page Entries Supported: 1 00:24:45.302 Keep Alive: Not Supported 00:24:45.302 00:24:45.302 NVM Command Set Attributes 00:24:45.302 ========================== 00:24:45.302 Submission Queue Entry Size 00:24:45.302 Max: 1 00:24:45.302 Min: 1 00:24:45.302 Completion Queue Entry Size 00:24:45.302 Max: 1 00:24:45.302 Min: 1 00:24:45.302 Number of Namespaces: 0 00:24:45.302 Compare Command: Not Supported 00:24:45.302 Write Uncorrectable Command: Not Supported 00:24:45.302 Dataset Management Command: Not Supported 00:24:45.302 Write Zeroes Command: Not Supported 00:24:45.302 Set Features Save Field: Not Supported 00:24:45.302 Reservations: Not Supported 00:24:45.302 Timestamp: Not Supported 00:24:45.302 Copy: Not Supported 00:24:45.302 Volatile Write Cache: Not Present 00:24:45.302 Atomic Write Unit (Normal): 1 00:24:45.302 Atomic Write Unit (PFail): 1 00:24:45.302 Atomic Compare & Write Unit: 1 00:24:45.302 Fused Compare & Write: Not Supported 00:24:45.302 Scatter-Gather List 00:24:45.302 SGL Command Set: Supported 00:24:45.302 SGL Keyed: Not Supported 00:24:45.302 SGL Bit Bucket Descriptor: Not Supported 00:24:45.302 SGL Metadata Pointer: Not Supported 00:24:45.302 Oversized SGL: Not Supported 00:24:45.302 SGL Metadata Address: Not Supported 00:24:45.302 SGL Offset: Supported 00:24:45.302 Transport SGL Data Block: Not Supported 00:24:45.302 Replay Protected Memory Block: Not Supported 00:24:45.302 00:24:45.302 Firmware Slot Information 00:24:45.302 ========================= 00:24:45.302 Active slot: 0 00:24:45.302 00:24:45.302 00:24:45.302 Error Log 00:24:45.302 ========= 00:24:45.302 00:24:45.302 Active Namespaces 00:24:45.302 ================= 00:24:45.302 Discovery Log Page 00:24:45.302 ================== 00:24:45.302 Generation Counter: 2 00:24:45.302 Number of Records: 2 00:24:45.302 Record Format: 0 00:24:45.302 00:24:45.302 Discovery Log Entry 0 00:24:45.302 ---------------------- 00:24:45.302 Transport Type: 3 (TCP) 00:24:45.302 Address Family: 1 (IPv4) 00:24:45.302 Subsystem Type: 3 (Current Discovery Subsystem) 00:24:45.302 Entry Flags: 00:24:45.302 Duplicate Returned Information: 0 00:24:45.302 Explicit Persistent Connection Support for Discovery: 0 00:24:45.302 Transport Requirements: 00:24:45.302 Secure Channel: Not Specified 00:24:45.302 Port ID: 1 (0x0001) 00:24:45.302 Controller ID: 65535 (0xffff) 00:24:45.302 Admin Max SQ Size: 32 00:24:45.302 Transport Service Identifier: 4420 00:24:45.302 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:24:45.302 Transport Address: 10.0.0.1 00:24:45.302 Discovery Log Entry 1 00:24:45.302 ---------------------- 00:24:45.302 Transport Type: 3 (TCP) 00:24:45.302 Address Family: 1 (IPv4) 00:24:45.302 Subsystem Type: 2 (NVM Subsystem) 00:24:45.302 Entry Flags: 
00:24:45.302 Duplicate Returned Information: 0 00:24:45.302 Explicit Persistent Connection Support for Discovery: 0 00:24:45.302 Transport Requirements: 00:24:45.302 Secure Channel: Not Specified 00:24:45.302 Port ID: 1 (0x0001) 00:24:45.302 Controller ID: 65535 (0xffff) 00:24:45.302 Admin Max SQ Size: 32 00:24:45.302 Transport Service Identifier: 4420 00:24:45.302 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:24:45.302 Transport Address: 10.0.0.1 00:24:45.302 22:41:09 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:24:45.302 EAL: No free 2048 kB hugepages reported on node 1 00:24:45.302 get_feature(0x01) failed 00:24:45.302 get_feature(0x02) failed 00:24:45.302 get_feature(0x04) failed 00:24:45.302 ===================================================== 00:24:45.302 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:24:45.302 ===================================================== 00:24:45.302 Controller Capabilities/Features 00:24:45.302 ================================ 00:24:45.302 Vendor ID: 0000 00:24:45.302 Subsystem Vendor ID: 0000 00:24:45.302 Serial Number: b41cdfc3e1c16ecb41a9 00:24:45.302 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:24:45.302 Firmware Version: 6.7.0-68 00:24:45.302 Recommended Arb Burst: 6 00:24:45.302 IEEE OUI Identifier: 00 00 00 00:24:45.302 Multi-path I/O 00:24:45.302 May have multiple subsystem ports: Yes 00:24:45.302 May have multiple controllers: Yes 00:24:45.302 Associated with SR-IOV VF: No 00:24:45.302 Max Data Transfer Size: Unlimited 00:24:45.302 Max Number of Namespaces: 1024 00:24:45.302 Max Number of I/O Queues: 128 00:24:45.302 NVMe Specification Version (VS): 1.3 00:24:45.302 NVMe Specification Version (Identify): 1.3 00:24:45.302 Maximum Queue Entries: 1024 00:24:45.302 Contiguous Queues Required: No 00:24:45.302 Arbitration Mechanisms Supported 00:24:45.302 Weighted Round Robin: Not Supported 00:24:45.302 Vendor Specific: Not Supported 00:24:45.302 Reset Timeout: 7500 ms 00:24:45.302 Doorbell Stride: 4 bytes 00:24:45.302 NVM Subsystem Reset: Not Supported 00:24:45.302 Command Sets Supported 00:24:45.302 NVM Command Set: Supported 00:24:45.302 Boot Partition: Not Supported 00:24:45.302 Memory Page Size Minimum: 4096 bytes 00:24:45.302 Memory Page Size Maximum: 4096 bytes 00:24:45.302 Persistent Memory Region: Not Supported 00:24:45.302 Optional Asynchronous Events Supported 00:24:45.302 Namespace Attribute Notices: Supported 00:24:45.302 Firmware Activation Notices: Not Supported 00:24:45.302 ANA Change Notices: Supported 00:24:45.302 PLE Aggregate Log Change Notices: Not Supported 00:24:45.302 LBA Status Info Alert Notices: Not Supported 00:24:45.302 EGE Aggregate Log Change Notices: Not Supported 00:24:45.302 Normal NVM Subsystem Shutdown event: Not Supported 00:24:45.302 Zone Descriptor Change Notices: Not Supported 00:24:45.302 Discovery Log Change Notices: Not Supported 00:24:45.302 Controller Attributes 00:24:45.302 128-bit Host Identifier: Supported 00:24:45.302 Non-Operational Permissive Mode: Not Supported 00:24:45.302 NVM Sets: Not Supported 00:24:45.302 Read Recovery Levels: Not Supported 00:24:45.302 Endurance Groups: Not Supported 00:24:45.302 Predictable Latency Mode: Not Supported 00:24:45.302 Traffic Based Keep ALive: Supported 00:24:45.302 Namespace Granularity: Not Supported 
00:24:45.302 SQ Associations: Not Supported 00:24:45.302 UUID List: Not Supported 00:24:45.302 Multi-Domain Subsystem: Not Supported 00:24:45.302 Fixed Capacity Management: Not Supported 00:24:45.302 Variable Capacity Management: Not Supported 00:24:45.302 Delete Endurance Group: Not Supported 00:24:45.302 Delete NVM Set: Not Supported 00:24:45.302 Extended LBA Formats Supported: Not Supported 00:24:45.302 Flexible Data Placement Supported: Not Supported 00:24:45.302 00:24:45.302 Controller Memory Buffer Support 00:24:45.302 ================================ 00:24:45.302 Supported: No 00:24:45.302 00:24:45.302 Persistent Memory Region Support 00:24:45.302 ================================ 00:24:45.302 Supported: No 00:24:45.302 00:24:45.302 Admin Command Set Attributes 00:24:45.302 ============================ 00:24:45.302 Security Send/Receive: Not Supported 00:24:45.302 Format NVM: Not Supported 00:24:45.302 Firmware Activate/Download: Not Supported 00:24:45.302 Namespace Management: Not Supported 00:24:45.303 Device Self-Test: Not Supported 00:24:45.303 Directives: Not Supported 00:24:45.303 NVMe-MI: Not Supported 00:24:45.303 Virtualization Management: Not Supported 00:24:45.303 Doorbell Buffer Config: Not Supported 00:24:45.303 Get LBA Status Capability: Not Supported 00:24:45.303 Command & Feature Lockdown Capability: Not Supported 00:24:45.303 Abort Command Limit: 4 00:24:45.303 Async Event Request Limit: 4 00:24:45.303 Number of Firmware Slots: N/A 00:24:45.303 Firmware Slot 1 Read-Only: N/A 00:24:45.303 Firmware Activation Without Reset: N/A 00:24:45.303 Multiple Update Detection Support: N/A 00:24:45.303 Firmware Update Granularity: No Information Provided 00:24:45.303 Per-Namespace SMART Log: Yes 00:24:45.303 Asymmetric Namespace Access Log Page: Supported 00:24:45.303 ANA Transition Time : 10 sec 00:24:45.303 00:24:45.303 Asymmetric Namespace Access Capabilities 00:24:45.303 ANA Optimized State : Supported 00:24:45.303 ANA Non-Optimized State : Supported 00:24:45.303 ANA Inaccessible State : Supported 00:24:45.303 ANA Persistent Loss State : Supported 00:24:45.303 ANA Change State : Supported 00:24:45.303 ANAGRPID is not changed : No 00:24:45.303 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:24:45.303 00:24:45.303 ANA Group Identifier Maximum : 128 00:24:45.303 Number of ANA Group Identifiers : 128 00:24:45.303 Max Number of Allowed Namespaces : 1024 00:24:45.303 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:24:45.303 Command Effects Log Page: Supported 00:24:45.303 Get Log Page Extended Data: Supported 00:24:45.303 Telemetry Log Pages: Not Supported 00:24:45.303 Persistent Event Log Pages: Not Supported 00:24:45.303 Supported Log Pages Log Page: May Support 00:24:45.303 Commands Supported & Effects Log Page: Not Supported 00:24:45.303 Feature Identifiers & Effects Log Page:May Support 00:24:45.303 NVMe-MI Commands & Effects Log Page: May Support 00:24:45.303 Data Area 4 for Telemetry Log: Not Supported 00:24:45.303 Error Log Page Entries Supported: 128 00:24:45.303 Keep Alive: Supported 00:24:45.303 Keep Alive Granularity: 1000 ms 00:24:45.303 00:24:45.303 NVM Command Set Attributes 00:24:45.303 ========================== 00:24:45.303 Submission Queue Entry Size 00:24:45.303 Max: 64 00:24:45.303 Min: 64 00:24:45.303 Completion Queue Entry Size 00:24:45.303 Max: 16 00:24:45.303 Min: 16 00:24:45.303 Number of Namespaces: 1024 00:24:45.303 Compare Command: Not Supported 00:24:45.303 Write Uncorrectable Command: Not Supported 00:24:45.303 Dataset Management Command: Supported 
00:24:45.303 Write Zeroes Command: Supported 00:24:45.303 Set Features Save Field: Not Supported 00:24:45.303 Reservations: Not Supported 00:24:45.303 Timestamp: Not Supported 00:24:45.303 Copy: Not Supported 00:24:45.303 Volatile Write Cache: Present 00:24:45.303 Atomic Write Unit (Normal): 1 00:24:45.303 Atomic Write Unit (PFail): 1 00:24:45.303 Atomic Compare & Write Unit: 1 00:24:45.303 Fused Compare & Write: Not Supported 00:24:45.303 Scatter-Gather List 00:24:45.303 SGL Command Set: Supported 00:24:45.303 SGL Keyed: Not Supported 00:24:45.303 SGL Bit Bucket Descriptor: Not Supported 00:24:45.303 SGL Metadata Pointer: Not Supported 00:24:45.303 Oversized SGL: Not Supported 00:24:45.303 SGL Metadata Address: Not Supported 00:24:45.303 SGL Offset: Supported 00:24:45.303 Transport SGL Data Block: Not Supported 00:24:45.303 Replay Protected Memory Block: Not Supported 00:24:45.303 00:24:45.303 Firmware Slot Information 00:24:45.303 ========================= 00:24:45.303 Active slot: 0 00:24:45.303 00:24:45.303 Asymmetric Namespace Access 00:24:45.303 =========================== 00:24:45.303 Change Count : 0 00:24:45.303 Number of ANA Group Descriptors : 1 00:24:45.303 ANA Group Descriptor : 0 00:24:45.303 ANA Group ID : 1 00:24:45.303 Number of NSID Values : 1 00:24:45.303 Change Count : 0 00:24:45.303 ANA State : 1 00:24:45.303 Namespace Identifier : 1 00:24:45.303 00:24:45.303 Commands Supported and Effects 00:24:45.303 ============================== 00:24:45.303 Admin Commands 00:24:45.303 -------------- 00:24:45.303 Get Log Page (02h): Supported 00:24:45.303 Identify (06h): Supported 00:24:45.303 Abort (08h): Supported 00:24:45.303 Set Features (09h): Supported 00:24:45.303 Get Features (0Ah): Supported 00:24:45.303 Asynchronous Event Request (0Ch): Supported 00:24:45.303 Keep Alive (18h): Supported 00:24:45.303 I/O Commands 00:24:45.303 ------------ 00:24:45.303 Flush (00h): Supported 00:24:45.303 Write (01h): Supported LBA-Change 00:24:45.303 Read (02h): Supported 00:24:45.303 Write Zeroes (08h): Supported LBA-Change 00:24:45.303 Dataset Management (09h): Supported 00:24:45.303 00:24:45.303 Error Log 00:24:45.303 ========= 00:24:45.303 Entry: 0 00:24:45.303 Error Count: 0x3 00:24:45.303 Submission Queue Id: 0x0 00:24:45.303 Command Id: 0x5 00:24:45.303 Phase Bit: 0 00:24:45.303 Status Code: 0x2 00:24:45.303 Status Code Type: 0x0 00:24:45.303 Do Not Retry: 1 00:24:45.303 Error Location: 0x28 00:24:45.303 LBA: 0x0 00:24:45.303 Namespace: 0x0 00:24:45.303 Vendor Log Page: 0x0 00:24:45.303 ----------- 00:24:45.303 Entry: 1 00:24:45.303 Error Count: 0x2 00:24:45.303 Submission Queue Id: 0x0 00:24:45.303 Command Id: 0x5 00:24:45.303 Phase Bit: 0 00:24:45.303 Status Code: 0x2 00:24:45.303 Status Code Type: 0x0 00:24:45.303 Do Not Retry: 1 00:24:45.303 Error Location: 0x28 00:24:45.303 LBA: 0x0 00:24:45.303 Namespace: 0x0 00:24:45.303 Vendor Log Page: 0x0 00:24:45.303 ----------- 00:24:45.303 Entry: 2 00:24:45.303 Error Count: 0x1 00:24:45.303 Submission Queue Id: 0x0 00:24:45.303 Command Id: 0x4 00:24:45.303 Phase Bit: 0 00:24:45.303 Status Code: 0x2 00:24:45.303 Status Code Type: 0x0 00:24:45.303 Do Not Retry: 1 00:24:45.303 Error Location: 0x28 00:24:45.303 LBA: 0x0 00:24:45.303 Namespace: 0x0 00:24:45.303 Vendor Log Page: 0x0 00:24:45.303 00:24:45.303 Number of Queues 00:24:45.303 ================ 00:24:45.303 Number of I/O Submission Queues: 128 00:24:45.303 Number of I/O Completion Queues: 128 00:24:45.303 00:24:45.303 ZNS Specific Controller Data 00:24:45.303 
============================ 00:24:45.303 Zone Append Size Limit: 0 00:24:45.303 00:24:45.303 00:24:45.303 Active Namespaces 00:24:45.303 ================= 00:24:45.303 get_feature(0x05) failed 00:24:45.303 Namespace ID:1 00:24:45.303 Command Set Identifier: NVM (00h) 00:24:45.303 Deallocate: Supported 00:24:45.303 Deallocated/Unwritten Error: Not Supported 00:24:45.303 Deallocated Read Value: Unknown 00:24:45.303 Deallocate in Write Zeroes: Not Supported 00:24:45.303 Deallocated Guard Field: 0xFFFF 00:24:45.303 Flush: Supported 00:24:45.303 Reservation: Not Supported 00:24:45.303 Namespace Sharing Capabilities: Multiple Controllers 00:24:45.303 Size (in LBAs): 1953525168 (931GiB) 00:24:45.303 Capacity (in LBAs): 1953525168 (931GiB) 00:24:45.303 Utilization (in LBAs): 1953525168 (931GiB) 00:24:45.303 UUID: c49417dc-248c-4ded-9904-574ff86745f3 00:24:45.303 Thin Provisioning: Not Supported 00:24:45.303 Per-NS Atomic Units: Yes 00:24:45.303 Atomic Boundary Size (Normal): 0 00:24:45.303 Atomic Boundary Size (PFail): 0 00:24:45.303 Atomic Boundary Offset: 0 00:24:45.303 NGUID/EUI64 Never Reused: No 00:24:45.303 ANA group ID: 1 00:24:45.303 Namespace Write Protected: No 00:24:45.303 Number of LBA Formats: 1 00:24:45.303 Current LBA Format: LBA Format #00 00:24:45.303 LBA Format #00: Data Size: 512 Metadata Size: 0 00:24:45.303 00:24:45.303 22:41:09 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:24:45.303 22:41:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:45.303 22:41:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # sync 00:24:45.303 22:41:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:45.303 22:41:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@120 -- # set +e 00:24:45.303 22:41:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:45.303 22:41:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:45.303 rmmod nvme_tcp 00:24:45.303 rmmod nvme_fabrics 00:24:45.303 22:41:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:45.303 22:41:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set -e 00:24:45.303 22:41:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # return 0 00:24:45.303 22:41:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:24:45.303 22:41:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:45.303 22:41:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:45.303 22:41:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:45.303 22:41:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:45.303 22:41:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:45.303 22:41:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:45.303 22:41:09 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:45.304 22:41:09 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:47.871 22:41:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:47.871 
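The teardown traced immediately above and continuing below is the EXIT trap set at the start of this test: trap 'nvmftestfini || :; clean_kernel_target' EXIT. A minimal sketch of the two halves, reconstructed only from commands visible in this trace; _remove_spdk_ns runs with xtrace disabled and the echo redirect targets are hidden, so the netns deletion and the file paths on the echo line are assumptions:

  # nvmftestfini: unload the host-side NVMe/TCP modules, then undo nvmf_tcp_init
  modprobe -v -r nvme-tcp
  modprobe -v -r nvme-fabrics
  ip netns delete cvl_0_0_ns_spdk      # assumed body of _remove_spdk_ns (collapsed in the trace)
  ip -4 addr flush cvl_0_1

  # clean_kernel_target (traced next): unwind the configfs objects in reverse order
  echo 0 > /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1/enable
  rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn
  rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1
  rmdir /sys/kernel/config/nvmet/ports/1
  rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
  modprobe -r nvmet_tcp nvmet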
22:41:11 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:24:47.871 22:41:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:24:47.871 22:41:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # echo 0 00:24:47.871 22:41:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:24:47.871 22:41:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:24:47.871 22:41:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:24:47.871 22:41:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:24:47.871 22:41:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:24:47.871 22:41:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:24:47.871 22:41:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:24:49.776 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:24:49.777 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:24:49.777 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:24:49.777 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:24:49.777 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:24:49.777 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:24:49.777 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:24:49.777 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:24:49.777 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:24:49.777 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:24:49.777 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:24:49.777 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:24:49.777 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:24:49.777 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:24:49.777 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:24:49.777 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:24:50.713 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:24:50.713 00:24:50.713 real 0m14.043s 00:24:50.713 user 0m3.012s 00:24:50.713 sys 0m6.986s 00:24:50.713 22:41:14 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:50.713 22:41:14 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:24:50.713 ************************************ 00:24:50.713 END TEST nvmf_identify_kernel_target 00:24:50.713 ************************************ 00:24:50.713 22:41:14 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:24:50.713 22:41:14 nvmf_tcp -- nvmf/nvmf.sh@105 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:24:50.713 22:41:14 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:24:50.714 22:41:14 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:50.714 22:41:14 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:50.714 ************************************ 00:24:50.714 START TEST nvmf_auth_host 00:24:50.714 ************************************ 00:24:50.714 22:41:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1123 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:24:50.974 * Looking for test storage... 00:24:50.974 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:50.974 22:41:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:50.974 22:41:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:24:50.974 22:41:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:50.974 22:41:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:50.974 22:41:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:50.974 22:41:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:50.974 22:41:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:50.974 22:41:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:50.974 22:41:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:50.974 22:41:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:50.974 22:41:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:50.974 22:41:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:50.974 22:41:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:24:50.974 22:41:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:24:50.974 22:41:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:50.974 22:41:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:50.974 22:41:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:50.974 22:41:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:50.974 22:41:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:50.974 22:41:14 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:50.974 22:41:14 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:50.974 22:41:14 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:50.974 22:41:14 nvmf_tcp.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:50.974 22:41:14 nvmf_tcp.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:50.974 22:41:14 nvmf_tcp.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:50.974 22:41:14 nvmf_tcp.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:24:50.974 22:41:14 nvmf_tcp.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:50.974 22:41:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@47 -- # : 0 00:24:50.974 22:41:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:50.974 22:41:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:50.974 22:41:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:50.974 22:41:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:50.974 22:41:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:50.974 22:41:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:50.974 22:41:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:50.974 22:41:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:50.975 22:41:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:24:50.975 22:41:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:24:50.975 22:41:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:24:50.975 22:41:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:24:50.975 22:41:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:24:50.975 22:41:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:24:50.975 22:41:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:24:50.975 22:41:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # 
ckeys=() 00:24:50.975 22:41:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:24:50.975 22:41:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:50.975 22:41:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:50.975 22:41:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:50.975 22:41:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:50.975 22:41:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:50.975 22:41:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:50.975 22:41:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:50.975 22:41:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:50.975 22:41:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:50.975 22:41:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:50.975 22:41:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@285 -- # xtrace_disable 00:24:50.975 22:41:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:56.251 22:41:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:56.251 22:41:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@291 -- # pci_devs=() 00:24:56.251 22:41:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:56.251 22:41:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:56.251 22:41:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:56.251 22:41:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:56.251 22:41:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:56.251 22:41:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@295 -- # net_devs=() 00:24:56.251 22:41:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:56.251 22:41:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@296 -- # e810=() 00:24:56.251 22:41:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@296 -- # local -ga e810 00:24:56.251 22:41:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@297 -- # x722=() 00:24:56.251 22:41:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@297 -- # local -ga x722 00:24:56.251 22:41:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@298 -- # mlx=() 00:24:56.251 22:41:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@298 -- # local -ga mlx 00:24:56.251 22:41:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:56.251 22:41:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:56.251 22:41:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:56.251 22:41:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:56.251 22:41:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:56.251 22:41:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:56.251 22:41:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:56.251 22:41:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:56.251 
22:41:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:56.251 22:41:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:56.251 22:41:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:56.251 22:41:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:56.251 22:41:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:56.251 22:41:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:56.251 22:41:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:56.251 22:41:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:56.251 22:41:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:56.251 22:41:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:56.251 22:41:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:24:56.251 Found 0000:86:00.0 (0x8086 - 0x159b) 00:24:56.251 22:41:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:56.251 22:41:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:56.251 22:41:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:56.251 22:41:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:56.251 22:41:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:56.251 22:41:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:56.251 22:41:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:24:56.251 Found 0000:86:00.1 (0x8086 - 0x159b) 00:24:56.251 22:41:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:56.251 22:41:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:56.251 22:41:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:56.251 22:41:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:56.251 22:41:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:56.251 22:41:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:56.251 22:41:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:56.251 22:41:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:56.251 22:41:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:56.251 22:41:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:56.251 22:41:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:56.251 22:41:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:56.251 22:41:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:56.251 22:41:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:56.251 22:41:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:56.251 22:41:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:24:56.251 Found net devices under 0000:86:00.0: 
cvl_0_0 00:24:56.251 22:41:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:56.251 22:41:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:56.251 22:41:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:56.251 22:41:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:56.251 22:41:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:56.251 22:41:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:56.251 22:41:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:56.251 22:41:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:56.251 22:41:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:24:56.251 Found net devices under 0000:86:00.1: cvl_0_1 00:24:56.251 22:41:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:56.251 22:41:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:56.251 22:41:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # is_hw=yes 00:24:56.251 22:41:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:56.251 22:41:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:56.251 22:41:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:56.251 22:41:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:56.251 22:41:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:56.251 22:41:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:56.251 22:41:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:56.251 22:41:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:56.251 22:41:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:56.251 22:41:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:56.251 22:41:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:56.251 22:41:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:56.251 22:41:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:56.251 22:41:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:56.251 22:41:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:56.251 22:41:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:56.252 22:41:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:56.252 22:41:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:56.252 22:41:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:56.252 22:41:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:56.511 22:41:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:56.511 22:41:20 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:24:56.511 22:41:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:24:56.511 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:24:56.511 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.175 ms
00:24:56.511
00:24:56.511 --- 10.0.0.2 ping statistics ---
00:24:56.511 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:24:56.511 rtt min/avg/max/mdev = 0.175/0.175/0.175/0.000 ms
00:24:56.511 22:41:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:24:56.511 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:24:56.511 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.227 ms
00:24:56.511
00:24:56.511 --- 10.0.0.1 ping statistics ---
00:24:56.511 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:24:56.511 rtt min/avg/max/mdev = 0.227/0.227/0.227/0.000 ms
00:24:56.511 22:41:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:24:56.511 22:41:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@422 -- # return 0
00:24:56.511 22:41:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:24:56.511 22:41:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:24:56.511 22:41:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:24:56.511 22:41:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:24:56.511 22:41:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:24:56.511 22:41:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:24:56.511 22:41:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:24:56.511 22:41:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth
00:24:56.511 22:41:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:24:56.511 22:41:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@722 -- # xtrace_disable
00:24:56.511 22:41:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:56.511 22:41:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@481 -- # nvmfpid=131298
00:24:56.511 22:41:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@482 -- # waitforlisten 131298
00:24:56.511 22:41:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth
00:24:56.511 22:41:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@829 -- # '[' -z 131298 ']'
00:24:56.511 22:41:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:24:56.511 22:41:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # local max_retries=100
00:24:56.511 22:41:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
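Note: stripped of the xtrace noise, the nvmf_tcp_init sequence traced above reduces to the sketch below. It assumes the two E810 ports have already been renamed cvl_0_0/cvl_0_1 (done earlier in this log); the target NIC is isolated in a network namespace so host and target can talk over real TCP on a single machine.

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # target NIC moves into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator side stays in the root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic in
ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # sanity check both directions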
00:24:56.511 22:41:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:56.511 22:41:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:57.447 22:41:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:57.447 22:41:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@862 -- # return 0 00:24:57.447 22:41:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:57.447 22:41:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:57.447 22:41:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:57.447 22:41:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:57.447 22:41:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:24:57.447 22:41:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:24:57.447 22:41:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:24:57.447 22:41:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:57.447 22:41:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:24:57.447 22:41:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:24:57.447 22:41:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:24:57.447 22:41:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:24:57.447 22:41:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=25b4aadb9516b262a16a1b50b5b69153 00:24:57.447 22:41:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:24:57.447 22:41:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.Vt6 00:24:57.447 22:41:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 25b4aadb9516b262a16a1b50b5b69153 0 00:24:57.447 22:41:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 25b4aadb9516b262a16a1b50b5b69153 0 00:24:57.447 22:41:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:24:57.447 22:41:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:24:57.447 22:41:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=25b4aadb9516b262a16a1b50b5b69153 00:24:57.447 22:41:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:24:57.447 22:41:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:24:57.447 22:41:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.Vt6 00:24:57.447 22:41:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.Vt6 00:24:57.447 22:41:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.Vt6 00:24:57.447 22:41:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:24:57.447 22:41:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:24:57.447 22:41:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:57.447 22:41:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:24:57.447 22:41:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:24:57.447 
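The gen_dhchap_key helper traced above is worth unpacking: xxd pulls random bytes from /dev/urandom as a hex string, and the inline python wraps that string in the DH-HMAC-CHAP key format. Below is a hedged reconstruction of that python step; the payload layout (base64 of the secret plus a little-endian CRC-32 trailer, per the NVMe auth key format) is my reading of the trace, so treat it as a sketch rather than the canonical nvmf/common.sh code.

key=25b4aadb9516b262a16a1b50b5b69153   # the 32 hex chars produced by 'xxd -p -c0 -l 16 /dev/urandom' above
python - "$key" 0 <<'PY'
import base64, struct, sys, zlib
secret = sys.argv[1].encode()                  # the ASCII hex string itself is the secret
digest = int(sys.argv[2])                      # 0=null, 1=sha256, 2=sha384, 3=sha512
crc = struct.pack('<I', zlib.crc32(secret))    # assumed 4-byte little-endian CRC-32 trailer
print('DHHC-1:{:02x}:{}:'.format(digest, base64.b64encode(secret + crc).decode()))
PY

If the CRC assumption holds, this prints the same DHHC-1:00:...: string that lands in /tmp/spdk.key-null.Vt6.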
22:41:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:24:57.447 22:41:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:24:57.447 22:41:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=dbc6122ffd5520bc037c1b2741bba9ba6feb9948705055214d4439c26b18ac40 00:24:57.447 22:41:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:24:57.447 22:41:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.P3f 00:24:57.447 22:41:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key dbc6122ffd5520bc037c1b2741bba9ba6feb9948705055214d4439c26b18ac40 3 00:24:57.447 22:41:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 dbc6122ffd5520bc037c1b2741bba9ba6feb9948705055214d4439c26b18ac40 3 00:24:57.447 22:41:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:24:57.447 22:41:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:24:57.447 22:41:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=dbc6122ffd5520bc037c1b2741bba9ba6feb9948705055214d4439c26b18ac40 00:24:57.447 22:41:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:24:57.447 22:41:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:24:57.447 22:41:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.P3f 00:24:57.447 22:41:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.P3f 00:24:57.447 22:41:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.P3f 00:24:57.447 22:41:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:24:57.447 22:41:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:24:57.447 22:41:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:57.447 22:41:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:24:57.447 22:41:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:24:57.447 22:41:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:24:57.447 22:41:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:24:57.447 22:41:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=5334db7a7f1e49f65ddf79b960926a4b8fb0bb58f8315504 00:24:57.447 22:41:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:24:57.447 22:41:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.AYi 00:24:57.447 22:41:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 5334db7a7f1e49f65ddf79b960926a4b8fb0bb58f8315504 0 00:24:57.447 22:41:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 5334db7a7f1e49f65ddf79b960926a4b8fb0bb58f8315504 0 00:24:57.447 22:41:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:24:57.447 22:41:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:24:57.447 22:41:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=5334db7a7f1e49f65ddf79b960926a4b8fb0bb58f8315504 00:24:57.447 22:41:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:24:57.447 22:41:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:24:57.447 22:41:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.AYi 00:24:57.447 22:41:21 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.AYi 00:24:57.447 22:41:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.AYi 00:24:57.447 22:41:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:24:57.447 22:41:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:24:57.447 22:41:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:57.447 22:41:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:24:57.447 22:41:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:24:57.447 22:41:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:24:57.447 22:41:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:24:57.447 22:41:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=b9662b71757f59afb2f26abd7c4d6fb7caa721bf09ce73c7 00:24:57.447 22:41:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:24:57.447 22:41:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.XU2 00:24:57.447 22:41:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key b9662b71757f59afb2f26abd7c4d6fb7caa721bf09ce73c7 2 00:24:57.447 22:41:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 b9662b71757f59afb2f26abd7c4d6fb7caa721bf09ce73c7 2 00:24:57.447 22:41:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:24:57.447 22:41:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:24:57.447 22:41:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=b9662b71757f59afb2f26abd7c4d6fb7caa721bf09ce73c7 00:24:57.447 22:41:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:24:57.447 22:41:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:24:57.706 22:41:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.XU2 00:24:57.706 22:41:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.XU2 00:24:57.706 22:41:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.XU2 00:24:57.706 22:41:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:24:57.706 22:41:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:24:57.706 22:41:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:57.706 22:41:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:24:57.706 22:41:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:24:57.706 22:41:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:24:57.706 22:41:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:24:57.706 22:41:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=669c7663fe1a465cb05f20b88bd8e9fa 00:24:57.706 22:41:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:24:57.706 22:41:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.FkP 00:24:57.706 22:41:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 669c7663fe1a465cb05f20b88bd8e9fa 1 00:24:57.706 22:41:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 669c7663fe1a465cb05f20b88bd8e9fa 1 
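For reference, the digests map traced above ties each digest name to the id embedded in the key, and the len argument counts hex characters, not bytes, so gen_dhchap_key sha384 48 reads only 24 random bytes. A two-line illustration (names mirror the trace):

declare -A digests=([null]=0 [sha256]=1 [sha384]=2 [sha512]=3)
xxd -p -c0 -l $((48 / 2)) /dev/urandom   # len=48 hex chars -> 24 bytes; tagged with digest id ${digests[sha384]} (=2)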
00:24:57.706 22:41:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:24:57.706 22:41:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:24:57.706 22:41:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=669c7663fe1a465cb05f20b88bd8e9fa 00:24:57.706 22:41:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:24:57.706 22:41:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:24:57.706 22:41:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.FkP 00:24:57.706 22:41:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.FkP 00:24:57.706 22:41:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.FkP 00:24:57.706 22:41:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:24:57.706 22:41:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:24:57.706 22:41:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:57.706 22:41:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:24:57.706 22:41:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:24:57.706 22:41:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:24:57.706 22:41:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:24:57.706 22:41:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=7eadca43f19f073d90a461fcb0160dc7 00:24:57.706 22:41:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:24:57.706 22:41:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.T9W 00:24:57.706 22:41:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 7eadca43f19f073d90a461fcb0160dc7 1 00:24:57.706 22:41:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 7eadca43f19f073d90a461fcb0160dc7 1 00:24:57.706 22:41:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:24:57.706 22:41:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:24:57.706 22:41:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=7eadca43f19f073d90a461fcb0160dc7 00:24:57.706 22:41:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:24:57.706 22:41:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:24:57.706 22:41:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.T9W 00:24:57.706 22:41:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.T9W 00:24:57.706 22:41:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.T9W 00:24:57.706 22:41:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:24:57.706 22:41:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:24:57.706 22:41:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:57.706 22:41:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:24:57.706 22:41:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:24:57.706 22:41:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:24:57.706 22:41:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:24:57.706 22:41:21 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@727 -- # key=92475d92d382d1d6870b6551a2c57e3fb799161ca88a783e 00:24:57.706 22:41:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:24:57.706 22:41:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.izZ 00:24:57.706 22:41:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 92475d92d382d1d6870b6551a2c57e3fb799161ca88a783e 2 00:24:57.706 22:41:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 92475d92d382d1d6870b6551a2c57e3fb799161ca88a783e 2 00:24:57.706 22:41:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:24:57.706 22:41:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:24:57.706 22:41:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=92475d92d382d1d6870b6551a2c57e3fb799161ca88a783e 00:24:57.706 22:41:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:24:57.706 22:41:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:24:57.706 22:41:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.izZ 00:24:57.706 22:41:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.izZ 00:24:57.706 22:41:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.izZ 00:24:57.706 22:41:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:24:57.706 22:41:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:24:57.706 22:41:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:57.706 22:41:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:24:57.706 22:41:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:24:57.706 22:41:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:24:57.706 22:41:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:24:57.706 22:41:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=1877d35ca51b1c829017cfeba42508c3 00:24:57.706 22:41:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:24:57.706 22:41:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.Fyi 00:24:57.706 22:41:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 1877d35ca51b1c829017cfeba42508c3 0 00:24:57.706 22:41:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 1877d35ca51b1c829017cfeba42508c3 0 00:24:57.706 22:41:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:24:57.706 22:41:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:24:57.706 22:41:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=1877d35ca51b1c829017cfeba42508c3 00:24:57.706 22:41:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:24:57.706 22:41:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:24:57.706 22:41:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.Fyi 00:24:57.706 22:41:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.Fyi 00:24:57.706 22:41:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.Fyi 00:24:57.965 22:41:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:24:57.965 22:41:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local 
digest len file key 00:24:57.965 22:41:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:57.965 22:41:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:24:57.965 22:41:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:24:57.965 22:41:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:24:57.965 22:41:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:24:57.965 22:41:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=ba96844d027af54c1cec020d0ca4cb0a85e30aa5756d1e2b83a4fef0c2de9662 00:24:57.965 22:41:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:24:57.965 22:41:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.dZ9 00:24:57.965 22:41:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key ba96844d027af54c1cec020d0ca4cb0a85e30aa5756d1e2b83a4fef0c2de9662 3 00:24:57.965 22:41:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 ba96844d027af54c1cec020d0ca4cb0a85e30aa5756d1e2b83a4fef0c2de9662 3 00:24:57.965 22:41:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:24:57.965 22:41:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:24:57.965 22:41:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=ba96844d027af54c1cec020d0ca4cb0a85e30aa5756d1e2b83a4fef0c2de9662 00:24:57.965 22:41:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:24:57.965 22:41:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:24:57.965 22:41:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.dZ9 00:24:57.965 22:41:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.dZ9 00:24:57.965 22:41:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.dZ9 00:24:57.965 22:41:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:24:57.965 22:41:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 131298 00:24:57.965 22:41:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@829 -- # '[' -z 131298 ']' 00:24:57.965 22:41:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:57.965 22:41:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:57.965 22:41:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:57.965 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
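waitforlisten above polls until the freshly started nvmf_tgt (pid 131298) answers on /var/tmp/spdk.sock. A minimal stand-in, assuming only the pid and the default RPC socket path (the real helper in autotest_common.sh does more, e.g. retry bookkeeping and an RPC probe):

pid=131298 rpc=/var/tmp/spdk.sock
for ((i = 0; i < 100; i++)); do            # max_retries=100, as in the trace
  kill -0 "$pid" 2> /dev/null || exit 1    # give up if the target process died
  [[ -S $rpc ]] && break                   # UNIX socket present -> target is listening
  sleep 0.5
done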
00:24:57.965 22:41:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:57.965 22:41:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:57.965 22:41:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:57.965 22:41:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@862 -- # return 0 00:24:57.965 22:41:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:24:57.965 22:41:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.Vt6 00:24:57.965 22:41:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:57.965 22:41:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:57.965 22:41:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:57.965 22:41:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.P3f ]] 00:24:57.965 22:41:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.P3f 00:24:57.965 22:41:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:57.965 22:41:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:57.965 22:41:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:57.965 22:41:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:24:57.965 22:41:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.AYi 00:24:57.965 22:41:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:57.965 22:41:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:57.965 22:41:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:57.965 22:41:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.XU2 ]] 00:24:57.965 22:41:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.XU2 00:24:57.965 22:41:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:57.965 22:41:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:58.223 22:41:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:58.223 22:41:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:24:58.223 22:41:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.FkP 00:24:58.223 22:41:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:58.223 22:41:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:58.223 22:41:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:58.223 22:41:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.T9W ]] 00:24:58.223 22:41:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.T9W 00:24:58.223 22:41:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:58.223 22:41:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:58.223 22:41:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:58.223 22:41:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 
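The loop above registers every generated key file with the target's keyring over RPC. Condensed from the host/auth.sh@80-82 trace lines:

for i in "${!keys[@]}"; do
  rpc_cmd keyring_file_add_key "key$i" "${keys[i]}"
  [[ -n ${ckeys[i]} ]] && rpc_cmd keyring_file_add_key "ckey$i" "${ckeys[i]}"   # controller key, when one was generated
done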
00:24:58.223 22:41:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.izZ 00:24:58.223 22:41:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:58.223 22:41:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:58.224 22:41:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:58.224 22:41:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.Fyi ]] 00:24:58.224 22:41:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.Fyi 00:24:58.224 22:41:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:58.224 22:41:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:58.224 22:41:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:58.224 22:41:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:24:58.224 22:41:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.dZ9 00:24:58.224 22:41:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:58.224 22:41:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:58.224 22:41:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:58.224 22:41:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:24:58.224 22:41:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:24:58.224 22:41:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:24:58.224 22:41:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:58.224 22:41:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:58.224 22:41:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:58.224 22:41:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:58.224 22:41:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:58.224 22:41:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:58.224 22:41:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:58.224 22:41:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:58.224 22:41:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:58.224 22:41:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:58.224 22:41:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:24:58.224 22:41:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@632 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:24:58.224 22:41:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:24:58.224 22:41:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:24:58.224 22:41:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:24:58.224 22:41:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:24:58.224 22:41:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@639 -- # local block nvme 
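configure_kernel_target then builds a kernel nvmet target through configfs using exactly the three paths just computed. The bare 'echo' lines that follow in the trace write into configfs attributes; the attribute names below are my mapping of those writes onto the standard nvmet layout, so double-check against nvmf/common.sh before reusing:

nvmet=/sys/kernel/config/nvmet
subsys=$nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
mkdir -p "$subsys/namespaces/1" "$nvmet/ports/1"
echo 1 > "$subsys/attr_allow_any_host"            # assumed target of the first 'echo 1'
echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"
echo 1 > "$subsys/namespaces/1/enable"
echo 10.0.0.1 > "$nvmet/ports/1/addr_traddr"
echo tcp > "$nvmet/ports/1/addr_trtype"
echo 4420 > "$nvmet/ports/1/addr_trsvcid"
echo ipv4 > "$nvmet/ports/1/addr_adrfam"
ln -s "$subsys" "$nvmet/ports/1/subsystems/"      # expose the subsystem on the port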
00:24:58.224 22:41:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:24:58.224 22:41:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@642 -- # modprobe nvmet 00:24:58.224 22:41:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:24:58.224 22:41:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:25:00.757 Waiting for block devices as requested 00:25:00.757 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:25:01.016 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:25:01.016 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:25:01.016 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:25:01.016 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:25:01.275 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:25:01.275 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:25:01.275 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:25:01.534 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:25:01.534 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:25:01.534 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:25:01.534 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:25:01.794 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:25:01.794 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:25:01.794 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:25:01.794 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:25:02.053 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:25:02.622 22:41:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:25:02.622 22:41:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:25:02.622 22:41:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:25:02.622 22:41:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:25:02.622 22:41:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:25:02.622 22:41:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:25:02.622 22:41:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:25:02.622 22:41:26 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:25:02.622 22:41:26 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:25:02.622 No valid GPT data, bailing 00:25:02.622 22:41:26 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:25:02.622 22:41:26 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:25:02.622 22:41:26 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:25:02.622 22:41:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:25:02.622 22:41:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:25:02.622 22:41:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:25:02.622 22:41:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:25:02.622 22:41:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:25:02.622 22:41:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@665 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:25:02.622 22:41:26 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@667 -- # echo 1
00:25:02.622 22:41:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@668 -- # echo /dev/nvme0n1
00:25:02.622 22:41:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@669 -- # echo 1
00:25:02.622 22:41:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@671 -- # echo 10.0.0.1
00:25:02.622 22:41:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@672 -- # echo tcp
00:25:02.622 22:41:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@673 -- # echo 4420
00:25:02.622 22:41:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@674 -- # echo ipv4
00:25:02.622 22:41:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/
00:25:02.622 22:41:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420
00:25:02.886
00:25:02.886 Discovery Log Number of Records 2, Generation counter 2
00:25:02.886 =====Discovery Log Entry 0======
00:25:02.886 trtype: tcp
00:25:02.886 adrfam: ipv4
00:25:02.886 subtype: current discovery subsystem
00:25:02.886 treq: not specified, sq flow control disable supported
00:25:02.886 portid: 1
00:25:02.886 trsvcid: 4420
00:25:02.886 subnqn: nqn.2014-08.org.nvmexpress.discovery
00:25:02.886 traddr: 10.0.0.1
00:25:02.886 eflags: none
00:25:02.886 sectype: none
00:25:02.886 =====Discovery Log Entry 1======
00:25:02.886 trtype: tcp
00:25:02.886 adrfam: ipv4
00:25:02.886 subtype: nvme subsystem
00:25:02.886 treq: not specified, sq flow control disable supported
00:25:02.886 portid: 1
00:25:02.886 trsvcid: 4420
00:25:02.886 subnqn: nqn.2024-02.io.spdk:cnode0
00:25:02.886 traddr: 10.0.0.1
00:25:02.886 eflags: none
00:25:02.886 sectype: none
00:25:02.886 22:41:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
00:25:02.886 22:41:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@37 -- # echo 0
00:25:02.886 22:41:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0
00:25:02.886 22:41:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1
00:25:02.886 22:41:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:25:02.886 22:41:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:25:02.886 22:41:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:25:02.886 22:41:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:25:02.886 22:41:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTMzNGRiN2E3ZjFlNDlmNjVkZGY3OWI5NjA5MjZhNGI4ZmIwYmI1OGY4MzE1NTA0WfDSCg==:
00:25:02.886 22:41:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Yjk2NjJiNzE3NTdmNTlhZmIyZjI2YWJkN2M0ZDZmYjdjYWE3MjFiZjA5Y2U3M2M3nyY9qw==:
00:25:02.886 22:41:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:25:02.886 22:41:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:25:02.886 22:41:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTMzNGRiN2E3ZjFlNDlmNjVkZGY3OWI5NjA5MjZhNGI4ZmIwYmI1OGY4MzE1NTA0WfDSCg==:
00:25:02.886 22:41:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Yjk2NjJiNzE3NTdmNTlhZmIyZjI2YWJkN2M0ZDZmYjdjYWE3MjFiZjA5Y2U3M2M3nyY9qw==:
]] 00:25:02.886 22:41:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Yjk2NjJiNzE3NTdmNTlhZmIyZjI2YWJkN2M0ZDZmYjdjYWE3MjFiZjA5Y2U3M2M3nyY9qw==: 00:25:02.886 22:41:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:25:02.886 22:41:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:25:02.886 22:41:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:25:02.886 22:41:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:25:02.886 22:41:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:25:02.886 22:41:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:02.886 22:41:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:25:02.886 22:41:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:25:02.886 22:41:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:02.886 22:41:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:02.886 22:41:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:25:02.886 22:41:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:02.886 22:41:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:02.886 22:41:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:02.886 22:41:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:02.886 22:41:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:02.886 22:41:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:02.886 22:41:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:02.886 22:41:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:02.886 22:41:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:02.886 22:41:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:02.886 22:41:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:02.886 22:41:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:02.886 22:41:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:02.886 22:41:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:02.886 22:41:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:02.886 22:41:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:02.886 22:41:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:02.886 nvme0n1 00:25:02.887 22:41:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:02.887 22:41:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:02.887 22:41:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:02.887 22:41:26 
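nvmet_auth_set_key (host/auth.sh@42-51) pushes the chosen digest, DH group, and DHHC-1 strings into the kernel host entry created above. The four bare 'echo's map, on my reading, onto the per-host dhchap attributes that nvmet exposes:

host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
echo 'hmac(sha256)' > "$host/dhchap_hash"     # assumed targets of the echos in the trace
echo ffdhe2048 > "$host/dhchap_dhgroup"
echo "$key" > "$host/dhchap_key"              # DHHC-1:...: string from keys[keyid]
echo "$ckey" > "$host/dhchap_ctrl_key"        # set only when a controller key exists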
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:02.887 22:41:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:02.887 22:41:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:02.887 22:41:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:02.887 22:41:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:02.887 22:41:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:02.887 22:41:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:03.146 22:41:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:03.147 22:41:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:25:03.147 22:41:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:03.147 22:41:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:03.147 22:41:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:25:03.147 22:41:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:03.147 22:41:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:03.147 22:41:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:03.147 22:41:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:03.147 22:41:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjViNGFhZGI5NTE2YjI2MmExNmExYjUwYjViNjkxNTNeHZYd: 00:25:03.147 22:41:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZGJjNjEyMmZmZDU1MjBiYzAzN2MxYjI3NDFiYmE5YmE2ZmViOTk0ODcwNTA1NTIxNGQ0NDM5YzI2YjE4YWM0MLrZ81U=: 00:25:03.147 22:41:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:03.147 22:41:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:03.147 22:41:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjViNGFhZGI5NTE2YjI2MmExNmExYjUwYjViNjkxNTNeHZYd: 00:25:03.147 22:41:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZGJjNjEyMmZmZDU1MjBiYzAzN2MxYjI3NDFiYmE5YmE2ZmViOTk0ODcwNTA1NTIxNGQ0NDM5YzI2YjE4YWM0MLrZ81U=: ]] 00:25:03.147 22:41:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZGJjNjEyMmZmZDU1MjBiYzAzN2MxYjI3NDFiYmE5YmE2ZmViOTk0ODcwNTA1NTIxNGQ0NDM5YzI2YjE4YWM0MLrZ81U=: 00:25:03.147 22:41:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:25:03.147 22:41:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:03.147 22:41:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:03.147 22:41:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:03.147 22:41:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:03.147 22:41:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:03.147 22:41:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:03.147 22:41:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:03.147 22:41:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:03.147 22:41:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:03.147 
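connect_authenticate then exercises the host side: restrict the bdev layer to the digest/DH-group under test and attach with the matching keyring entries. Condensed from the two RPCs in the trace:

rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0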
22:41:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:03.147 22:41:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:03.147 22:41:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:03.147 22:41:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:03.147 22:41:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:03.147 22:41:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:03.147 22:41:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:03.147 22:41:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:03.147 22:41:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:03.147 22:41:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:03.147 22:41:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:03.147 22:41:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:03.147 22:41:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:03.147 22:41:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:03.147 nvme0n1 00:25:03.147 22:41:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:03.147 22:41:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:03.147 22:41:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:03.147 22:41:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:03.147 22:41:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:03.147 22:41:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:03.147 22:41:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:03.147 22:41:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:03.147 22:41:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:03.147 22:41:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:03.147 22:41:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:03.147 22:41:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:03.147 22:41:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:25:03.147 22:41:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:03.147 22:41:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:03.147 22:41:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:03.147 22:41:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:03.147 22:41:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTMzNGRiN2E3ZjFlNDlmNjVkZGY3OWI5NjA5MjZhNGI4ZmIwYmI1OGY4MzE1NTA0WfDSCg==: 00:25:03.147 22:41:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Yjk2NjJiNzE3NTdmNTlhZmIyZjI2YWJkN2M0ZDZmYjdjYWE3MjFiZjA5Y2U3M2M3nyY9qw==: 00:25:03.147 22:41:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:03.147 22:41:27 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:03.147 22:41:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTMzNGRiN2E3ZjFlNDlmNjVkZGY3OWI5NjA5MjZhNGI4ZmIwYmI1OGY4MzE1NTA0WfDSCg==: 00:25:03.147 22:41:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Yjk2NjJiNzE3NTdmNTlhZmIyZjI2YWJkN2M0ZDZmYjdjYWE3MjFiZjA5Y2U3M2M3nyY9qw==: ]] 00:25:03.147 22:41:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Yjk2NjJiNzE3NTdmNTlhZmIyZjI2YWJkN2M0ZDZmYjdjYWE3MjFiZjA5Y2U3M2M3nyY9qw==: 00:25:03.147 22:41:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:25:03.147 22:41:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:03.147 22:41:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:03.147 22:41:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:03.147 22:41:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:03.147 22:41:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:03.147 22:41:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:03.147 22:41:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:03.147 22:41:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:03.147 22:41:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:03.147 22:41:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:03.147 22:41:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:03.147 22:41:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:03.147 22:41:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:03.147 22:41:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:03.147 22:41:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:03.147 22:41:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:03.147 22:41:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:03.147 22:41:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:03.147 22:41:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:03.147 22:41:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:03.147 22:41:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:03.147 22:41:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:03.147 22:41:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:03.406 nvme0n1 00:25:03.406 22:41:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:03.406 22:41:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:03.407 22:41:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:03.407 22:41:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:03.407 22:41:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
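Each iteration passes only if the authenticated controller actually materialized. The check-and-teardown pair that recurs through this stretch, condensed:

[[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]   # controller came up under the expected name
rpc_cmd bdev_nvme_detach_controller nvme0                                # clean up before the next combination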
00:25:03.407 22:41:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:03.407 22:41:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:03.407 22:41:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:03.407 22:41:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:03.407 22:41:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:03.407 22:41:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:03.407 22:41:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:03.407 22:41:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:25:03.407 22:41:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:03.407 22:41:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:03.407 22:41:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:03.407 22:41:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:03.407 22:41:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjY5Yzc2NjNmZTFhNDY1Y2IwNWYyMGI4OGJkOGU5ZmFYNj0I: 00:25:03.407 22:41:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:N2VhZGNhNDNmMTlmMDczZDkwYTQ2MWZjYjAxNjBkYzfY9AqL: 00:25:03.407 22:41:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:03.407 22:41:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:03.407 22:41:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjY5Yzc2NjNmZTFhNDY1Y2IwNWYyMGI4OGJkOGU5ZmFYNj0I: 00:25:03.407 22:41:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:N2VhZGNhNDNmMTlmMDczZDkwYTQ2MWZjYjAxNjBkYzfY9AqL: ]] 00:25:03.407 22:41:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:N2VhZGNhNDNmMTlmMDczZDkwYTQ2MWZjYjAxNjBkYzfY9AqL: 00:25:03.407 22:41:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:25:03.407 22:41:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:03.407 22:41:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:03.407 22:41:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:03.407 22:41:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:03.407 22:41:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:03.407 22:41:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:03.407 22:41:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:03.407 22:41:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:03.407 22:41:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:03.407 22:41:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:03.407 22:41:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:03.407 22:41:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:03.407 22:41:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:03.407 22:41:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:03.407 22:41:27 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:03.407 22:41:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:03.407 22:41:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:03.407 22:41:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:03.407 22:41:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:03.407 22:41:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:03.407 22:41:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:03.407 22:41:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:03.407 22:41:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:03.667 nvme0n1 00:25:03.667 22:41:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:03.667 22:41:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:03.667 22:41:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:03.667 22:41:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:03.667 22:41:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:03.667 22:41:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:03.667 22:41:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:03.667 22:41:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:03.667 22:41:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:03.667 22:41:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:03.667 22:41:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:03.667 22:41:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:03.667 22:41:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:25:03.667 22:41:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:03.667 22:41:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:03.667 22:41:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:03.667 22:41:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:03.667 22:41:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OTI0NzVkOTJkMzgyZDFkNjg3MGI2NTUxYTJjNTdlM2ZiNzk5MTYxY2E4OGE3ODNlzZxB2w==: 00:25:03.667 22:41:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTg3N2QzNWNhNTFiMWM4MjkwMTdjZmViYTQyNTA4YzOhZDvm: 00:25:03.667 22:41:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:03.667 22:41:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:03.667 22:41:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OTI0NzVkOTJkMzgyZDFkNjg3MGI2NTUxYTJjNTdlM2ZiNzk5MTYxY2E4OGE3ODNlzZxB2w==: 00:25:03.667 22:41:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTg3N2QzNWNhNTFiMWM4MjkwMTdjZmViYTQyNTA4YzOhZDvm: ]] 00:25:03.667 22:41:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTg3N2QzNWNhNTFiMWM4MjkwMTdjZmViYTQyNTA4YzOhZDvm: 00:25:03.667 22:41:27 
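The host/auth.sh@100-@104 markers recurring here reveal the overall test shape: a triple loop over digests, DH groups, and key ids, re-keying the kernel target and re-authenticating for every combination. Reconstructed from those markers (helper names as in host/auth.sh):

for digest in "${digests[@]}"; do
  for dhgroup in "${dhgroups[@]}"; do
    for keyid in "${!keys[@]}"; do
      nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"    # program the kernel target side
      connect_authenticate "$digest" "$dhgroup" "$keyid"  # attach, verify, detach on the host side
    done
  done
done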
nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:25:03.667 22:41:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:03.667 22:41:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:03.667 22:41:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:03.667 22:41:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:03.667 22:41:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:03.667 22:41:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:03.667 22:41:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:03.667 22:41:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:03.667 22:41:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:03.667 22:41:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:03.667 22:41:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:03.667 22:41:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:03.667 22:41:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:03.667 22:41:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:03.667 22:41:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:03.667 22:41:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:03.667 22:41:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:03.667 22:41:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:03.667 22:41:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:03.667 22:41:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:03.667 22:41:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:03.667 22:41:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:03.667 22:41:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:03.927 nvme0n1 00:25:03.927 22:41:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:03.927 22:41:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:03.927 22:41:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:03.927 22:41:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:03.927 22:41:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:03.927 22:41:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:03.927 22:41:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:03.927 22:41:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:03.927 22:41:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:03.927 22:41:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:03.927 22:41:27 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:03.927 22:41:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:03.927 22:41:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:25:03.927 22:41:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:03.927 22:41:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:03.927 22:41:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:03.927 22:41:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:03.927 22:41:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YmE5Njg0NGQwMjdhZjU0YzFjZWMwMjBkMGNhNGNiMGE4NWUzMGFhNTc1NmQxZTJiODNhNGZlZjBjMmRlOTY2MnBN5AU=: 00:25:03.927 22:41:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:03.927 22:41:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:03.927 22:41:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:03.927 22:41:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YmE5Njg0NGQwMjdhZjU0YzFjZWMwMjBkMGNhNGNiMGE4NWUzMGFhNTc1NmQxZTJiODNhNGZlZjBjMmRlOTY2MnBN5AU=: 00:25:03.927 22:41:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:03.927 22:41:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:25:03.927 22:41:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:03.927 22:41:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:03.927 22:41:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:03.927 22:41:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:03.927 22:41:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:03.927 22:41:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:03.927 22:41:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:03.927 22:41:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:03.927 22:41:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:03.927 22:41:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:03.927 22:41:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:03.927 22:41:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:03.927 22:41:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:03.927 22:41:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:03.927 22:41:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:03.927 22:41:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:03.927 22:41:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:03.927 22:41:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:03.927 22:41:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:03.927 22:41:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:03.927 22:41:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:03.927 22:41:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:03.927 22:41:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:04.187 nvme0n1 00:25:04.187 22:41:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:04.187 22:41:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:04.187 22:41:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:04.187 22:41:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:04.187 22:41:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:04.187 22:41:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:04.187 22:41:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:04.187 22:41:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:04.187 22:41:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:04.187 22:41:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:04.187 22:41:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:04.187 22:41:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:04.187 22:41:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:04.187 22:41:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:25:04.187 22:41:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:04.187 22:41:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:04.187 22:41:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:04.187 22:41:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:04.187 22:41:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjViNGFhZGI5NTE2YjI2MmExNmExYjUwYjViNjkxNTNeHZYd: 00:25:04.187 22:41:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZGJjNjEyMmZmZDU1MjBiYzAzN2MxYjI3NDFiYmE5YmE2ZmViOTk0ODcwNTA1NTIxNGQ0NDM5YzI2YjE4YWM0MLrZ81U=: 00:25:04.187 22:41:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:04.187 22:41:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:04.187 22:41:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjViNGFhZGI5NTE2YjI2MmExNmExYjUwYjViNjkxNTNeHZYd: 00:25:04.187 22:41:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZGJjNjEyMmZmZDU1MjBiYzAzN2MxYjI3NDFiYmE5YmE2ZmViOTk0ODcwNTA1NTIxNGQ0NDM5YzI2YjE4YWM0MLrZ81U=: ]] 00:25:04.187 22:41:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZGJjNjEyMmZmZDU1MjBiYzAzN2MxYjI3NDFiYmE5YmE2ZmViOTk0ODcwNTA1NTIxNGQ0NDM5YzI2YjE4YWM0MLrZ81U=: 00:25:04.187 22:41:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:25:04.187 22:41:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:04.187 22:41:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:04.187 22:41:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:04.187 22:41:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:04.187 22:41:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:25:04.187 22:41:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:25:04.187 22:41:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:04.187 22:41:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:04.187 22:41:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:04.187 22:41:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:04.187 22:41:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:04.187 22:41:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:04.187 22:41:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:04.187 22:41:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:04.187 22:41:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:04.187 22:41:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:04.187 22:41:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:04.187 22:41:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:04.187 22:41:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:04.187 22:41:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:04.187 22:41:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:04.187 22:41:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:04.187 22:41:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:04.447 nvme0n1 00:25:04.447 22:41:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:04.447 22:41:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:04.447 22:41:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:04.447 22:41:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:04.447 22:41:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:04.447 22:41:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:04.447 22:41:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:04.447 22:41:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:04.447 22:41:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:04.447 22:41:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:04.447 22:41:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:04.447 22:41:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:04.447 22:41:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:25:04.447 22:41:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:04.447 22:41:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:04.447 22:41:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:04.447 22:41:28 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@44 -- # keyid=1 00:25:04.447 22:41:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTMzNGRiN2E3ZjFlNDlmNjVkZGY3OWI5NjA5MjZhNGI4ZmIwYmI1OGY4MzE1NTA0WfDSCg==: 00:25:04.447 22:41:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Yjk2NjJiNzE3NTdmNTlhZmIyZjI2YWJkN2M0ZDZmYjdjYWE3MjFiZjA5Y2U3M2M3nyY9qw==: 00:25:04.447 22:41:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:04.447 22:41:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:04.447 22:41:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTMzNGRiN2E3ZjFlNDlmNjVkZGY3OWI5NjA5MjZhNGI4ZmIwYmI1OGY4MzE1NTA0WfDSCg==: 00:25:04.447 22:41:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Yjk2NjJiNzE3NTdmNTlhZmIyZjI2YWJkN2M0ZDZmYjdjYWE3MjFiZjA5Y2U3M2M3nyY9qw==: ]] 00:25:04.447 22:41:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Yjk2NjJiNzE3NTdmNTlhZmIyZjI2YWJkN2M0ZDZmYjdjYWE3MjFiZjA5Y2U3M2M3nyY9qw==: 00:25:04.447 22:41:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:25:04.447 22:41:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:04.447 22:41:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:04.447 22:41:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:04.447 22:41:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:04.447 22:41:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:04.448 22:41:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:25:04.448 22:41:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:04.448 22:41:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:04.448 22:41:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:04.448 22:41:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:04.448 22:41:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:04.448 22:41:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:04.448 22:41:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:04.448 22:41:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:04.448 22:41:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:04.448 22:41:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:04.448 22:41:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:04.448 22:41:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:04.448 22:41:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:04.448 22:41:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:04.448 22:41:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:04.448 22:41:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:04.448 22:41:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:04.448 nvme0n1 00:25:04.448 
22:41:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:04.448 22:41:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:04.448 22:41:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:04.448 22:41:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:04.448 22:41:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:04.706 22:41:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:04.706 22:41:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:04.706 22:41:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:04.706 22:41:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:04.706 22:41:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:04.706 22:41:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:04.706 22:41:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:04.706 22:41:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:25:04.706 22:41:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:04.706 22:41:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:04.706 22:41:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:04.706 22:41:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:04.706 22:41:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjY5Yzc2NjNmZTFhNDY1Y2IwNWYyMGI4OGJkOGU5ZmFYNj0I: 00:25:04.706 22:41:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:N2VhZGNhNDNmMTlmMDczZDkwYTQ2MWZjYjAxNjBkYzfY9AqL: 00:25:04.706 22:41:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:04.706 22:41:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:04.706 22:41:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjY5Yzc2NjNmZTFhNDY1Y2IwNWYyMGI4OGJkOGU5ZmFYNj0I: 00:25:04.706 22:41:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:N2VhZGNhNDNmMTlmMDczZDkwYTQ2MWZjYjAxNjBkYzfY9AqL: ]] 00:25:04.706 22:41:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:N2VhZGNhNDNmMTlmMDczZDkwYTQ2MWZjYjAxNjBkYzfY9AqL: 00:25:04.706 22:41:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:25:04.706 22:41:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:04.706 22:41:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:04.706 22:41:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:04.706 22:41:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:04.706 22:41:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:04.706 22:41:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:25:04.706 22:41:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:04.706 22:41:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:04.706 22:41:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:04.706 22:41:28 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:25:04.706 22:41:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:04.706 22:41:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:04.706 22:41:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:04.706 22:41:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:04.706 22:41:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:04.706 22:41:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:04.706 22:41:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:04.706 22:41:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:04.706 22:41:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:04.706 22:41:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:04.706 22:41:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:04.706 22:41:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:04.706 22:41:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:04.706 nvme0n1 00:25:04.706 22:41:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:04.706 22:41:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:04.706 22:41:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:04.706 22:41:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:04.706 22:41:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:04.964 22:41:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:04.964 22:41:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:04.964 22:41:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:04.964 22:41:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:04.964 22:41:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:04.964 22:41:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:04.964 22:41:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:04.964 22:41:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:25:04.964 22:41:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:04.964 22:41:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:04.964 22:41:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:04.964 22:41:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:04.964 22:41:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OTI0NzVkOTJkMzgyZDFkNjg3MGI2NTUxYTJjNTdlM2ZiNzk5MTYxY2E4OGE3ODNlzZxB2w==: 00:25:04.964 22:41:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTg3N2QzNWNhNTFiMWM4MjkwMTdjZmViYTQyNTA4YzOhZDvm: 00:25:04.964 22:41:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:04.964 22:41:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 
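[annotation] The loop traced above exercises one (digest, dhgroup, keyid) tuple per iteration: nvmet_auth_set_key programs the kernel target with the hash, DH group, and DHHC-1 key (plus the optional bidirectional controller key), then connect_authenticate restricts the host to that single digest/dhgroup via bdev_nvme_set_options and attaches with the matching --dhchap-key/--dhchap-ctrlr-key names. A minimal bash sketch of one such iteration follows; note bash xtrace does not print redirections, so the destinations of the bare echo calls are not in the log — the nvmet configfs paths below are assumptions, as is the rpc_cmd wrapper around SPDK's scripts/rpc.py.

    #!/usr/bin/env bash
    # Sketch of one (digest, dhgroup, keyid) iteration from the trace above --
    # not the test script itself. rpc_cmd and host_dir are assumptions.
    rpc_cmd() { /path/to/spdk/scripts/rpc.py "$@"; }  # placeholder wrapper

    digest=sha256 dhgroup=ffdhe3072 keyid=3
    key='DHHC-1:02:OTI0NzVkOTJkMzgyZDFkNjg3MGI2NTUxYTJjNTdlM2ZiNzk5MTYxY2E4OGE3ODNlzZxB2w==:'
    ckey='DHHC-1:00:MTg3N2QzNWNhNTFiMWM4MjkwMTdjZmViYTQyNTA4YzOhZDvm:'
    host_dir=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0  # assumed path

    # Target side: program hash, DH group, and keys for this host.
    echo "hmac($digest)" > "$host_dir/dhchap_hash"
    echo "$dhgroup"      > "$host_dir/dhchap_dhgroup"
    echo "$key"          > "$host_dir/dhchap_key"
    [[ -n $ckey ]] && echo "$ckey" > "$host_dir/dhchap_ctrl_key"

    # Host side: restrict negotiation to this digest/dhgroup, then attach
    # using the key names registered for this slot (key3/ckey3 in the log).
    rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key "key$keyid" ${ckey:+--dhchap-ctrlr-key "ckey$keyid"}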
00:25:04.964 22:41:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OTI0NzVkOTJkMzgyZDFkNjg3MGI2NTUxYTJjNTdlM2ZiNzk5MTYxY2E4OGE3ODNlzZxB2w==: 00:25:04.964 22:41:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTg3N2QzNWNhNTFiMWM4MjkwMTdjZmViYTQyNTA4YzOhZDvm: ]] 00:25:04.964 22:41:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTg3N2QzNWNhNTFiMWM4MjkwMTdjZmViYTQyNTA4YzOhZDvm: 00:25:04.964 22:41:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:25:04.964 22:41:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:04.964 22:41:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:04.964 22:41:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:04.964 22:41:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:04.964 22:41:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:04.964 22:41:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:25:04.964 22:41:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:04.964 22:41:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:04.964 22:41:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:04.964 22:41:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:04.964 22:41:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:04.964 22:41:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:04.964 22:41:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:04.964 22:41:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:04.964 22:41:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:04.964 22:41:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:04.964 22:41:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:04.964 22:41:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:04.964 22:41:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:04.964 22:41:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:04.964 22:41:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:04.964 22:41:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:04.964 22:41:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:04.964 nvme0n1 00:25:04.964 22:41:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:04.964 22:41:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:04.964 22:41:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:04.964 22:41:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:04.964 22:41:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:04.964 22:41:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:05.222 
22:41:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:05.222 22:41:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:05.222 22:41:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:05.222 22:41:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:05.222 22:41:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:05.222 22:41:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:05.222 22:41:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:25:05.222 22:41:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:05.222 22:41:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:05.222 22:41:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:05.222 22:41:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:05.222 22:41:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YmE5Njg0NGQwMjdhZjU0YzFjZWMwMjBkMGNhNGNiMGE4NWUzMGFhNTc1NmQxZTJiODNhNGZlZjBjMmRlOTY2MnBN5AU=: 00:25:05.222 22:41:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:05.222 22:41:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:05.222 22:41:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:05.222 22:41:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YmE5Njg0NGQwMjdhZjU0YzFjZWMwMjBkMGNhNGNiMGE4NWUzMGFhNTc1NmQxZTJiODNhNGZlZjBjMmRlOTY2MnBN5AU=: 00:25:05.222 22:41:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:05.222 22:41:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:25:05.222 22:41:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:05.222 22:41:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:05.222 22:41:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:05.222 22:41:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:05.222 22:41:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:05.222 22:41:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:25:05.222 22:41:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:05.222 22:41:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:05.222 22:41:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:05.222 22:41:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:05.222 22:41:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:05.222 22:41:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:05.222 22:41:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:05.222 22:41:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:05.222 22:41:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:05.222 22:41:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:05.222 22:41:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:05.222 22:41:28 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:05.222 22:41:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:05.222 22:41:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:05.222 22:41:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:05.222 22:41:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:05.222 22:41:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:05.222 nvme0n1 00:25:05.222 22:41:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:05.222 22:41:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:05.222 22:41:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:05.222 22:41:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:05.222 22:41:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:05.222 22:41:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:05.481 22:41:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:05.481 22:41:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:05.481 22:41:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:05.481 22:41:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:05.481 22:41:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:05.481 22:41:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:05.481 22:41:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:05.481 22:41:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:25:05.481 22:41:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:05.481 22:41:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:05.481 22:41:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:05.481 22:41:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:05.481 22:41:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjViNGFhZGI5NTE2YjI2MmExNmExYjUwYjViNjkxNTNeHZYd: 00:25:05.481 22:41:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZGJjNjEyMmZmZDU1MjBiYzAzN2MxYjI3NDFiYmE5YmE2ZmViOTk0ODcwNTA1NTIxNGQ0NDM5YzI2YjE4YWM0MLrZ81U=: 00:25:05.481 22:41:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:05.481 22:41:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:05.481 22:41:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjViNGFhZGI5NTE2YjI2MmExNmExYjUwYjViNjkxNTNeHZYd: 00:25:05.481 22:41:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZGJjNjEyMmZmZDU1MjBiYzAzN2MxYjI3NDFiYmE5YmE2ZmViOTk0ODcwNTA1NTIxNGQ0NDM5YzI2YjE4YWM0MLrZ81U=: ]] 00:25:05.481 22:41:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZGJjNjEyMmZmZDU1MjBiYzAzN2MxYjI3NDFiYmE5YmE2ZmViOTk0ODcwNTA1NTIxNGQ0NDM5YzI2YjE4YWM0MLrZ81U=: 00:25:05.481 22:41:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:25:05.481 22:41:29 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:05.481 22:41:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:05.481 22:41:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:05.481 22:41:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:05.481 22:41:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:05.481 22:41:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:25:05.481 22:41:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:05.481 22:41:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:05.481 22:41:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:05.481 22:41:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:05.481 22:41:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:05.481 22:41:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:05.481 22:41:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:05.481 22:41:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:05.481 22:41:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:05.481 22:41:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:05.481 22:41:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:05.481 22:41:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:05.481 22:41:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:05.481 22:41:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:05.481 22:41:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:05.481 22:41:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:05.481 22:41:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:05.739 nvme0n1 00:25:05.739 22:41:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:05.739 22:41:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:05.739 22:41:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:05.739 22:41:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:05.739 22:41:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:05.739 22:41:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:05.739 22:41:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:05.739 22:41:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:05.739 22:41:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:05.739 22:41:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:05.739 22:41:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:05.739 22:41:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:25:05.739 22:41:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:25:05.739 22:41:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:05.739 22:41:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:05.739 22:41:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:05.739 22:41:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:05.739 22:41:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTMzNGRiN2E3ZjFlNDlmNjVkZGY3OWI5NjA5MjZhNGI4ZmIwYmI1OGY4MzE1NTA0WfDSCg==: 00:25:05.739 22:41:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Yjk2NjJiNzE3NTdmNTlhZmIyZjI2YWJkN2M0ZDZmYjdjYWE3MjFiZjA5Y2U3M2M3nyY9qw==: 00:25:05.739 22:41:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:05.739 22:41:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:05.739 22:41:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTMzNGRiN2E3ZjFlNDlmNjVkZGY3OWI5NjA5MjZhNGI4ZmIwYmI1OGY4MzE1NTA0WfDSCg==: 00:25:05.739 22:41:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Yjk2NjJiNzE3NTdmNTlhZmIyZjI2YWJkN2M0ZDZmYjdjYWE3MjFiZjA5Y2U3M2M3nyY9qw==: ]] 00:25:05.739 22:41:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Yjk2NjJiNzE3NTdmNTlhZmIyZjI2YWJkN2M0ZDZmYjdjYWE3MjFiZjA5Y2U3M2M3nyY9qw==: 00:25:05.739 22:41:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:25:05.739 22:41:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:05.739 22:41:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:05.739 22:41:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:05.739 22:41:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:05.739 22:41:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:05.739 22:41:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:25:05.739 22:41:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:05.739 22:41:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:05.739 22:41:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:05.739 22:41:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:05.739 22:41:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:05.739 22:41:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:05.739 22:41:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:05.739 22:41:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:05.739 22:41:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:05.739 22:41:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:05.739 22:41:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:05.739 22:41:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:05.739 22:41:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:05.739 22:41:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:05.739 22:41:29 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:05.739 22:41:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:05.739 22:41:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:05.997 nvme0n1 00:25:05.997 22:41:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:05.997 22:41:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:05.997 22:41:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:05.997 22:41:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:05.997 22:41:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:05.997 22:41:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:05.997 22:41:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:05.997 22:41:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:05.997 22:41:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:05.997 22:41:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:05.997 22:41:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:05.997 22:41:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:05.997 22:41:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:25:05.997 22:41:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:05.997 22:41:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:05.997 22:41:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:05.997 22:41:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:05.997 22:41:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjY5Yzc2NjNmZTFhNDY1Y2IwNWYyMGI4OGJkOGU5ZmFYNj0I: 00:25:05.997 22:41:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:N2VhZGNhNDNmMTlmMDczZDkwYTQ2MWZjYjAxNjBkYzfY9AqL: 00:25:05.997 22:41:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:05.997 22:41:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:05.997 22:41:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjY5Yzc2NjNmZTFhNDY1Y2IwNWYyMGI4OGJkOGU5ZmFYNj0I: 00:25:05.997 22:41:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:N2VhZGNhNDNmMTlmMDczZDkwYTQ2MWZjYjAxNjBkYzfY9AqL: ]] 00:25:05.997 22:41:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:N2VhZGNhNDNmMTlmMDczZDkwYTQ2MWZjYjAxNjBkYzfY9AqL: 00:25:05.997 22:41:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:25:05.997 22:41:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:05.997 22:41:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:05.997 22:41:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:05.997 22:41:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:05.997 22:41:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:05.997 22:41:29 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:25:05.997 22:41:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:05.997 22:41:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:05.997 22:41:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:05.997 22:41:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:05.997 22:41:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:05.997 22:41:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:05.997 22:41:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:05.997 22:41:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:05.997 22:41:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:05.997 22:41:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:05.997 22:41:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:05.997 22:41:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:05.997 22:41:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:05.997 22:41:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:05.998 22:41:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:05.998 22:41:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:05.998 22:41:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:06.256 nvme0n1 00:25:06.256 22:41:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:06.256 22:41:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:06.256 22:41:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:06.256 22:41:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:06.256 22:41:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:06.256 22:41:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:06.256 22:41:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:06.256 22:41:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:06.256 22:41:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:06.256 22:41:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:06.515 22:41:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:06.515 22:41:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:06.515 22:41:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:25:06.515 22:41:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:06.515 22:41:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:06.515 22:41:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:06.515 22:41:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 
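[annotation] Between iterations the trace verifies that the authenticated connect actually produced a controller, then tears it down: bdev_nvme_get_controllers is piped through jq to extract the controller name, the result is matched against nvme0 (the \n\v\m\e\0 form is just how xtrace renders the quoted right-hand side of a [[ == ]] comparison, escaping each character so it matches literally rather than as a glob), and bdev_nvme_detach_controller removes it. A condensed sketch of that verify-and-detach step, under the same assumed rpc_cmd wrapper as above:

    # Confirm the DH-CHAP attach produced controller "nvme0", then detach it.
    name=$(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name')
    [[ $name == nvme0 ]]   # xtrace prints this as: [[ nvme0 == \n\v\m\e\0 ]]
    rpc_cmd bdev_nvme_detach_controller nvme0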
00:25:06.516 22:41:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OTI0NzVkOTJkMzgyZDFkNjg3MGI2NTUxYTJjNTdlM2ZiNzk5MTYxY2E4OGE3ODNlzZxB2w==: 00:25:06.516 22:41:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTg3N2QzNWNhNTFiMWM4MjkwMTdjZmViYTQyNTA4YzOhZDvm: 00:25:06.516 22:41:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:06.516 22:41:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:06.516 22:41:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OTI0NzVkOTJkMzgyZDFkNjg3MGI2NTUxYTJjNTdlM2ZiNzk5MTYxY2E4OGE3ODNlzZxB2w==: 00:25:06.516 22:41:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTg3N2QzNWNhNTFiMWM4MjkwMTdjZmViYTQyNTA4YzOhZDvm: ]] 00:25:06.516 22:41:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTg3N2QzNWNhNTFiMWM4MjkwMTdjZmViYTQyNTA4YzOhZDvm: 00:25:06.516 22:41:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:25:06.516 22:41:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:06.516 22:41:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:06.516 22:41:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:06.516 22:41:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:06.516 22:41:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:06.516 22:41:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:25:06.516 22:41:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:06.516 22:41:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:06.516 22:41:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:06.516 22:41:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:06.516 22:41:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:06.516 22:41:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:06.516 22:41:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:06.516 22:41:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:06.516 22:41:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:06.516 22:41:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:06.516 22:41:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:06.516 22:41:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:06.516 22:41:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:06.516 22:41:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:06.516 22:41:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:06.516 22:41:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:06.516 22:41:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:06.776 nvme0n1 00:25:06.776 22:41:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:06.776 22:41:30 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:06.776 22:41:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:06.776 22:41:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:06.776 22:41:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:06.776 22:41:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:06.776 22:41:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:06.776 22:41:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:06.776 22:41:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:06.776 22:41:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:06.776 22:41:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:06.776 22:41:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:06.776 22:41:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:25:06.776 22:41:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:06.776 22:41:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:06.776 22:41:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:06.776 22:41:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:06.776 22:41:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YmE5Njg0NGQwMjdhZjU0YzFjZWMwMjBkMGNhNGNiMGE4NWUzMGFhNTc1NmQxZTJiODNhNGZlZjBjMmRlOTY2MnBN5AU=: 00:25:06.776 22:41:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:06.776 22:41:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:06.776 22:41:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:06.776 22:41:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YmE5Njg0NGQwMjdhZjU0YzFjZWMwMjBkMGNhNGNiMGE4NWUzMGFhNTc1NmQxZTJiODNhNGZlZjBjMmRlOTY2MnBN5AU=: 00:25:06.776 22:41:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:06.776 22:41:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:25:06.776 22:41:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:06.776 22:41:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:06.776 22:41:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:06.776 22:41:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:06.776 22:41:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:06.776 22:41:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:25:06.776 22:41:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:06.776 22:41:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:06.776 22:41:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:06.776 22:41:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:06.776 22:41:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:06.776 22:41:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:06.776 22:41:30 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # local -A ip_candidates 00:25:06.776 22:41:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:06.776 22:41:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:06.776 22:41:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:06.776 22:41:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:06.776 22:41:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:06.777 22:41:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:06.777 22:41:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:06.777 22:41:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:06.777 22:41:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:06.777 22:41:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:07.036 nvme0n1 00:25:07.036 22:41:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:07.036 22:41:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:07.036 22:41:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:07.036 22:41:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:07.036 22:41:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:07.036 22:41:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:07.036 22:41:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:07.036 22:41:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:07.036 22:41:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:07.036 22:41:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:07.036 22:41:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:07.036 22:41:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:07.036 22:41:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:07.036 22:41:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:25:07.036 22:41:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:07.036 22:41:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:07.036 22:41:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:07.036 22:41:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:07.036 22:41:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjViNGFhZGI5NTE2YjI2MmExNmExYjUwYjViNjkxNTNeHZYd: 00:25:07.036 22:41:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZGJjNjEyMmZmZDU1MjBiYzAzN2MxYjI3NDFiYmE5YmE2ZmViOTk0ODcwNTA1NTIxNGQ0NDM5YzI2YjE4YWM0MLrZ81U=: 00:25:07.036 22:41:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:07.036 22:41:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:07.036 22:41:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjViNGFhZGI5NTE2YjI2MmExNmExYjUwYjViNjkxNTNeHZYd: 00:25:07.036 22:41:30 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZGJjNjEyMmZmZDU1MjBiYzAzN2MxYjI3NDFiYmE5YmE2ZmViOTk0ODcwNTA1NTIxNGQ0NDM5YzI2YjE4YWM0MLrZ81U=: ]] 00:25:07.036 22:41:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZGJjNjEyMmZmZDU1MjBiYzAzN2MxYjI3NDFiYmE5YmE2ZmViOTk0ODcwNTA1NTIxNGQ0NDM5YzI2YjE4YWM0MLrZ81U=: 00:25:07.036 22:41:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:25:07.036 22:41:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:07.036 22:41:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:07.036 22:41:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:07.036 22:41:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:07.036 22:41:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:07.036 22:41:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:25:07.036 22:41:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:07.036 22:41:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:07.036 22:41:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:07.036 22:41:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:07.036 22:41:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:07.036 22:41:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:07.036 22:41:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:07.036 22:41:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:07.036 22:41:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:07.036 22:41:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:07.036 22:41:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:07.036 22:41:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:07.036 22:41:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:07.036 22:41:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:07.036 22:41:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:07.036 22:41:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:07.036 22:41:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:07.603 nvme0n1 00:25:07.603 22:41:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:07.603 22:41:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:07.603 22:41:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:07.603 22:41:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:07.603 22:41:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:07.603 22:41:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:07.603 22:41:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:07.603 
22:41:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:07.603 22:41:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:07.603 22:41:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:07.603 22:41:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:07.603 22:41:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:07.603 22:41:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:25:07.603 22:41:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:07.603 22:41:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:07.603 22:41:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:07.603 22:41:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:07.603 22:41:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTMzNGRiN2E3ZjFlNDlmNjVkZGY3OWI5NjA5MjZhNGI4ZmIwYmI1OGY4MzE1NTA0WfDSCg==: 00:25:07.603 22:41:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Yjk2NjJiNzE3NTdmNTlhZmIyZjI2YWJkN2M0ZDZmYjdjYWE3MjFiZjA5Y2U3M2M3nyY9qw==: 00:25:07.603 22:41:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:07.603 22:41:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:07.603 22:41:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTMzNGRiN2E3ZjFlNDlmNjVkZGY3OWI5NjA5MjZhNGI4ZmIwYmI1OGY4MzE1NTA0WfDSCg==: 00:25:07.603 22:41:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Yjk2NjJiNzE3NTdmNTlhZmIyZjI2YWJkN2M0ZDZmYjdjYWE3MjFiZjA5Y2U3M2M3nyY9qw==: ]] 00:25:07.603 22:41:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Yjk2NjJiNzE3NTdmNTlhZmIyZjI2YWJkN2M0ZDZmYjdjYWE3MjFiZjA5Y2U3M2M3nyY9qw==: 00:25:07.603 22:41:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:25:07.603 22:41:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:07.603 22:41:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:07.603 22:41:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:07.603 22:41:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:07.603 22:41:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:07.603 22:41:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:25:07.603 22:41:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:07.603 22:41:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:07.603 22:41:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:07.603 22:41:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:07.603 22:41:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:07.603 22:41:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:07.603 22:41:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:07.603 22:41:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:07.603 22:41:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:07.603 22:41:31 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:07.603 22:41:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:07.603 22:41:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:07.603 22:41:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:07.603 22:41:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:07.603 22:41:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:07.603 22:41:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:07.603 22:41:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:07.891 nvme0n1 00:25:07.891 22:41:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:07.891 22:41:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:07.891 22:41:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:07.891 22:41:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:07.891 22:41:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:07.891 22:41:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:07.891 22:41:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:07.891 22:41:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:07.891 22:41:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:07.891 22:41:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:07.891 22:41:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:07.891 22:41:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:07.891 22:41:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:25:07.891 22:41:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:07.891 22:41:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:07.891 22:41:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:07.891 22:41:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:07.891 22:41:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjY5Yzc2NjNmZTFhNDY1Y2IwNWYyMGI4OGJkOGU5ZmFYNj0I: 00:25:07.891 22:41:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:N2VhZGNhNDNmMTlmMDczZDkwYTQ2MWZjYjAxNjBkYzfY9AqL: 00:25:07.891 22:41:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:07.891 22:41:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:07.891 22:41:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjY5Yzc2NjNmZTFhNDY1Y2IwNWYyMGI4OGJkOGU5ZmFYNj0I: 00:25:07.891 22:41:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:N2VhZGNhNDNmMTlmMDczZDkwYTQ2MWZjYjAxNjBkYzfY9AqL: ]] 00:25:07.891 22:41:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:N2VhZGNhNDNmMTlmMDczZDkwYTQ2MWZjYjAxNjBkYzfY9AqL: 00:25:07.891 22:41:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:25:07.891 22:41:31 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:07.891 22:41:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:07.891 22:41:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:07.891 22:41:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:07.891 22:41:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:07.891 22:41:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:25:07.891 22:41:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:07.891 22:41:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:07.891 22:41:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:07.891 22:41:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:07.891 22:41:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:07.891 22:41:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:07.891 22:41:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:07.891 22:41:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:07.891 22:41:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:07.891 22:41:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:07.891 22:41:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:07.891 22:41:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:07.891 22:41:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:07.891 22:41:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:07.891 22:41:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:07.891 22:41:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:07.891 22:41:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:08.467 nvme0n1 00:25:08.467 22:41:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:08.467 22:41:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:08.468 22:41:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:08.468 22:41:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:08.468 22:41:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:08.468 22:41:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:08.468 22:41:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:08.468 22:41:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:08.468 22:41:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:08.468 22:41:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:08.468 22:41:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:08.468 22:41:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:08.468 
22:41:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:25:08.468 22:41:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:08.468 22:41:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:08.468 22:41:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:08.468 22:41:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:08.468 22:41:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OTI0NzVkOTJkMzgyZDFkNjg3MGI2NTUxYTJjNTdlM2ZiNzk5MTYxY2E4OGE3ODNlzZxB2w==: 00:25:08.468 22:41:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTg3N2QzNWNhNTFiMWM4MjkwMTdjZmViYTQyNTA4YzOhZDvm: 00:25:08.468 22:41:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:08.468 22:41:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:08.468 22:41:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OTI0NzVkOTJkMzgyZDFkNjg3MGI2NTUxYTJjNTdlM2ZiNzk5MTYxY2E4OGE3ODNlzZxB2w==: 00:25:08.468 22:41:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTg3N2QzNWNhNTFiMWM4MjkwMTdjZmViYTQyNTA4YzOhZDvm: ]] 00:25:08.468 22:41:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTg3N2QzNWNhNTFiMWM4MjkwMTdjZmViYTQyNTA4YzOhZDvm: 00:25:08.468 22:41:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:25:08.468 22:41:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:08.468 22:41:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:08.468 22:41:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:08.468 22:41:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:08.468 22:41:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:08.468 22:41:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:25:08.468 22:41:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:08.468 22:41:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:08.468 22:41:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:08.468 22:41:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:08.468 22:41:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:08.468 22:41:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:08.468 22:41:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:08.468 22:41:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:08.468 22:41:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:08.468 22:41:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:08.468 22:41:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:08.468 22:41:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:08.468 22:41:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:08.468 22:41:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:08.468 22:41:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:08.468 22:41:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:08.468 22:41:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:08.727 nvme0n1 00:25:08.727 22:41:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:08.727 22:41:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:08.727 22:41:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:08.727 22:41:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:08.727 22:41:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:08.727 22:41:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:08.727 22:41:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:08.727 22:41:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:08.727 22:41:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:08.727 22:41:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:08.727 22:41:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:08.727 22:41:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:08.727 22:41:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:25:08.727 22:41:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:08.727 22:41:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:08.727 22:41:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:08.727 22:41:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:08.727 22:41:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YmE5Njg0NGQwMjdhZjU0YzFjZWMwMjBkMGNhNGNiMGE4NWUzMGFhNTc1NmQxZTJiODNhNGZlZjBjMmRlOTY2MnBN5AU=: 00:25:08.727 22:41:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:08.727 22:41:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:08.727 22:41:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:08.727 22:41:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YmE5Njg0NGQwMjdhZjU0YzFjZWMwMjBkMGNhNGNiMGE4NWUzMGFhNTc1NmQxZTJiODNhNGZlZjBjMmRlOTY2MnBN5AU=: 00:25:08.727 22:41:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:08.727 22:41:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:25:08.727 22:41:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:08.727 22:41:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:08.727 22:41:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:08.727 22:41:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:08.727 22:41:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:08.727 22:41:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:25:08.727 22:41:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:08.727 22:41:32 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:25:08.727 22:41:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:08.727 22:41:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:08.727 22:41:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:08.727 22:41:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:08.727 22:41:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:08.727 22:41:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:08.727 22:41:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:08.727 22:41:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:08.727 22:41:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:08.727 22:41:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:08.727 22:41:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:08.727 22:41:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:08.727 22:41:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:08.728 22:41:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:08.728 22:41:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:09.295 nvme0n1 00:25:09.295 22:41:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:09.295 22:41:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:09.295 22:41:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:09.295 22:41:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:09.295 22:41:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:09.295 22:41:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:09.295 22:41:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:09.295 22:41:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:09.295 22:41:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:09.295 22:41:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:09.295 22:41:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:09.295 22:41:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:09.295 22:41:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:09.295 22:41:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:25:09.295 22:41:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:09.295 22:41:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:09.295 22:41:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:09.295 22:41:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:09.295 22:41:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjViNGFhZGI5NTE2YjI2MmExNmExYjUwYjViNjkxNTNeHZYd: 00:25:09.295 22:41:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:ZGJjNjEyMmZmZDU1MjBiYzAzN2MxYjI3NDFiYmE5YmE2ZmViOTk0ODcwNTA1NTIxNGQ0NDM5YzI2YjE4YWM0MLrZ81U=: 00:25:09.295 22:41:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:09.295 22:41:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:09.295 22:41:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjViNGFhZGI5NTE2YjI2MmExNmExYjUwYjViNjkxNTNeHZYd: 00:25:09.295 22:41:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZGJjNjEyMmZmZDU1MjBiYzAzN2MxYjI3NDFiYmE5YmE2ZmViOTk0ODcwNTA1NTIxNGQ0NDM5YzI2YjE4YWM0MLrZ81U=: ]] 00:25:09.295 22:41:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZGJjNjEyMmZmZDU1MjBiYzAzN2MxYjI3NDFiYmE5YmE2ZmViOTk0ODcwNTA1NTIxNGQ0NDM5YzI2YjE4YWM0MLrZ81U=: 00:25:09.295 22:41:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:25:09.295 22:41:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:09.295 22:41:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:09.295 22:41:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:09.295 22:41:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:09.296 22:41:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:09.296 22:41:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:25:09.296 22:41:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:09.296 22:41:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:09.296 22:41:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:09.296 22:41:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:09.296 22:41:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:09.296 22:41:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:09.296 22:41:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:09.296 22:41:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:09.296 22:41:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:09.296 22:41:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:09.296 22:41:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:09.296 22:41:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:09.296 22:41:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:09.296 22:41:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:09.296 22:41:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:09.296 22:41:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:09.296 22:41:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:09.862 nvme0n1 00:25:09.862 22:41:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:09.862 22:41:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:09.862 22:41:33 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:09.862 22:41:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:09.862 22:41:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:09.862 22:41:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:09.862 22:41:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:09.862 22:41:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:09.862 22:41:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:09.862 22:41:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:09.862 22:41:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:09.862 22:41:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:09.862 22:41:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:25:09.862 22:41:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:09.862 22:41:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:09.862 22:41:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:09.862 22:41:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:09.862 22:41:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTMzNGRiN2E3ZjFlNDlmNjVkZGY3OWI5NjA5MjZhNGI4ZmIwYmI1OGY4MzE1NTA0WfDSCg==: 00:25:09.862 22:41:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Yjk2NjJiNzE3NTdmNTlhZmIyZjI2YWJkN2M0ZDZmYjdjYWE3MjFiZjA5Y2U3M2M3nyY9qw==: 00:25:09.862 22:41:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:09.862 22:41:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:09.862 22:41:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTMzNGRiN2E3ZjFlNDlmNjVkZGY3OWI5NjA5MjZhNGI4ZmIwYmI1OGY4MzE1NTA0WfDSCg==: 00:25:09.862 22:41:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Yjk2NjJiNzE3NTdmNTlhZmIyZjI2YWJkN2M0ZDZmYjdjYWE3MjFiZjA5Y2U3M2M3nyY9qw==: ]] 00:25:09.862 22:41:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Yjk2NjJiNzE3NTdmNTlhZmIyZjI2YWJkN2M0ZDZmYjdjYWE3MjFiZjA5Y2U3M2M3nyY9qw==: 00:25:09.862 22:41:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:25:09.863 22:41:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:09.863 22:41:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:09.863 22:41:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:09.863 22:41:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:09.863 22:41:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:09.863 22:41:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:25:09.863 22:41:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:09.863 22:41:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:09.863 22:41:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:09.863 22:41:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:09.863 22:41:33 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:25:09.863 22:41:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:09.863 22:41:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:09.863 22:41:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:09.863 22:41:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:09.863 22:41:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:09.863 22:41:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:09.863 22:41:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:09.863 22:41:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:09.863 22:41:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:09.863 22:41:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:09.863 22:41:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:09.863 22:41:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:10.429 nvme0n1 00:25:10.429 22:41:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:10.429 22:41:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:10.429 22:41:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:10.429 22:41:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:10.429 22:41:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:10.429 22:41:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:10.689 22:41:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:10.689 22:41:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:10.689 22:41:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:10.689 22:41:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:10.689 22:41:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:10.689 22:41:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:10.689 22:41:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:25:10.689 22:41:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:10.689 22:41:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:10.689 22:41:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:10.689 22:41:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:10.689 22:41:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjY5Yzc2NjNmZTFhNDY1Y2IwNWYyMGI4OGJkOGU5ZmFYNj0I: 00:25:10.689 22:41:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:N2VhZGNhNDNmMTlmMDczZDkwYTQ2MWZjYjAxNjBkYzfY9AqL: 00:25:10.689 22:41:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:10.689 22:41:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:10.689 22:41:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:01:NjY5Yzc2NjNmZTFhNDY1Y2IwNWYyMGI4OGJkOGU5ZmFYNj0I: 00:25:10.689 22:41:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:N2VhZGNhNDNmMTlmMDczZDkwYTQ2MWZjYjAxNjBkYzfY9AqL: ]] 00:25:10.689 22:41:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:N2VhZGNhNDNmMTlmMDczZDkwYTQ2MWZjYjAxNjBkYzfY9AqL: 00:25:10.689 22:41:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:25:10.689 22:41:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:10.689 22:41:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:10.689 22:41:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:10.689 22:41:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:10.689 22:41:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:10.689 22:41:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:25:10.689 22:41:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:10.689 22:41:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:10.689 22:41:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:10.689 22:41:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:10.689 22:41:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:10.689 22:41:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:10.689 22:41:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:10.689 22:41:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:10.689 22:41:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:10.689 22:41:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:10.689 22:41:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:10.689 22:41:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:10.689 22:41:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:10.689 22:41:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:10.689 22:41:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:10.689 22:41:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:10.689 22:41:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:11.258 nvme0n1 00:25:11.258 22:41:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:11.258 22:41:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:11.258 22:41:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:11.258 22:41:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:11.258 22:41:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:11.258 22:41:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:11.258 22:41:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:11.258 
22:41:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:11.258 22:41:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:11.258 22:41:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:11.258 22:41:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:11.258 22:41:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:11.258 22:41:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:25:11.258 22:41:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:11.258 22:41:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:11.258 22:41:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:11.258 22:41:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:11.258 22:41:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OTI0NzVkOTJkMzgyZDFkNjg3MGI2NTUxYTJjNTdlM2ZiNzk5MTYxY2E4OGE3ODNlzZxB2w==: 00:25:11.258 22:41:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTg3N2QzNWNhNTFiMWM4MjkwMTdjZmViYTQyNTA4YzOhZDvm: 00:25:11.258 22:41:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:11.258 22:41:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:11.258 22:41:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OTI0NzVkOTJkMzgyZDFkNjg3MGI2NTUxYTJjNTdlM2ZiNzk5MTYxY2E4OGE3ODNlzZxB2w==: 00:25:11.258 22:41:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTg3N2QzNWNhNTFiMWM4MjkwMTdjZmViYTQyNTA4YzOhZDvm: ]] 00:25:11.258 22:41:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTg3N2QzNWNhNTFiMWM4MjkwMTdjZmViYTQyNTA4YzOhZDvm: 00:25:11.258 22:41:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:25:11.259 22:41:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:11.259 22:41:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:11.259 22:41:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:11.259 22:41:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:11.259 22:41:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:11.259 22:41:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:25:11.259 22:41:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:11.259 22:41:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:11.259 22:41:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:11.259 22:41:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:11.259 22:41:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:11.259 22:41:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:11.259 22:41:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:11.259 22:41:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:11.259 22:41:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:11.259 22:41:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
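The repeated nvmf/common.sh@741-755 xtrace above is the get_main_ns_ip helper resolving which address the host should dial for the transport under test. A minimal sketch of that logic, reconstructed from the trace alone (the TEST_TRANSPORT variable name and the return codes are assumptions; the trace only shows the literal values tcp and 10.0.0.1):

get_main_ns_ip() {
	local ip
	local -A ip_candidates=()
	# Map each transport to the env var holding its dial address.
	ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
	ip_candidates["tcp"]=NVMF_INITIATOR_IP
	[[ -z $TEST_TRANSPORT ]] && return 1                 # assumed variable name
	[[ -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
	ip=${ip_candidates[$TEST_TRANSPORT]}
	# Indirect expansion: NVMF_INITIATOR_IP -> 10.0.0.1 in this run.
	[[ -z ${!ip} ]] && return 1
	echo "${!ip}"
}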
00:25:11.259 22:41:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:11.259 22:41:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:11.259 22:41:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:11.259 22:41:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:11.259 22:41:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:11.259 22:41:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:11.259 22:41:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:11.827 nvme0n1 00:25:11.827 22:41:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:11.827 22:41:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:11.827 22:41:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:11.827 22:41:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:11.827 22:41:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:11.827 22:41:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:11.827 22:41:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:11.827 22:41:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:11.827 22:41:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:11.827 22:41:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:11.827 22:41:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:11.827 22:41:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:11.827 22:41:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:25:11.827 22:41:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:11.827 22:41:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:11.827 22:41:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:11.827 22:41:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:11.827 22:41:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YmE5Njg0NGQwMjdhZjU0YzFjZWMwMjBkMGNhNGNiMGE4NWUzMGFhNTc1NmQxZTJiODNhNGZlZjBjMmRlOTY2MnBN5AU=: 00:25:11.827 22:41:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:11.827 22:41:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:11.827 22:41:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:11.827 22:41:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YmE5Njg0NGQwMjdhZjU0YzFjZWMwMjBkMGNhNGNiMGE4NWUzMGFhNTc1NmQxZTJiODNhNGZlZjBjMmRlOTY2MnBN5AU=: 00:25:11.827 22:41:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:11.827 22:41:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:25:11.827 22:41:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:11.827 22:41:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:11.827 22:41:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:11.827 
22:41:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:11.827 22:41:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:11.827 22:41:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:25:11.827 22:41:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:11.827 22:41:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:11.827 22:41:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:11.828 22:41:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:11.828 22:41:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:11.828 22:41:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:11.828 22:41:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:11.828 22:41:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:11.828 22:41:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:11.828 22:41:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:11.828 22:41:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:11.828 22:41:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:11.828 22:41:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:11.828 22:41:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:11.828 22:41:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:11.828 22:41:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:11.828 22:41:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:12.396 nvme0n1 00:25:12.396 22:41:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:12.396 22:41:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:12.396 22:41:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:12.656 22:41:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:12.656 22:41:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:12.656 22:41:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:12.656 22:41:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:12.656 22:41:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:12.656 22:41:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:12.656 22:41:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:12.656 22:41:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:12.656 22:41:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:25:12.656 22:41:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:12.656 22:41:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:12.656 22:41:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe2048 0 00:25:12.656 22:41:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:12.656 22:41:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:12.656 22:41:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:12.656 22:41:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:12.656 22:41:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjViNGFhZGI5NTE2YjI2MmExNmExYjUwYjViNjkxNTNeHZYd: 00:25:12.656 22:41:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZGJjNjEyMmZmZDU1MjBiYzAzN2MxYjI3NDFiYmE5YmE2ZmViOTk0ODcwNTA1NTIxNGQ0NDM5YzI2YjE4YWM0MLrZ81U=: 00:25:12.656 22:41:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:12.656 22:41:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:12.656 22:41:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjViNGFhZGI5NTE2YjI2MmExNmExYjUwYjViNjkxNTNeHZYd: 00:25:12.656 22:41:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZGJjNjEyMmZmZDU1MjBiYzAzN2MxYjI3NDFiYmE5YmE2ZmViOTk0ODcwNTA1NTIxNGQ0NDM5YzI2YjE4YWM0MLrZ81U=: ]] 00:25:12.656 22:41:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZGJjNjEyMmZmZDU1MjBiYzAzN2MxYjI3NDFiYmE5YmE2ZmViOTk0ODcwNTA1NTIxNGQ0NDM5YzI2YjE4YWM0MLrZ81U=: 00:25:12.656 22:41:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:25:12.656 22:41:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:12.656 22:41:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:12.656 22:41:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:12.656 22:41:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:12.656 22:41:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:12.656 22:41:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:25:12.656 22:41:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:12.656 22:41:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:12.656 22:41:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:12.656 22:41:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:12.656 22:41:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:12.656 22:41:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:12.656 22:41:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:12.656 22:41:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:12.656 22:41:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:12.656 22:41:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:12.656 22:41:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:12.656 22:41:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:12.656 22:41:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:12.656 22:41:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:12.656 22:41:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller 
-b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:12.656 22:41:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:12.656 22:41:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:12.656 nvme0n1 00:25:12.656 22:41:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:12.656 22:41:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:12.656 22:41:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:12.656 22:41:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:12.656 22:41:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:12.656 22:41:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:12.916 22:41:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:12.916 22:41:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:12.916 22:41:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:12.916 22:41:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:12.916 22:41:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:12.916 22:41:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:12.916 22:41:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:25:12.916 22:41:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:12.916 22:41:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:12.916 22:41:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:12.916 22:41:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:12.916 22:41:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTMzNGRiN2E3ZjFlNDlmNjVkZGY3OWI5NjA5MjZhNGI4ZmIwYmI1OGY4MzE1NTA0WfDSCg==: 00:25:12.916 22:41:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Yjk2NjJiNzE3NTdmNTlhZmIyZjI2YWJkN2M0ZDZmYjdjYWE3MjFiZjA5Y2U3M2M3nyY9qw==: 00:25:12.916 22:41:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:12.916 22:41:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:12.916 22:41:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTMzNGRiN2E3ZjFlNDlmNjVkZGY3OWI5NjA5MjZhNGI4ZmIwYmI1OGY4MzE1NTA0WfDSCg==: 00:25:12.916 22:41:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Yjk2NjJiNzE3NTdmNTlhZmIyZjI2YWJkN2M0ZDZmYjdjYWE3MjFiZjA5Y2U3M2M3nyY9qw==: ]] 00:25:12.916 22:41:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Yjk2NjJiNzE3NTdmNTlhZmIyZjI2YWJkN2M0ZDZmYjdjYWE3MjFiZjA5Y2U3M2M3nyY9qw==: 00:25:12.916 22:41:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:25:12.916 22:41:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:12.916 22:41:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:12.916 22:41:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:12.916 22:41:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:12.916 22:41:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 
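Each connect_authenticate cycle in this log follows the same RPC sequence: restrict the host's DH-HMAC-CHAP negotiation to a single digest/dhgroup pair, attach with the key (and, where a controller key exists, the ctrlr key), confirm the controller enumerates, then detach. A hedged reconstruction from the host/auth.sh@55-65 trace above (the hostnqn/subnqn variables and the call to get_main_ns_ip are assumptions; the trace shows the literal NQNs and 10.0.0.1):

connect_authenticate() {
	local digest=$1 dhgroup=$2 keyid=$3
	# Pass --dhchap-ctrlr-key only when a controller key is defined for this keyid.
	local ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
	rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
	rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
		-a "$(get_main_ns_ip)" -s 4420 -q "$hostnqn" -n "$subnqn" \
		--dhchap-key "key$keyid" "${ckey[@]}"
	# The handshake passed only if the controller actually shows up by name.
	[[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
	rpc_cmd bdev_nvme_detach_controller nvme0
}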
00:25:12.916 22:41:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:25:12.916 22:41:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:12.916 22:41:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:12.916 22:41:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:12.916 22:41:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:12.916 22:41:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:12.916 22:41:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:12.916 22:41:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:12.916 22:41:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:12.916 22:41:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:12.916 22:41:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:12.916 22:41:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:12.916 22:41:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:12.916 22:41:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:12.916 22:41:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:12.916 22:41:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:12.916 22:41:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:12.916 22:41:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:12.916 nvme0n1 00:25:12.916 22:41:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:12.916 22:41:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:12.916 22:41:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:12.916 22:41:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:12.916 22:41:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:12.916 22:41:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:12.916 22:41:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:12.916 22:41:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:12.916 22:41:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:12.916 22:41:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:12.916 22:41:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:12.916 22:41:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:12.916 22:41:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:25:12.916 22:41:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:12.917 22:41:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:12.917 22:41:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:12.917 22:41:36 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@44 -- # keyid=2 00:25:12.917 22:41:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjY5Yzc2NjNmZTFhNDY1Y2IwNWYyMGI4OGJkOGU5ZmFYNj0I: 00:25:12.917 22:41:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:N2VhZGNhNDNmMTlmMDczZDkwYTQ2MWZjYjAxNjBkYzfY9AqL: 00:25:12.917 22:41:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:12.917 22:41:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:12.917 22:41:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjY5Yzc2NjNmZTFhNDY1Y2IwNWYyMGI4OGJkOGU5ZmFYNj0I: 00:25:12.917 22:41:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:N2VhZGNhNDNmMTlmMDczZDkwYTQ2MWZjYjAxNjBkYzfY9AqL: ]] 00:25:12.917 22:41:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:N2VhZGNhNDNmMTlmMDczZDkwYTQ2MWZjYjAxNjBkYzfY9AqL: 00:25:12.917 22:41:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:25:12.917 22:41:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:12.917 22:41:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:12.917 22:41:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:12.917 22:41:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:12.917 22:41:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:12.917 22:41:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:25:12.917 22:41:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:12.917 22:41:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:12.917 22:41:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:13.176 22:41:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:13.176 22:41:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:13.176 22:41:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:13.176 22:41:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:13.176 22:41:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:13.176 22:41:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:13.176 22:41:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:13.176 22:41:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:13.176 22:41:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:13.176 22:41:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:13.176 22:41:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:13.177 22:41:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:13.177 22:41:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:13.177 22:41:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:13.177 nvme0n1 00:25:13.177 22:41:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:13.177 22:41:37 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:13.177 22:41:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:13.177 22:41:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:13.177 22:41:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:13.177 22:41:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:13.177 22:41:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:13.177 22:41:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:13.177 22:41:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:13.177 22:41:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:13.177 22:41:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:13.177 22:41:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:13.177 22:41:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:25:13.177 22:41:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:13.177 22:41:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:13.177 22:41:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:13.177 22:41:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:13.177 22:41:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OTI0NzVkOTJkMzgyZDFkNjg3MGI2NTUxYTJjNTdlM2ZiNzk5MTYxY2E4OGE3ODNlzZxB2w==: 00:25:13.177 22:41:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTg3N2QzNWNhNTFiMWM4MjkwMTdjZmViYTQyNTA4YzOhZDvm: 00:25:13.177 22:41:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:13.177 22:41:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:13.177 22:41:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OTI0NzVkOTJkMzgyZDFkNjg3MGI2NTUxYTJjNTdlM2ZiNzk5MTYxY2E4OGE3ODNlzZxB2w==: 00:25:13.177 22:41:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTg3N2QzNWNhNTFiMWM4MjkwMTdjZmViYTQyNTA4YzOhZDvm: ]] 00:25:13.177 22:41:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTg3N2QzNWNhNTFiMWM4MjkwMTdjZmViYTQyNTA4YzOhZDvm: 00:25:13.177 22:41:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:25:13.177 22:41:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:13.177 22:41:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:13.177 22:41:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:13.177 22:41:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:13.177 22:41:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:13.177 22:41:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:25:13.177 22:41:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:13.177 22:41:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:13.177 22:41:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:13.177 22:41:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:13.177 22:41:37 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:25:13.177 22:41:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:13.177 22:41:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:13.177 22:41:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:13.177 22:41:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:13.177 22:41:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:13.177 22:41:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:13.177 22:41:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:13.177 22:41:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:13.177 22:41:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:13.177 22:41:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:13.177 22:41:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:13.177 22:41:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:13.437 nvme0n1 00:25:13.437 22:41:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:13.437 22:41:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:13.437 22:41:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:13.437 22:41:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:13.437 22:41:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:13.437 22:41:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:13.437 22:41:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:13.437 22:41:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:13.437 22:41:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:13.437 22:41:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:13.437 22:41:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:13.437 22:41:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:13.437 22:41:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:25:13.437 22:41:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:13.437 22:41:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:13.437 22:41:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:13.437 22:41:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:13.437 22:41:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YmE5Njg0NGQwMjdhZjU0YzFjZWMwMjBkMGNhNGNiMGE4NWUzMGFhNTc1NmQxZTJiODNhNGZlZjBjMmRlOTY2MnBN5AU=: 00:25:13.437 22:41:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:13.437 22:41:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:13.437 22:41:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:13.437 22:41:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:YmE5Njg0NGQwMjdhZjU0YzFjZWMwMjBkMGNhNGNiMGE4NWUzMGFhNTc1NmQxZTJiODNhNGZlZjBjMmRlOTY2MnBN5AU=: 00:25:13.437 22:41:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:13.437 22:41:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:25:13.437 22:41:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:13.437 22:41:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:13.437 22:41:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:13.437 22:41:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:13.437 22:41:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:13.437 22:41:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:25:13.437 22:41:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:13.437 22:41:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:13.437 22:41:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:13.437 22:41:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:13.437 22:41:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:13.437 22:41:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:13.437 22:41:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:13.437 22:41:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:13.437 22:41:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:13.437 22:41:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:13.437 22:41:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:13.437 22:41:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:13.437 22:41:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:13.437 22:41:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:13.437 22:41:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:13.437 22:41:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:13.437 22:41:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:13.696 nvme0n1 00:25:13.696 22:41:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:13.696 22:41:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:13.696 22:41:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:13.696 22:41:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:13.696 22:41:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:13.696 22:41:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:13.696 22:41:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:13.696 22:41:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:13.696 22:41:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- 
# xtrace_disable 00:25:13.696 22:41:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:13.696 22:41:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:13.696 22:41:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:13.696 22:41:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:13.696 22:41:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:25:13.696 22:41:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:13.696 22:41:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:13.696 22:41:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:13.696 22:41:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:13.696 22:41:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjViNGFhZGI5NTE2YjI2MmExNmExYjUwYjViNjkxNTNeHZYd: 00:25:13.696 22:41:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZGJjNjEyMmZmZDU1MjBiYzAzN2MxYjI3NDFiYmE5YmE2ZmViOTk0ODcwNTA1NTIxNGQ0NDM5YzI2YjE4YWM0MLrZ81U=: 00:25:13.696 22:41:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:13.696 22:41:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:13.696 22:41:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjViNGFhZGI5NTE2YjI2MmExNmExYjUwYjViNjkxNTNeHZYd: 00:25:13.696 22:41:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZGJjNjEyMmZmZDU1MjBiYzAzN2MxYjI3NDFiYmE5YmE2ZmViOTk0ODcwNTA1NTIxNGQ0NDM5YzI2YjE4YWM0MLrZ81U=: ]] 00:25:13.696 22:41:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZGJjNjEyMmZmZDU1MjBiYzAzN2MxYjI3NDFiYmE5YmE2ZmViOTk0ODcwNTA1NTIxNGQ0NDM5YzI2YjE4YWM0MLrZ81U=: 00:25:13.696 22:41:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:25:13.696 22:41:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:13.696 22:41:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:13.696 22:41:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:13.696 22:41:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:13.696 22:41:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:13.696 22:41:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:25:13.696 22:41:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:13.696 22:41:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:13.697 22:41:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:13.697 22:41:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:13.697 22:41:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:13.697 22:41:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:13.697 22:41:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:13.697 22:41:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:13.697 22:41:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:13.697 22:41:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
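(Annotation: the host/auth.sh@101 and @102 markers above show the run finishing the ffdhe2048 passes and advancing to the next DH group, ffdhe3072, restarting the key-index loop. Reconstructed from those markers alone, since the full loop body is not visible in this log, the driver presumably has this shape; sha384 is fixed throughout this stretch, with any enclosing digest loop out of frame.)

    for dhgroup in "${dhgroups[@]}"; do     # ffdhe2048, ffdhe3072, ffdhe4096, ffdhe6144, ...
        for keyid in "${!keys[@]}"; do      # key indices 0-4 in this run
            nvmet_auth_set_key sha384 "$dhgroup" "$keyid"    # program target secrets
            connect_authenticate sha384 "$dhgroup" "$keyid"  # attach, verify, detach
        done
    done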
00:25:13.697 22:41:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:13.697 22:41:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:13.697 22:41:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:13.697 22:41:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:13.697 22:41:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:13.697 22:41:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:13.697 22:41:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:13.956 nvme0n1 00:25:13.956 22:41:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:13.956 22:41:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:13.956 22:41:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:13.956 22:41:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:13.956 22:41:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:13.956 22:41:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:13.956 22:41:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:13.956 22:41:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:13.956 22:41:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:13.956 22:41:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:13.956 22:41:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:13.956 22:41:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:13.956 22:41:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:25:13.956 22:41:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:13.956 22:41:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:13.956 22:41:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:13.956 22:41:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:13.956 22:41:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTMzNGRiN2E3ZjFlNDlmNjVkZGY3OWI5NjA5MjZhNGI4ZmIwYmI1OGY4MzE1NTA0WfDSCg==: 00:25:13.957 22:41:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Yjk2NjJiNzE3NTdmNTlhZmIyZjI2YWJkN2M0ZDZmYjdjYWE3MjFiZjA5Y2U3M2M3nyY9qw==: 00:25:13.957 22:41:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:13.957 22:41:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:13.957 22:41:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTMzNGRiN2E3ZjFlNDlmNjVkZGY3OWI5NjA5MjZhNGI4ZmIwYmI1OGY4MzE1NTA0WfDSCg==: 00:25:13.957 22:41:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Yjk2NjJiNzE3NTdmNTlhZmIyZjI2YWJkN2M0ZDZmYjdjYWE3MjFiZjA5Y2U3M2M3nyY9qw==: ]] 00:25:13.957 22:41:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Yjk2NjJiNzE3NTdmNTlhZmIyZjI2YWJkN2M0ZDZmYjdjYWE3MjFiZjA5Y2U3M2M3nyY9qw==: 00:25:13.957 22:41:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 
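(Annotation: each connect_authenticate pass, traced at host/auth.sh@55 through @65 above, exercises one digest/DH-group/key combination end to end through the SPDK RPC layer. A condensed sketch under two assumptions: rpc_cmd wraps scripts/rpc.py, and key0..key4/ckey0..ckey4 name DHHC-1 secrets registered with the bdev layer earlier in the run. The 10.0.0.1:4420 endpoint is the NVMF_INITIATOR_IP that get_main_ns_ip resolves in the trace.)

    connect_authenticate() {
        local digest=$1 dhgroup=$2 keyid=$3
        # pin the initiator to a single digest and DH group so only the
        # combination under test can be negotiated
        rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
        rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
            -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
            --dhchap-key "key${keyid}" \
            ${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}  # bidirectional only if a ckey exists
        # a successful DH-HMAC-CHAP handshake is what makes the controller
        # (and its nvme0n1 namespace) show up; verify by name, then tear down
        [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
        rpc_cmd bdev_nvme_detach_controller nvme0
    }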
00:25:13.957 22:41:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:13.957 22:41:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:13.957 22:41:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:13.957 22:41:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:13.957 22:41:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:13.957 22:41:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:25:13.957 22:41:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:13.957 22:41:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:13.957 22:41:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:13.957 22:41:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:13.957 22:41:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:13.957 22:41:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:13.957 22:41:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:13.957 22:41:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:13.957 22:41:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:13.957 22:41:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:13.957 22:41:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:13.957 22:41:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:13.957 22:41:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:13.957 22:41:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:13.957 22:41:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:13.957 22:41:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:13.957 22:41:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.216 nvme0n1 00:25:14.216 22:41:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:14.216 22:41:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:14.216 22:41:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:14.216 22:41:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:14.216 22:41:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.216 22:41:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:14.216 22:41:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:14.216 22:41:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:14.216 22:41:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:14.216 22:41:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.216 22:41:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:14.216 22:41:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 00:25:14.216 22:41:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:25:14.217 22:41:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:14.217 22:41:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:14.217 22:41:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:14.217 22:41:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:14.217 22:41:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjY5Yzc2NjNmZTFhNDY1Y2IwNWYyMGI4OGJkOGU5ZmFYNj0I: 00:25:14.217 22:41:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:N2VhZGNhNDNmMTlmMDczZDkwYTQ2MWZjYjAxNjBkYzfY9AqL: 00:25:14.217 22:41:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:14.217 22:41:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:14.217 22:41:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjY5Yzc2NjNmZTFhNDY1Y2IwNWYyMGI4OGJkOGU5ZmFYNj0I: 00:25:14.217 22:41:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:N2VhZGNhNDNmMTlmMDczZDkwYTQ2MWZjYjAxNjBkYzfY9AqL: ]] 00:25:14.217 22:41:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:N2VhZGNhNDNmMTlmMDczZDkwYTQ2MWZjYjAxNjBkYzfY9AqL: 00:25:14.217 22:41:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:25:14.217 22:41:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:14.217 22:41:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:14.217 22:41:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:14.217 22:41:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:14.217 22:41:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:14.217 22:41:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:25:14.217 22:41:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:14.217 22:41:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.217 22:41:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:14.217 22:41:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:14.217 22:41:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:14.217 22:41:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:14.217 22:41:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:14.217 22:41:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:14.217 22:41:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:14.217 22:41:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:14.217 22:41:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:14.217 22:41:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:14.217 22:41:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:14.217 22:41:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:14.217 22:41:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:14.217 22:41:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:14.217 22:41:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.476 nvme0n1 00:25:14.476 22:41:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:14.476 22:41:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:14.476 22:41:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:14.476 22:41:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:14.476 22:41:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.476 22:41:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:14.476 22:41:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:14.476 22:41:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:14.476 22:41:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:14.476 22:41:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.476 22:41:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:14.476 22:41:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:14.476 22:41:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:25:14.476 22:41:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:14.476 22:41:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:14.476 22:41:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:14.476 22:41:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:14.476 22:41:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OTI0NzVkOTJkMzgyZDFkNjg3MGI2NTUxYTJjNTdlM2ZiNzk5MTYxY2E4OGE3ODNlzZxB2w==: 00:25:14.476 22:41:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTg3N2QzNWNhNTFiMWM4MjkwMTdjZmViYTQyNTA4YzOhZDvm: 00:25:14.476 22:41:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:14.476 22:41:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:14.476 22:41:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OTI0NzVkOTJkMzgyZDFkNjg3MGI2NTUxYTJjNTdlM2ZiNzk5MTYxY2E4OGE3ODNlzZxB2w==: 00:25:14.476 22:41:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTg3N2QzNWNhNTFiMWM4MjkwMTdjZmViYTQyNTA4YzOhZDvm: ]] 00:25:14.476 22:41:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTg3N2QzNWNhNTFiMWM4MjkwMTdjZmViYTQyNTA4YzOhZDvm: 00:25:14.476 22:41:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:25:14.476 22:41:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:14.476 22:41:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:14.476 22:41:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:14.476 22:41:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:14.476 22:41:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:14.476 22:41:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:25:14.476 22:41:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:14.476 22:41:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.476 22:41:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:14.476 22:41:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:14.476 22:41:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:14.476 22:41:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:14.476 22:41:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:14.476 22:41:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:14.476 22:41:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:14.476 22:41:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:14.476 22:41:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:14.476 22:41:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:14.476 22:41:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:14.476 22:41:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:14.476 22:41:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:14.476 22:41:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:14.476 22:41:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.735 nvme0n1 00:25:14.735 22:41:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:14.735 22:41:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:14.735 22:41:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:14.735 22:41:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:14.735 22:41:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.735 22:41:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:14.735 22:41:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:14.735 22:41:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:14.735 22:41:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:14.735 22:41:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.735 22:41:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:14.735 22:41:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:14.735 22:41:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:25:14.735 22:41:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:14.735 22:41:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:14.735 22:41:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:14.735 22:41:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:14.735 22:41:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:YmE5Njg0NGQwMjdhZjU0YzFjZWMwMjBkMGNhNGNiMGE4NWUzMGFhNTc1NmQxZTJiODNhNGZlZjBjMmRlOTY2MnBN5AU=: 00:25:14.735 22:41:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:14.735 22:41:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:14.735 22:41:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:14.735 22:41:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YmE5Njg0NGQwMjdhZjU0YzFjZWMwMjBkMGNhNGNiMGE4NWUzMGFhNTc1NmQxZTJiODNhNGZlZjBjMmRlOTY2MnBN5AU=: 00:25:14.735 22:41:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:14.735 22:41:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:25:14.735 22:41:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:14.735 22:41:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:14.735 22:41:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:14.735 22:41:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:14.735 22:41:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:14.735 22:41:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:25:14.735 22:41:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:14.735 22:41:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.735 22:41:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:14.735 22:41:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:14.735 22:41:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:14.735 22:41:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:14.735 22:41:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:14.735 22:41:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:14.735 22:41:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:14.735 22:41:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:14.735 22:41:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:14.735 22:41:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:14.735 22:41:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:14.735 22:41:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:14.735 22:41:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:14.736 22:41:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:14.736 22:41:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.995 nvme0n1 00:25:14.995 22:41:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:14.995 22:41:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:14.995 22:41:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:14.995 22:41:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:14.995 22:41:38 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.995 22:41:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:14.995 22:41:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:14.995 22:41:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:14.995 22:41:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:14.995 22:41:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.995 22:41:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:14.995 22:41:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:14.995 22:41:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:14.995 22:41:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:25:14.995 22:41:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:14.995 22:41:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:14.995 22:41:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:14.995 22:41:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:14.995 22:41:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjViNGFhZGI5NTE2YjI2MmExNmExYjUwYjViNjkxNTNeHZYd: 00:25:14.995 22:41:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZGJjNjEyMmZmZDU1MjBiYzAzN2MxYjI3NDFiYmE5YmE2ZmViOTk0ODcwNTA1NTIxNGQ0NDM5YzI2YjE4YWM0MLrZ81U=: 00:25:14.995 22:41:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:14.995 22:41:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:14.995 22:41:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjViNGFhZGI5NTE2YjI2MmExNmExYjUwYjViNjkxNTNeHZYd: 00:25:14.995 22:41:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZGJjNjEyMmZmZDU1MjBiYzAzN2MxYjI3NDFiYmE5YmE2ZmViOTk0ODcwNTA1NTIxNGQ0NDM5YzI2YjE4YWM0MLrZ81U=: ]] 00:25:14.995 22:41:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZGJjNjEyMmZmZDU1MjBiYzAzN2MxYjI3NDFiYmE5YmE2ZmViOTk0ODcwNTA1NTIxNGQ0NDM5YzI2YjE4YWM0MLrZ81U=: 00:25:14.995 22:41:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:25:14.995 22:41:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:14.995 22:41:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:14.995 22:41:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:14.995 22:41:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:14.995 22:41:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:14.995 22:41:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:25:14.995 22:41:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:14.995 22:41:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.995 22:41:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:14.995 22:41:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:14.995 22:41:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:14.995 22:41:38 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:25:14.995 22:41:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:14.995 22:41:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:14.995 22:41:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:14.995 22:41:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:14.995 22:41:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:14.995 22:41:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:14.995 22:41:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:14.995 22:41:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:14.995 22:41:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:14.995 22:41:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:14.995 22:41:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:15.254 nvme0n1 00:25:15.254 22:41:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:15.254 22:41:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:15.254 22:41:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:15.254 22:41:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:15.254 22:41:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:15.254 22:41:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:15.254 22:41:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:15.254 22:41:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:15.254 22:41:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:15.254 22:41:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:15.254 22:41:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:15.254 22:41:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:15.254 22:41:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:25:15.254 22:41:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:15.254 22:41:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:15.254 22:41:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:15.254 22:41:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:15.254 22:41:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTMzNGRiN2E3ZjFlNDlmNjVkZGY3OWI5NjA5MjZhNGI4ZmIwYmI1OGY4MzE1NTA0WfDSCg==: 00:25:15.254 22:41:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Yjk2NjJiNzE3NTdmNTlhZmIyZjI2YWJkN2M0ZDZmYjdjYWE3MjFiZjA5Y2U3M2M3nyY9qw==: 00:25:15.254 22:41:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:15.254 22:41:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:15.254 22:41:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:NTMzNGRiN2E3ZjFlNDlmNjVkZGY3OWI5NjA5MjZhNGI4ZmIwYmI1OGY4MzE1NTA0WfDSCg==: 00:25:15.254 22:41:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Yjk2NjJiNzE3NTdmNTlhZmIyZjI2YWJkN2M0ZDZmYjdjYWE3MjFiZjA5Y2U3M2M3nyY9qw==: ]] 00:25:15.254 22:41:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Yjk2NjJiNzE3NTdmNTlhZmIyZjI2YWJkN2M0ZDZmYjdjYWE3MjFiZjA5Y2U3M2M3nyY9qw==: 00:25:15.254 22:41:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:25:15.254 22:41:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:15.254 22:41:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:15.254 22:41:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:15.254 22:41:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:15.254 22:41:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:15.254 22:41:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:25:15.254 22:41:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:15.254 22:41:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:15.254 22:41:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:15.254 22:41:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:15.254 22:41:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:15.254 22:41:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:15.254 22:41:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:15.254 22:41:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:15.254 22:41:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:15.254 22:41:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:15.254 22:41:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:15.254 22:41:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:15.254 22:41:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:15.254 22:41:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:15.254 22:41:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:15.254 22:41:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:15.254 22:41:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:15.512 nvme0n1 00:25:15.512 22:41:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:15.512 22:41:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:15.512 22:41:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:15.512 22:41:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:15.512 22:41:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:15.512 22:41:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:15.512 22:41:39 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:15.512 22:41:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:15.512 22:41:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:15.512 22:41:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:15.512 22:41:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:15.512 22:41:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:15.512 22:41:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:25:15.512 22:41:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:15.512 22:41:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:15.512 22:41:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:15.512 22:41:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:15.513 22:41:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjY5Yzc2NjNmZTFhNDY1Y2IwNWYyMGI4OGJkOGU5ZmFYNj0I: 00:25:15.513 22:41:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:N2VhZGNhNDNmMTlmMDczZDkwYTQ2MWZjYjAxNjBkYzfY9AqL: 00:25:15.513 22:41:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:15.513 22:41:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:15.513 22:41:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjY5Yzc2NjNmZTFhNDY1Y2IwNWYyMGI4OGJkOGU5ZmFYNj0I: 00:25:15.513 22:41:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:N2VhZGNhNDNmMTlmMDczZDkwYTQ2MWZjYjAxNjBkYzfY9AqL: ]] 00:25:15.513 22:41:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:N2VhZGNhNDNmMTlmMDczZDkwYTQ2MWZjYjAxNjBkYzfY9AqL: 00:25:15.513 22:41:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:25:15.513 22:41:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:15.513 22:41:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:15.513 22:41:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:15.513 22:41:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:15.513 22:41:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:15.513 22:41:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:25:15.513 22:41:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:15.513 22:41:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:15.513 22:41:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:15.513 22:41:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:15.513 22:41:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:15.513 22:41:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:15.513 22:41:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:15.513 22:41:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:15.513 22:41:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:15.513 22:41:39 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:15.513 22:41:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:15.513 22:41:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:15.513 22:41:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:15.513 22:41:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:15.513 22:41:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:15.513 22:41:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:15.513 22:41:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:15.770 nvme0n1 00:25:15.770 22:41:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:15.770 22:41:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:15.770 22:41:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:15.770 22:41:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:15.770 22:41:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:15.770 22:41:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:15.770 22:41:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:15.770 22:41:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:15.770 22:41:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:15.770 22:41:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:16.029 22:41:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:16.029 22:41:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:16.029 22:41:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:25:16.029 22:41:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:16.029 22:41:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:16.029 22:41:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:16.029 22:41:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:16.029 22:41:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OTI0NzVkOTJkMzgyZDFkNjg3MGI2NTUxYTJjNTdlM2ZiNzk5MTYxY2E4OGE3ODNlzZxB2w==: 00:25:16.029 22:41:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTg3N2QzNWNhNTFiMWM4MjkwMTdjZmViYTQyNTA4YzOhZDvm: 00:25:16.029 22:41:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:16.029 22:41:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:16.029 22:41:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OTI0NzVkOTJkMzgyZDFkNjg3MGI2NTUxYTJjNTdlM2ZiNzk5MTYxY2E4OGE3ODNlzZxB2w==: 00:25:16.029 22:41:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTg3N2QzNWNhNTFiMWM4MjkwMTdjZmViYTQyNTA4YzOhZDvm: ]] 00:25:16.029 22:41:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTg3N2QzNWNhNTFiMWM4MjkwMTdjZmViYTQyNTA4YzOhZDvm: 00:25:16.029 22:41:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:25:16.029 22:41:39 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:16.029 22:41:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:16.029 22:41:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:16.029 22:41:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:16.029 22:41:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:16.029 22:41:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:25:16.029 22:41:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:16.029 22:41:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:16.029 22:41:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:16.029 22:41:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:16.029 22:41:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:16.029 22:41:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:16.029 22:41:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:16.029 22:41:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:16.029 22:41:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:16.029 22:41:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:16.029 22:41:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:16.029 22:41:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:16.029 22:41:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:16.029 22:41:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:16.029 22:41:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:16.029 22:41:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:16.029 22:41:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:16.288 nvme0n1 00:25:16.288 22:41:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:16.288 22:41:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:16.288 22:41:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:16.288 22:41:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:16.288 22:41:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:16.288 22:41:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:16.288 22:41:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:16.288 22:41:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:16.288 22:41:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:16.288 22:41:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:16.288 22:41:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:16.288 22:41:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:25:16.288 22:41:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:25:16.288 22:41:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:16.288 22:41:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:16.288 22:41:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:16.288 22:41:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:16.288 22:41:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YmE5Njg0NGQwMjdhZjU0YzFjZWMwMjBkMGNhNGNiMGE4NWUzMGFhNTc1NmQxZTJiODNhNGZlZjBjMmRlOTY2MnBN5AU=: 00:25:16.288 22:41:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:16.288 22:41:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:16.288 22:41:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:16.288 22:41:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YmE5Njg0NGQwMjdhZjU0YzFjZWMwMjBkMGNhNGNiMGE4NWUzMGFhNTc1NmQxZTJiODNhNGZlZjBjMmRlOTY2MnBN5AU=: 00:25:16.288 22:41:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:16.288 22:41:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:25:16.288 22:41:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:16.288 22:41:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:16.288 22:41:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:16.288 22:41:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:16.288 22:41:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:16.288 22:41:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:25:16.288 22:41:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:16.288 22:41:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:16.288 22:41:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:16.288 22:41:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:16.288 22:41:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:16.288 22:41:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:16.288 22:41:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:16.288 22:41:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:16.288 22:41:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:16.288 22:41:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:16.288 22:41:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:16.288 22:41:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:16.288 22:41:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:16.288 22:41:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:16.288 22:41:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:16.288 22:41:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:25:16.288 22:41:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:16.547 nvme0n1 00:25:16.547 22:41:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:16.547 22:41:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:16.547 22:41:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:16.547 22:41:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:16.547 22:41:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:16.547 22:41:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:16.547 22:41:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:16.547 22:41:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:16.547 22:41:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:16.547 22:41:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:16.547 22:41:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:16.547 22:41:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:16.547 22:41:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:16.547 22:41:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:25:16.547 22:41:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:16.547 22:41:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:16.547 22:41:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:16.547 22:41:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:16.547 22:41:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjViNGFhZGI5NTE2YjI2MmExNmExYjUwYjViNjkxNTNeHZYd: 00:25:16.547 22:41:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZGJjNjEyMmZmZDU1MjBiYzAzN2MxYjI3NDFiYmE5YmE2ZmViOTk0ODcwNTA1NTIxNGQ0NDM5YzI2YjE4YWM0MLrZ81U=: 00:25:16.547 22:41:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:16.547 22:41:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:16.547 22:41:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjViNGFhZGI5NTE2YjI2MmExNmExYjUwYjViNjkxNTNeHZYd: 00:25:16.547 22:41:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZGJjNjEyMmZmZDU1MjBiYzAzN2MxYjI3NDFiYmE5YmE2ZmViOTk0ODcwNTA1NTIxNGQ0NDM5YzI2YjE4YWM0MLrZ81U=: ]] 00:25:16.547 22:41:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZGJjNjEyMmZmZDU1MjBiYzAzN2MxYjI3NDFiYmE5YmE2ZmViOTk0ODcwNTA1NTIxNGQ0NDM5YzI2YjE4YWM0MLrZ81U=: 00:25:16.547 22:41:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:25:16.547 22:41:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:16.547 22:41:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:16.547 22:41:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:16.547 22:41:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:16.547 22:41:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:16.547 22:41:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe6144 00:25:16.547 22:41:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:16.547 22:41:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:16.547 22:41:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:16.547 22:41:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:16.547 22:41:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:16.547 22:41:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:16.547 22:41:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:16.547 22:41:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:16.547 22:41:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:16.547 22:41:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:16.547 22:41:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:16.547 22:41:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:16.547 22:41:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:16.547 22:41:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:16.547 22:41:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:16.547 22:41:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:16.547 22:41:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:16.806 nvme0n1 00:25:16.806 22:41:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:16.806 22:41:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:16.806 22:41:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:16.806 22:41:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:16.806 22:41:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:17.064 22:41:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:17.064 22:41:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:17.064 22:41:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:17.064 22:41:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:17.064 22:41:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:17.064 22:41:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:17.064 22:41:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:17.064 22:41:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:25:17.064 22:41:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:17.064 22:41:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:17.064 22:41:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:17.064 22:41:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:17.064 22:41:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:NTMzNGRiN2E3ZjFlNDlmNjVkZGY3OWI5NjA5MjZhNGI4ZmIwYmI1OGY4MzE1NTA0WfDSCg==: 00:25:17.064 22:41:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Yjk2NjJiNzE3NTdmNTlhZmIyZjI2YWJkN2M0ZDZmYjdjYWE3MjFiZjA5Y2U3M2M3nyY9qw==: 00:25:17.064 22:41:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:17.064 22:41:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:17.064 22:41:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTMzNGRiN2E3ZjFlNDlmNjVkZGY3OWI5NjA5MjZhNGI4ZmIwYmI1OGY4MzE1NTA0WfDSCg==: 00:25:17.064 22:41:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Yjk2NjJiNzE3NTdmNTlhZmIyZjI2YWJkN2M0ZDZmYjdjYWE3MjFiZjA5Y2U3M2M3nyY9qw==: ]] 00:25:17.064 22:41:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Yjk2NjJiNzE3NTdmNTlhZmIyZjI2YWJkN2M0ZDZmYjdjYWE3MjFiZjA5Y2U3M2M3nyY9qw==: 00:25:17.064 22:41:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:25:17.064 22:41:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:17.064 22:41:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:17.064 22:41:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:17.064 22:41:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:17.064 22:41:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:17.064 22:41:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:25:17.064 22:41:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:17.064 22:41:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:17.064 22:41:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:17.064 22:41:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:17.064 22:41:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:17.064 22:41:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:17.064 22:41:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:17.064 22:41:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:17.064 22:41:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:17.064 22:41:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:17.064 22:41:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:17.064 22:41:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:17.064 22:41:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:17.064 22:41:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:17.064 22:41:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:17.064 22:41:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:17.064 22:41:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:17.323 nvme0n1 00:25:17.323 22:41:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:17.323 22:41:41 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:17.323 22:41:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:17.323 22:41:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:17.323 22:41:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:17.323 22:41:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:17.323 22:41:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:17.323 22:41:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:17.323 22:41:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:17.323 22:41:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:17.323 22:41:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:17.323 22:41:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:17.323 22:41:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:25:17.323 22:41:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:17.323 22:41:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:17.323 22:41:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:17.323 22:41:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:17.323 22:41:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjY5Yzc2NjNmZTFhNDY1Y2IwNWYyMGI4OGJkOGU5ZmFYNj0I: 00:25:17.323 22:41:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:N2VhZGNhNDNmMTlmMDczZDkwYTQ2MWZjYjAxNjBkYzfY9AqL: 00:25:17.323 22:41:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:17.323 22:41:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:17.323 22:41:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjY5Yzc2NjNmZTFhNDY1Y2IwNWYyMGI4OGJkOGU5ZmFYNj0I: 00:25:17.323 22:41:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:N2VhZGNhNDNmMTlmMDczZDkwYTQ2MWZjYjAxNjBkYzfY9AqL: ]] 00:25:17.323 22:41:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:N2VhZGNhNDNmMTlmMDczZDkwYTQ2MWZjYjAxNjBkYzfY9AqL: 00:25:17.323 22:41:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:25:17.323 22:41:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:17.323 22:41:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:17.323 22:41:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:17.323 22:41:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:17.323 22:41:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:17.323 22:41:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:25:17.323 22:41:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:17.323 22:41:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:17.323 22:41:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:17.323 22:41:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:17.323 22:41:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:25:17.323 22:41:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:17.323 22:41:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:17.323 22:41:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:17.323 22:41:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:17.323 22:41:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:17.323 22:41:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:17.323 22:41:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:17.323 22:41:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:17.323 22:41:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:17.323 22:41:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:17.323 22:41:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:17.323 22:41:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:17.891 nvme0n1 00:25:17.891 22:41:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:17.891 22:41:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:17.891 22:41:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:17.891 22:41:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:17.891 22:41:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:17.891 22:41:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:17.891 22:41:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:17.892 22:41:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:17.892 22:41:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:17.892 22:41:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:17.892 22:41:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:17.892 22:41:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:17.892 22:41:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:25:17.892 22:41:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:17.892 22:41:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:17.892 22:41:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:17.892 22:41:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:17.892 22:41:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OTI0NzVkOTJkMzgyZDFkNjg3MGI2NTUxYTJjNTdlM2ZiNzk5MTYxY2E4OGE3ODNlzZxB2w==: 00:25:17.892 22:41:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTg3N2QzNWNhNTFiMWM4MjkwMTdjZmViYTQyNTA4YzOhZDvm: 00:25:17.892 22:41:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:17.892 22:41:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:17.892 22:41:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:OTI0NzVkOTJkMzgyZDFkNjg3MGI2NTUxYTJjNTdlM2ZiNzk5MTYxY2E4OGE3ODNlzZxB2w==: 00:25:17.892 22:41:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTg3N2QzNWNhNTFiMWM4MjkwMTdjZmViYTQyNTA4YzOhZDvm: ]] 00:25:17.892 22:41:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTg3N2QzNWNhNTFiMWM4MjkwMTdjZmViYTQyNTA4YzOhZDvm: 00:25:17.892 22:41:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:25:17.892 22:41:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:17.892 22:41:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:17.892 22:41:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:17.892 22:41:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:17.892 22:41:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:17.892 22:41:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:25:17.892 22:41:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:17.892 22:41:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:17.892 22:41:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:17.892 22:41:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:17.892 22:41:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:17.892 22:41:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:17.892 22:41:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:17.892 22:41:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:17.892 22:41:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:17.892 22:41:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:17.892 22:41:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:17.892 22:41:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:17.892 22:41:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:17.892 22:41:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:17.892 22:41:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:17.892 22:41:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:17.892 22:41:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:18.461 nvme0n1 00:25:18.461 22:41:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:18.461 22:41:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:18.461 22:41:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:18.461 22:41:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:18.461 22:41:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:18.461 22:41:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:18.461 22:41:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 
]] 00:25:18.461 22:41:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:18.461 22:41:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:18.461 22:41:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:18.461 22:41:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:18.461 22:41:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:18.461 22:41:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:25:18.461 22:41:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:18.461 22:41:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:18.461 22:41:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:18.461 22:41:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:18.461 22:41:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YmE5Njg0NGQwMjdhZjU0YzFjZWMwMjBkMGNhNGNiMGE4NWUzMGFhNTc1NmQxZTJiODNhNGZlZjBjMmRlOTY2MnBN5AU=: 00:25:18.461 22:41:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:18.461 22:41:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:18.461 22:41:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:18.461 22:41:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YmE5Njg0NGQwMjdhZjU0YzFjZWMwMjBkMGNhNGNiMGE4NWUzMGFhNTc1NmQxZTJiODNhNGZlZjBjMmRlOTY2MnBN5AU=: 00:25:18.461 22:41:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:18.461 22:41:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:25:18.461 22:41:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:18.461 22:41:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:18.461 22:41:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:18.461 22:41:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:18.461 22:41:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:18.461 22:41:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:25:18.461 22:41:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:18.461 22:41:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:18.461 22:41:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:18.461 22:41:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:18.461 22:41:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:18.461 22:41:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:18.461 22:41:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:18.461 22:41:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:18.461 22:41:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:18.461 22:41:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:18.461 22:41:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:18.461 22:41:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 
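The get_main_ns_ip entries in the trace here show how nvmf/common.sh resolves the address that each bdev_nvme_attach_controller call receives. A minimal sketch of that helper, reconstructed from the common.sh@741-755 entries; TEST_TRANSPORT as the variable carrying "tcp" and the ${!ip} indirection are assumptions inferred from the trace (NVMF_INITIATOR_IP expands to 10.0.0.1), not read from the source:

    get_main_ns_ip() {
        local ip
        local -A ip_candidates=()
        ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
        ip_candidates["tcp"]=NVMF_INITIATOR_IP
        # pick the name of the env var holding the address for this transport
        [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
        ip=${ip_candidates[$TEST_TRANSPORT]}
        # dereference the name; in this run NVMF_INITIATOR_IP -> 10.0.0.1
        [[ -z ${!ip} ]] && return 1
        echo "${!ip}"
    }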
00:25:18.461 22:41:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:18.461 22:41:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:18.461 22:41:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:18.461 22:41:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:18.461 22:41:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:18.720 nvme0n1 00:25:18.720 22:41:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:18.720 22:41:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:18.720 22:41:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:18.720 22:41:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:18.720 22:41:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:18.720 22:41:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:18.720 22:41:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:18.720 22:41:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:18.720 22:41:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:18.720 22:41:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:18.720 22:41:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:18.720 22:41:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:18.720 22:41:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:18.720 22:41:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:25:18.720 22:41:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:18.720 22:41:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:18.721 22:41:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:18.721 22:41:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:18.721 22:41:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjViNGFhZGI5NTE2YjI2MmExNmExYjUwYjViNjkxNTNeHZYd: 00:25:18.721 22:41:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZGJjNjEyMmZmZDU1MjBiYzAzN2MxYjI3NDFiYmE5YmE2ZmViOTk0ODcwNTA1NTIxNGQ0NDM5YzI2YjE4YWM0MLrZ81U=: 00:25:18.721 22:41:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:18.721 22:41:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:18.721 22:41:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjViNGFhZGI5NTE2YjI2MmExNmExYjUwYjViNjkxNTNeHZYd: 00:25:18.721 22:41:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZGJjNjEyMmZmZDU1MjBiYzAzN2MxYjI3NDFiYmE5YmE2ZmViOTk0ODcwNTA1NTIxNGQ0NDM5YzI2YjE4YWM0MLrZ81U=: ]] 00:25:18.721 22:41:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZGJjNjEyMmZmZDU1MjBiYzAzN2MxYjI3NDFiYmE5YmE2ZmViOTk0ODcwNTA1NTIxNGQ0NDM5YzI2YjE4YWM0MLrZ81U=: 00:25:18.721 22:41:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:25:18.721 22:41:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 
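Every (digest, dhgroup, keyid) combination produced by the for-loops in this trace runs the same host-side cycle, which starts again here for sha384/ffdhe8192/key0. Condensed from the host/auth.sh@55-65 entries, it looks roughly like the sketch below; rpc_cmd is the test suite's wrapper around SPDK's RPC interface (scripts/rpc.py), keys[]/ckeys[] hold the DHHC-1 secrets echoed into the target, and the real function body in host/auth.sh may differ in detail:

    connect_authenticate() {
        local digest=$1 dhgroup=$2 keyid=$3
        local ckey=()
        # pass a controller key only when one exists for this keyid (auth.sh@58)
        ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
        rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" \
            --dhchap-dhgroups "$dhgroup"
        rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
            -a "$(get_main_ns_ip)" -s 4420 \
            -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
            --dhchap-key "key${keyid}" "${ckey[@]}"
        # the attach only succeeds if DH-HMAC-CHAP completes; verify, tear down
        [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
        rpc_cmd bdev_nvme_detach_controller nvme0
    }

The paired target-side step, nvmet_auth_set_key (the echo 'hmac(sha384)', echo ffdhe8192 and echo DHHC-1:... entries), is consistent with writing the Linux nvmet host attributes dhchap_hash, dhchap_dhgroup and dhchap_key, presumably under /sys/kernel/config/nvmet/hosts/<hostnqn>/; the configfs paths themselves are elided from this trace.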
00:25:18.721 22:41:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:18.721 22:41:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:18.721 22:41:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:18.721 22:41:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:18.721 22:41:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:25:18.721 22:41:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:18.721 22:41:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:18.721 22:41:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:18.721 22:41:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:18.721 22:41:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:18.721 22:41:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:18.721 22:41:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:18.721 22:41:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:18.721 22:41:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:18.721 22:41:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:18.721 22:41:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:18.721 22:41:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:18.721 22:41:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:18.721 22:41:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:18.721 22:41:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:18.721 22:41:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:18.721 22:41:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:19.289 nvme0n1 00:25:19.289 22:41:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:19.289 22:41:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:19.289 22:41:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:19.290 22:41:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:19.290 22:41:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:19.548 22:41:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:19.548 22:41:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:19.548 22:41:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:19.548 22:41:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:19.548 22:41:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:19.548 22:41:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:19.548 22:41:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:19.548 22:41:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe8192 1 00:25:19.548 22:41:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:19.548 22:41:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:19.548 22:41:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:19.548 22:41:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:19.548 22:41:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTMzNGRiN2E3ZjFlNDlmNjVkZGY3OWI5NjA5MjZhNGI4ZmIwYmI1OGY4MzE1NTA0WfDSCg==: 00:25:19.548 22:41:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Yjk2NjJiNzE3NTdmNTlhZmIyZjI2YWJkN2M0ZDZmYjdjYWE3MjFiZjA5Y2U3M2M3nyY9qw==: 00:25:19.548 22:41:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:19.548 22:41:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:19.548 22:41:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTMzNGRiN2E3ZjFlNDlmNjVkZGY3OWI5NjA5MjZhNGI4ZmIwYmI1OGY4MzE1NTA0WfDSCg==: 00:25:19.548 22:41:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Yjk2NjJiNzE3NTdmNTlhZmIyZjI2YWJkN2M0ZDZmYjdjYWE3MjFiZjA5Y2U3M2M3nyY9qw==: ]] 00:25:19.548 22:41:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Yjk2NjJiNzE3NTdmNTlhZmIyZjI2YWJkN2M0ZDZmYjdjYWE3MjFiZjA5Y2U3M2M3nyY9qw==: 00:25:19.548 22:41:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:25:19.548 22:41:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:19.548 22:41:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:19.548 22:41:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:19.548 22:41:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:19.548 22:41:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:19.548 22:41:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:25:19.548 22:41:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:19.548 22:41:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:19.548 22:41:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:19.548 22:41:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:19.548 22:41:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:19.548 22:41:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:19.548 22:41:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:19.548 22:41:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:19.548 22:41:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:19.548 22:41:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:19.548 22:41:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:19.548 22:41:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:19.548 22:41:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:19.548 22:41:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:19.548 22:41:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:19.548 22:41:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:19.548 22:41:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:20.118 nvme0n1 00:25:20.118 22:41:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:20.118 22:41:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:20.118 22:41:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:20.118 22:41:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:20.118 22:41:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:20.118 22:41:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:20.118 22:41:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:20.118 22:41:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:20.118 22:41:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:20.118 22:41:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:20.118 22:41:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:20.118 22:41:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:20.118 22:41:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:25:20.118 22:41:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:20.118 22:41:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:20.118 22:41:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:20.118 22:41:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:20.118 22:41:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjY5Yzc2NjNmZTFhNDY1Y2IwNWYyMGI4OGJkOGU5ZmFYNj0I: 00:25:20.118 22:41:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:N2VhZGNhNDNmMTlmMDczZDkwYTQ2MWZjYjAxNjBkYzfY9AqL: 00:25:20.118 22:41:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:20.118 22:41:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:20.118 22:41:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjY5Yzc2NjNmZTFhNDY1Y2IwNWYyMGI4OGJkOGU5ZmFYNj0I: 00:25:20.118 22:41:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:N2VhZGNhNDNmMTlmMDczZDkwYTQ2MWZjYjAxNjBkYzfY9AqL: ]] 00:25:20.118 22:41:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:N2VhZGNhNDNmMTlmMDczZDkwYTQ2MWZjYjAxNjBkYzfY9AqL: 00:25:20.118 22:41:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:25:20.118 22:41:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:20.118 22:41:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:20.118 22:41:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:20.118 22:41:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:20.118 22:41:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:20.118 22:41:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe8192 00:25:20.118 22:41:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:20.118 22:41:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:20.118 22:41:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:20.118 22:41:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:20.118 22:41:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:20.118 22:41:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:20.118 22:41:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:20.118 22:41:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:20.118 22:41:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:20.118 22:41:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:20.118 22:41:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:20.118 22:41:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:20.118 22:41:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:20.118 22:41:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:20.118 22:41:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:20.118 22:41:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:20.118 22:41:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:20.687 nvme0n1 00:25:20.687 22:41:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:20.687 22:41:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:20.687 22:41:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:20.687 22:41:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:20.687 22:41:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:20.687 22:41:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:20.687 22:41:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:20.687 22:41:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:20.687 22:41:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:20.687 22:41:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:20.687 22:41:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:20.687 22:41:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:20.687 22:41:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:25:20.687 22:41:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:20.687 22:41:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:20.687 22:41:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:20.687 22:41:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:20.687 22:41:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:OTI0NzVkOTJkMzgyZDFkNjg3MGI2NTUxYTJjNTdlM2ZiNzk5MTYxY2E4OGE3ODNlzZxB2w==: 00:25:20.687 22:41:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTg3N2QzNWNhNTFiMWM4MjkwMTdjZmViYTQyNTA4YzOhZDvm: 00:25:20.687 22:41:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:20.687 22:41:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:20.687 22:41:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OTI0NzVkOTJkMzgyZDFkNjg3MGI2NTUxYTJjNTdlM2ZiNzk5MTYxY2E4OGE3ODNlzZxB2w==: 00:25:20.687 22:41:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTg3N2QzNWNhNTFiMWM4MjkwMTdjZmViYTQyNTA4YzOhZDvm: ]] 00:25:20.687 22:41:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTg3N2QzNWNhNTFiMWM4MjkwMTdjZmViYTQyNTA4YzOhZDvm: 00:25:20.687 22:41:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:25:20.687 22:41:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:20.687 22:41:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:20.687 22:41:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:20.687 22:41:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:20.687 22:41:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:20.687 22:41:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:25:20.687 22:41:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:20.687 22:41:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:20.687 22:41:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:20.687 22:41:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:20.687 22:41:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:20.687 22:41:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:20.687 22:41:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:20.687 22:41:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:20.687 22:41:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:20.687 22:41:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:20.687 22:41:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:20.687 22:41:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:20.687 22:41:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:20.687 22:41:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:20.687 22:41:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:20.687 22:41:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:20.687 22:41:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.286 nvme0n1 00:25:21.286 22:41:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:21.286 22:41:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:25:21.286 22:41:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:21.286 22:41:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:21.286 22:41:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.286 22:41:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:21.286 22:41:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:21.286 22:41:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:21.286 22:41:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:21.286 22:41:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.545 22:41:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:21.545 22:41:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:21.545 22:41:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:25:21.545 22:41:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:21.545 22:41:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:21.545 22:41:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:21.545 22:41:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:21.545 22:41:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YmE5Njg0NGQwMjdhZjU0YzFjZWMwMjBkMGNhNGNiMGE4NWUzMGFhNTc1NmQxZTJiODNhNGZlZjBjMmRlOTY2MnBN5AU=: 00:25:21.545 22:41:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:21.545 22:41:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:21.545 22:41:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:21.545 22:41:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YmE5Njg0NGQwMjdhZjU0YzFjZWMwMjBkMGNhNGNiMGE4NWUzMGFhNTc1NmQxZTJiODNhNGZlZjBjMmRlOTY2MnBN5AU=: 00:25:21.545 22:41:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:21.545 22:41:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:25:21.545 22:41:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:21.545 22:41:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:21.545 22:41:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:21.545 22:41:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:21.545 22:41:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:21.545 22:41:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:25:21.545 22:41:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:21.545 22:41:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:21.545 22:41:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:21.545 22:41:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:21.545 22:41:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:21.545 22:41:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:21.545 22:41:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:21.545 22:41:45 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:21.545 22:41:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:21.545 22:41:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:21.545 22:41:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:21.545 22:41:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:21.545 22:41:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:21.545 22:41:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:21.545 22:41:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:21.545 22:41:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:21.545 22:41:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.114 nvme0n1 00:25:22.114 22:41:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:22.114 22:41:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:22.114 22:41:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:22.114 22:41:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:22.114 22:41:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.114 22:41:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:22.114 22:41:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:22.114 22:41:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:22.114 22:41:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:22.114 22:41:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.114 22:41:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:22.114 22:41:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:25:22.114 22:41:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:22.114 22:41:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:22.114 22:41:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:25:22.114 22:41:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:22.114 22:41:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:22.114 22:41:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:22.114 22:41:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:22.114 22:41:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjViNGFhZGI5NTE2YjI2MmExNmExYjUwYjViNjkxNTNeHZYd: 00:25:22.115 22:41:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZGJjNjEyMmZmZDU1MjBiYzAzN2MxYjI3NDFiYmE5YmE2ZmViOTk0ODcwNTA1NTIxNGQ0NDM5YzI2YjE4YWM0MLrZ81U=: 00:25:22.115 22:41:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:22.115 22:41:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:22.115 22:41:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:MjViNGFhZGI5NTE2YjI2MmExNmExYjUwYjViNjkxNTNeHZYd: 00:25:22.115 22:41:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZGJjNjEyMmZmZDU1MjBiYzAzN2MxYjI3NDFiYmE5YmE2ZmViOTk0ODcwNTA1NTIxNGQ0NDM5YzI2YjE4YWM0MLrZ81U=: ]] 00:25:22.115 22:41:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZGJjNjEyMmZmZDU1MjBiYzAzN2MxYjI3NDFiYmE5YmE2ZmViOTk0ODcwNTA1NTIxNGQ0NDM5YzI2YjE4YWM0MLrZ81U=: 00:25:22.115 22:41:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:25:22.115 22:41:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:22.115 22:41:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:22.115 22:41:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:22.115 22:41:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:22.115 22:41:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:22.115 22:41:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:25:22.115 22:41:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:22.115 22:41:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.115 22:41:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:22.115 22:41:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:22.115 22:41:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:22.115 22:41:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:22.115 22:41:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:22.115 22:41:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:22.115 22:41:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:22.115 22:41:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:22.115 22:41:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:22.115 22:41:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:22.115 22:41:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:22.115 22:41:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:22.115 22:41:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:22.115 22:41:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:22.115 22:41:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.374 nvme0n1 00:25:22.374 22:41:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:22.374 22:41:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:22.374 22:41:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:22.374 22:41:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:22.374 22:41:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.374 22:41:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:22.374 22:41:46 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:22.374 22:41:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:22.374 22:41:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:22.374 22:41:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.374 22:41:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:22.374 22:41:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:22.374 22:41:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:25:22.374 22:41:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:22.375 22:41:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:22.375 22:41:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:22.375 22:41:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:22.375 22:41:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTMzNGRiN2E3ZjFlNDlmNjVkZGY3OWI5NjA5MjZhNGI4ZmIwYmI1OGY4MzE1NTA0WfDSCg==: 00:25:22.375 22:41:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Yjk2NjJiNzE3NTdmNTlhZmIyZjI2YWJkN2M0ZDZmYjdjYWE3MjFiZjA5Y2U3M2M3nyY9qw==: 00:25:22.375 22:41:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:22.375 22:41:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:22.375 22:41:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTMzNGRiN2E3ZjFlNDlmNjVkZGY3OWI5NjA5MjZhNGI4ZmIwYmI1OGY4MzE1NTA0WfDSCg==: 00:25:22.375 22:41:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Yjk2NjJiNzE3NTdmNTlhZmIyZjI2YWJkN2M0ZDZmYjdjYWE3MjFiZjA5Y2U3M2M3nyY9qw==: ]] 00:25:22.375 22:41:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Yjk2NjJiNzE3NTdmNTlhZmIyZjI2YWJkN2M0ZDZmYjdjYWE3MjFiZjA5Y2U3M2M3nyY9qw==: 00:25:22.375 22:41:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:25:22.375 22:41:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:22.375 22:41:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:22.375 22:41:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:22.375 22:41:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:22.375 22:41:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:22.375 22:41:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:25:22.375 22:41:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:22.375 22:41:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.375 22:41:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:22.375 22:41:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:22.375 22:41:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:22.375 22:41:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:22.375 22:41:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:22.375 22:41:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:22.375 22:41:46 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:22.375 22:41:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:22.375 22:41:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:22.375 22:41:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:22.375 22:41:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:22.375 22:41:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:22.375 22:41:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:22.375 22:41:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:22.375 22:41:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.375 nvme0n1 00:25:22.375 22:41:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:22.375 22:41:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:22.375 22:41:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:22.375 22:41:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:22.375 22:41:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.375 22:41:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:22.635 22:41:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:22.635 22:41:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:22.635 22:41:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:22.635 22:41:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.635 22:41:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:22.635 22:41:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:22.635 22:41:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:25:22.635 22:41:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:22.635 22:41:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:22.635 22:41:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:22.635 22:41:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:22.635 22:41:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjY5Yzc2NjNmZTFhNDY1Y2IwNWYyMGI4OGJkOGU5ZmFYNj0I: 00:25:22.635 22:41:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:N2VhZGNhNDNmMTlmMDczZDkwYTQ2MWZjYjAxNjBkYzfY9AqL: 00:25:22.635 22:41:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:22.635 22:41:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:22.635 22:41:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjY5Yzc2NjNmZTFhNDY1Y2IwNWYyMGI4OGJkOGU5ZmFYNj0I: 00:25:22.635 22:41:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:N2VhZGNhNDNmMTlmMDczZDkwYTQ2MWZjYjAxNjBkYzfY9AqL: ]] 00:25:22.635 22:41:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:N2VhZGNhNDNmMTlmMDczZDkwYTQ2MWZjYjAxNjBkYzfY9AqL: 00:25:22.635 22:41:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # 
connect_authenticate sha512 ffdhe2048 2 00:25:22.635 22:41:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:22.635 22:41:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:22.635 22:41:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:22.635 22:41:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:22.635 22:41:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:22.635 22:41:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:25:22.635 22:41:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:22.635 22:41:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.635 22:41:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:22.635 22:41:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:22.635 22:41:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:22.635 22:41:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:22.635 22:41:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:22.635 22:41:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:22.635 22:41:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:22.635 22:41:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:22.635 22:41:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:22.635 22:41:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:22.635 22:41:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:22.635 22:41:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:22.635 22:41:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:22.635 22:41:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:22.635 22:41:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.635 nvme0n1 00:25:22.635 22:41:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:22.635 22:41:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:22.635 22:41:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:22.635 22:41:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:22.635 22:41:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.635 22:41:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:22.635 22:41:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:22.635 22:41:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:22.635 22:41:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:22.635 22:41:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.895 22:41:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:22.895 22:41:46 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:22.895 22:41:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:25:22.895 22:41:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:22.895 22:41:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:22.895 22:41:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:22.895 22:41:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:22.895 22:41:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OTI0NzVkOTJkMzgyZDFkNjg3MGI2NTUxYTJjNTdlM2ZiNzk5MTYxY2E4OGE3ODNlzZxB2w==: 00:25:22.895 22:41:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTg3N2QzNWNhNTFiMWM4MjkwMTdjZmViYTQyNTA4YzOhZDvm: 00:25:22.895 22:41:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:22.895 22:41:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:22.895 22:41:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OTI0NzVkOTJkMzgyZDFkNjg3MGI2NTUxYTJjNTdlM2ZiNzk5MTYxY2E4OGE3ODNlzZxB2w==: 00:25:22.895 22:41:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTg3N2QzNWNhNTFiMWM4MjkwMTdjZmViYTQyNTA4YzOhZDvm: ]] 00:25:22.896 22:41:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTg3N2QzNWNhNTFiMWM4MjkwMTdjZmViYTQyNTA4YzOhZDvm: 00:25:22.896 22:41:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:25:22.896 22:41:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:22.896 22:41:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:22.896 22:41:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:22.896 22:41:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:22.896 22:41:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:22.896 22:41:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:25:22.896 22:41:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:22.896 22:41:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.896 22:41:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:22.896 22:41:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:22.896 22:41:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:22.896 22:41:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:22.896 22:41:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:22.896 22:41:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:22.896 22:41:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:22.896 22:41:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:22.896 22:41:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:22.896 22:41:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:22.896 22:41:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:22.896 22:41:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:22.896 22:41:46 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:22.896 22:41:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:22.896 22:41:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.896 nvme0n1 00:25:22.896 22:41:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:22.896 22:41:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:22.896 22:41:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:22.896 22:41:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:22.896 22:41:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.896 22:41:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:22.896 22:41:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:22.896 22:41:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:22.896 22:41:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:22.896 22:41:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.896 22:41:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:22.896 22:41:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:22.896 22:41:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:25:22.896 22:41:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:22.896 22:41:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:22.896 22:41:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:22.896 22:41:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:22.896 22:41:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YmE5Njg0NGQwMjdhZjU0YzFjZWMwMjBkMGNhNGNiMGE4NWUzMGFhNTc1NmQxZTJiODNhNGZlZjBjMmRlOTY2MnBN5AU=: 00:25:22.896 22:41:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:22.896 22:41:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:22.896 22:41:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:22.896 22:41:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YmE5Njg0NGQwMjdhZjU0YzFjZWMwMjBkMGNhNGNiMGE4NWUzMGFhNTc1NmQxZTJiODNhNGZlZjBjMmRlOTY2MnBN5AU=: 00:25:22.896 22:41:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:22.896 22:41:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:25:22.896 22:41:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:22.896 22:41:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:22.896 22:41:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:22.896 22:41:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:22.896 22:41:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:22.896 22:41:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:25:22.896 22:41:46 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:25:22.896 22:41:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.896 22:41:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:22.896 22:41:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:22.896 22:41:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:22.896 22:41:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:22.896 22:41:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:22.896 22:41:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:22.896 22:41:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:22.896 22:41:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:22.896 22:41:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:22.896 22:41:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:22.896 22:41:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:22.896 22:41:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:22.896 22:41:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:22.896 22:41:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:22.896 22:41:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.155 nvme0n1 00:25:23.156 22:41:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:23.156 22:41:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:23.156 22:41:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:23.156 22:41:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:23.156 22:41:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.156 22:41:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:23.156 22:41:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:23.156 22:41:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:23.156 22:41:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:23.156 22:41:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.156 22:41:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:23.156 22:41:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:23.156 22:41:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:23.156 22:41:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:25:23.156 22:41:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:23.156 22:41:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:23.156 22:41:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:23.156 22:41:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:23.156 22:41:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:MjViNGFhZGI5NTE2YjI2MmExNmExYjUwYjViNjkxNTNeHZYd: 00:25:23.156 22:41:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZGJjNjEyMmZmZDU1MjBiYzAzN2MxYjI3NDFiYmE5YmE2ZmViOTk0ODcwNTA1NTIxNGQ0NDM5YzI2YjE4YWM0MLrZ81U=: 00:25:23.156 22:41:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:23.156 22:41:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:23.156 22:41:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjViNGFhZGI5NTE2YjI2MmExNmExYjUwYjViNjkxNTNeHZYd: 00:25:23.156 22:41:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZGJjNjEyMmZmZDU1MjBiYzAzN2MxYjI3NDFiYmE5YmE2ZmViOTk0ODcwNTA1NTIxNGQ0NDM5YzI2YjE4YWM0MLrZ81U=: ]] 00:25:23.156 22:41:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZGJjNjEyMmZmZDU1MjBiYzAzN2MxYjI3NDFiYmE5YmE2ZmViOTk0ODcwNTA1NTIxNGQ0NDM5YzI2YjE4YWM0MLrZ81U=: 00:25:23.156 22:41:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:25:23.156 22:41:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:23.156 22:41:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:23.156 22:41:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:23.156 22:41:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:23.156 22:41:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:23.156 22:41:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:25:23.156 22:41:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:23.156 22:41:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.156 22:41:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:23.156 22:41:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:23.156 22:41:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:23.156 22:41:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:23.156 22:41:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:23.156 22:41:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:23.156 22:41:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:23.156 22:41:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:23.156 22:41:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:23.156 22:41:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:23.156 22:41:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:23.156 22:41:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:23.156 22:41:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:23.156 22:41:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:23.156 22:41:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.415 nvme0n1 00:25:23.415 22:41:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:23.415 
22:41:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:23.415 22:41:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:23.415 22:41:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:23.415 22:41:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.415 22:41:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:23.415 22:41:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:23.415 22:41:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:23.415 22:41:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:23.415 22:41:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.415 22:41:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:23.415 22:41:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:23.415 22:41:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:25:23.415 22:41:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:23.415 22:41:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:23.415 22:41:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:23.415 22:41:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:23.415 22:41:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTMzNGRiN2E3ZjFlNDlmNjVkZGY3OWI5NjA5MjZhNGI4ZmIwYmI1OGY4MzE1NTA0WfDSCg==: 00:25:23.415 22:41:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Yjk2NjJiNzE3NTdmNTlhZmIyZjI2YWJkN2M0ZDZmYjdjYWE3MjFiZjA5Y2U3M2M3nyY9qw==: 00:25:23.415 22:41:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:23.415 22:41:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:23.415 22:41:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTMzNGRiN2E3ZjFlNDlmNjVkZGY3OWI5NjA5MjZhNGI4ZmIwYmI1OGY4MzE1NTA0WfDSCg==: 00:25:23.415 22:41:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Yjk2NjJiNzE3NTdmNTlhZmIyZjI2YWJkN2M0ZDZmYjdjYWE3MjFiZjA5Y2U3M2M3nyY9qw==: ]] 00:25:23.415 22:41:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Yjk2NjJiNzE3NTdmNTlhZmIyZjI2YWJkN2M0ZDZmYjdjYWE3MjFiZjA5Y2U3M2M3nyY9qw==: 00:25:23.415 22:41:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:25:23.415 22:41:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:23.415 22:41:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:23.415 22:41:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:23.415 22:41:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:23.415 22:41:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:23.415 22:41:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:25:23.415 22:41:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:23.415 22:41:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.416 22:41:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:23.416 22:41:47 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:23.416 22:41:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:23.416 22:41:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:23.416 22:41:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:23.416 22:41:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:23.416 22:41:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:23.416 22:41:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:23.416 22:41:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:23.416 22:41:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:23.416 22:41:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:23.416 22:41:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:23.416 22:41:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:23.416 22:41:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:23.416 22:41:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.675 nvme0n1 00:25:23.675 22:41:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:23.675 22:41:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:23.675 22:41:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:23.675 22:41:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:23.675 22:41:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.675 22:41:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:23.675 22:41:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:23.675 22:41:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:23.675 22:41:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:23.675 22:41:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.675 22:41:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:23.675 22:41:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:23.675 22:41:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:25:23.675 22:41:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:23.675 22:41:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:23.675 22:41:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:23.675 22:41:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:23.675 22:41:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjY5Yzc2NjNmZTFhNDY1Y2IwNWYyMGI4OGJkOGU5ZmFYNj0I: 00:25:23.675 22:41:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:N2VhZGNhNDNmMTlmMDczZDkwYTQ2MWZjYjAxNjBkYzfY9AqL: 00:25:23.675 22:41:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:23.675 22:41:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 
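The nvmet_auth_set_key calls traced above (host/auth.sh@42-51) configure the kernel nvmet target side before each reconnect: a digest, a DH group, the host key and, when one exists, the bidirectional controller key are installed for the allowed host NQN. A minimal sketch of such a helper, assuming the standard nvmet configfs attributes under /sys/kernel/config and that keys/ckeys are the arrays of DHHC-1 secrets set up earlier in the test; the function name and paths here are illustrative, not the literal body of host/auth.sh:

nvmet_auth_set_key_sketch() {
	local digest=$1 dhgroup=$2 keyid=$3
	# Assumed allowed-host entry created earlier in the test setup.
	local host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
	echo "hmac(${digest})" > "${host}/dhchap_hash"    # e.g. hmac(sha512)
	echo "${dhgroup}" > "${host}/dhchap_dhgroup"      # e.g. ffdhe3072
	echo "${keys[keyid]}" > "${host}/dhchap_key"      # DHHC-1:xx:...: secret
	# The controller (bidirectional) key is optional; key 4 in this run has none.
	if [[ -n ${ckeys[keyid]} ]]; then
		echo "${ckeys[keyid]}" > "${host}/dhchap_ctrl_key"
	fi
}

The DHHC-1:NN:<base64>: strings being echoed are NVMe in-band authentication secrets: NN encodes how the secret is transformed before use (00 = used as-is, 01/02/03 = SHA-256/384/512), and the base64 payload is the raw key followed by a CRC-32, so the 48-character payload of key 0 decodes to a 32-byte secret plus 4 CRC bytes.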
00:25:23.675 22:41:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjY5Yzc2NjNmZTFhNDY1Y2IwNWYyMGI4OGJkOGU5ZmFYNj0I: 00:25:23.675 22:41:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:N2VhZGNhNDNmMTlmMDczZDkwYTQ2MWZjYjAxNjBkYzfY9AqL: ]] 00:25:23.675 22:41:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:N2VhZGNhNDNmMTlmMDczZDkwYTQ2MWZjYjAxNjBkYzfY9AqL: 00:25:23.675 22:41:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:25:23.676 22:41:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:23.676 22:41:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:23.676 22:41:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:23.676 22:41:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:23.676 22:41:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:23.676 22:41:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:25:23.676 22:41:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:23.676 22:41:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.676 22:41:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:23.676 22:41:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:23.676 22:41:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:23.676 22:41:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:23.676 22:41:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:23.676 22:41:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:23.676 22:41:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:23.676 22:41:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:23.676 22:41:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:23.676 22:41:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:23.676 22:41:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:23.676 22:41:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:23.676 22:41:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:23.676 22:41:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:23.676 22:41:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.935 nvme0n1 00:25:23.935 22:41:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:23.935 22:41:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:23.935 22:41:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:23.935 22:41:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:23.935 22:41:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.935 22:41:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:23.935 22:41:47 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:23.935 22:41:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:23.935 22:41:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:23.935 22:41:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.935 22:41:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:23.935 22:41:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:23.935 22:41:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:25:23.936 22:41:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:23.936 22:41:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:23.936 22:41:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:23.936 22:41:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:23.936 22:41:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OTI0NzVkOTJkMzgyZDFkNjg3MGI2NTUxYTJjNTdlM2ZiNzk5MTYxY2E4OGE3ODNlzZxB2w==: 00:25:23.936 22:41:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTg3N2QzNWNhNTFiMWM4MjkwMTdjZmViYTQyNTA4YzOhZDvm: 00:25:23.936 22:41:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:23.936 22:41:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:23.936 22:41:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OTI0NzVkOTJkMzgyZDFkNjg3MGI2NTUxYTJjNTdlM2ZiNzk5MTYxY2E4OGE3ODNlzZxB2w==: 00:25:23.936 22:41:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTg3N2QzNWNhNTFiMWM4MjkwMTdjZmViYTQyNTA4YzOhZDvm: ]] 00:25:23.936 22:41:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTg3N2QzNWNhNTFiMWM4MjkwMTdjZmViYTQyNTA4YzOhZDvm: 00:25:23.936 22:41:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:25:23.936 22:41:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:23.936 22:41:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:23.936 22:41:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:23.936 22:41:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:23.936 22:41:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:23.936 22:41:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:25:23.936 22:41:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:23.936 22:41:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.936 22:41:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:23.936 22:41:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:23.936 22:41:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:23.936 22:41:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:23.936 22:41:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:23.936 22:41:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:23.936 22:41:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 
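The get_main_ns_ip trace repeated before every attach (nvmf/common.sh@741-755) resolves which address the host should dial for the active transport. Read back from the xtrace output, the helper amounts to the following reconstruction, assuming TEST_TRANSPORT, NVMF_FIRST_TARGET_IP and NVMF_INITIATOR_IP are exported by the surrounding test environment:

get_main_ns_ip_sketch() {
	local ip
	local -A ip_candidates
	ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
	ip_candidates["tcp"]=NVMF_INITIATOR_IP
	# Bail out if no transport is set or it has no mapped variable name.
	[[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
	ip=${ip_candidates[$TEST_TRANSPORT]}
	[[ -z ${!ip} ]] && return 1  # indirect expansion; ${!ip} is 10.0.0.1 in this run
	echo "${!ip}"
}

The map stores variable names rather than values, and the indirect expansion ${!ip} dereferences the right one per transport, which is why every bdev_nvme_attach_controller in this log dials 10.0.0.1.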
00:25:23.936 22:41:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:23.936 22:41:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:23.936 22:41:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:23.936 22:41:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:23.936 22:41:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:23.936 22:41:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:23.936 22:41:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:23.936 22:41:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.195 nvme0n1 00:25:24.195 22:41:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:24.195 22:41:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:24.195 22:41:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:24.195 22:41:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:24.195 22:41:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.195 22:41:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:24.195 22:41:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:24.195 22:41:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:24.195 22:41:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:24.195 22:41:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.195 22:41:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:24.195 22:41:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:24.195 22:41:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:25:24.195 22:41:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:24.195 22:41:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:24.195 22:41:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:24.195 22:41:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:24.195 22:41:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YmE5Njg0NGQwMjdhZjU0YzFjZWMwMjBkMGNhNGNiMGE4NWUzMGFhNTc1NmQxZTJiODNhNGZlZjBjMmRlOTY2MnBN5AU=: 00:25:24.195 22:41:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:24.195 22:41:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:24.195 22:41:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:24.195 22:41:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YmE5Njg0NGQwMjdhZjU0YzFjZWMwMjBkMGNhNGNiMGE4NWUzMGFhNTc1NmQxZTJiODNhNGZlZjBjMmRlOTY2MnBN5AU=: 00:25:24.195 22:41:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:24.195 22:41:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:25:24.195 22:41:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:24.195 22:41:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:24.195 
22:41:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:24.195 22:41:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:24.195 22:41:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:24.195 22:41:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:25:24.195 22:41:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:24.195 22:41:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.195 22:41:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:24.195 22:41:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:24.195 22:41:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:24.195 22:41:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:24.195 22:41:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:24.195 22:41:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:24.195 22:41:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:24.195 22:41:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:24.195 22:41:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:24.196 22:41:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:24.196 22:41:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:24.196 22:41:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:24.196 22:41:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:24.196 22:41:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:24.196 22:41:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.457 nvme0n1 00:25:24.457 22:41:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:24.457 22:41:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:24.457 22:41:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:24.457 22:41:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:24.457 22:41:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.457 22:41:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:24.457 22:41:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:24.457 22:41:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:24.457 22:41:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:24.457 22:41:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.457 22:41:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:24.457 22:41:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:24.457 22:41:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:24.457 22:41:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key 
sha512 ffdhe4096 0 00:25:24.457 22:41:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:24.457 22:41:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:24.457 22:41:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:24.457 22:41:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:24.457 22:41:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjViNGFhZGI5NTE2YjI2MmExNmExYjUwYjViNjkxNTNeHZYd: 00:25:24.457 22:41:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZGJjNjEyMmZmZDU1MjBiYzAzN2MxYjI3NDFiYmE5YmE2ZmViOTk0ODcwNTA1NTIxNGQ0NDM5YzI2YjE4YWM0MLrZ81U=: 00:25:24.457 22:41:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:24.457 22:41:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:24.457 22:41:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjViNGFhZGI5NTE2YjI2MmExNmExYjUwYjViNjkxNTNeHZYd: 00:25:24.457 22:41:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZGJjNjEyMmZmZDU1MjBiYzAzN2MxYjI3NDFiYmE5YmE2ZmViOTk0ODcwNTA1NTIxNGQ0NDM5YzI2YjE4YWM0MLrZ81U=: ]] 00:25:24.457 22:41:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZGJjNjEyMmZmZDU1MjBiYzAzN2MxYjI3NDFiYmE5YmE2ZmViOTk0ODcwNTA1NTIxNGQ0NDM5YzI2YjE4YWM0MLrZ81U=: 00:25:24.457 22:41:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:25:24.457 22:41:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:24.457 22:41:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:24.457 22:41:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:24.457 22:41:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:24.457 22:41:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:24.457 22:41:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:25:24.457 22:41:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:24.457 22:41:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.457 22:41:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:24.457 22:41:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:24.457 22:41:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:24.457 22:41:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:24.457 22:41:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:24.457 22:41:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:24.457 22:41:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:24.457 22:41:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:24.457 22:41:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:24.457 22:41:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:24.457 22:41:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:24.457 22:41:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:24.457 22:41:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f 
ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:24.457 22:41:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:24.457 22:41:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.770 nvme0n1 00:25:24.770 22:41:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:24.770 22:41:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:24.770 22:41:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:24.770 22:41:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:24.770 22:41:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.770 22:41:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:24.770 22:41:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:24.770 22:41:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:24.770 22:41:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:24.770 22:41:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.770 22:41:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:24.770 22:41:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:24.770 22:41:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:25:24.770 22:41:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:24.770 22:41:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:24.770 22:41:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:24.771 22:41:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:24.771 22:41:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTMzNGRiN2E3ZjFlNDlmNjVkZGY3OWI5NjA5MjZhNGI4ZmIwYmI1OGY4MzE1NTA0WfDSCg==: 00:25:24.771 22:41:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Yjk2NjJiNzE3NTdmNTlhZmIyZjI2YWJkN2M0ZDZmYjdjYWE3MjFiZjA5Y2U3M2M3nyY9qw==: 00:25:24.771 22:41:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:24.771 22:41:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:24.771 22:41:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTMzNGRiN2E3ZjFlNDlmNjVkZGY3OWI5NjA5MjZhNGI4ZmIwYmI1OGY4MzE1NTA0WfDSCg==: 00:25:24.771 22:41:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Yjk2NjJiNzE3NTdmNTlhZmIyZjI2YWJkN2M0ZDZmYjdjYWE3MjFiZjA5Y2U3M2M3nyY9qw==: ]] 00:25:24.771 22:41:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Yjk2NjJiNzE3NTdmNTlhZmIyZjI2YWJkN2M0ZDZmYjdjYWE3MjFiZjA5Y2U3M2M3nyY9qw==: 00:25:24.771 22:41:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:25:24.771 22:41:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:24.771 22:41:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:24.771 22:41:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:24.771 22:41:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:24.771 22:41:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:24.771 22:41:48 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:25:24.771 22:41:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:24.771 22:41:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.771 22:41:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:24.771 22:41:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:24.771 22:41:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:24.771 22:41:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:24.771 22:41:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:24.771 22:41:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:24.771 22:41:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:24.771 22:41:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:24.771 22:41:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:24.771 22:41:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:24.771 22:41:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:24.771 22:41:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:24.771 22:41:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:24.771 22:41:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:24.771 22:41:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.031 nvme0n1 00:25:25.031 22:41:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:25.031 22:41:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:25.031 22:41:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:25.031 22:41:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:25.031 22:41:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.031 22:41:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:25.031 22:41:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:25.031 22:41:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:25.031 22:41:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:25.031 22:41:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.031 22:41:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:25.031 22:41:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:25.031 22:41:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:25:25.031 22:41:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:25.031 22:41:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:25.031 22:41:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:25.031 22:41:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 
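Every connect_authenticate iteration in this trace follows the same host-side cycle: pin the negotiable DH-HMAC-CHAP parameters, attach using the named per-keyid secrets from the SPDK keyring, confirm the controller actually appeared, then detach before the next digest/dhgroup/keyid combination. Condensed into one function for readability (rpc_cmd is the autotest wrapper around scripts/rpc.py; the function name and the explicit success check are illustrative, not the literal source):

connect_authenticate_sketch() {
	local digest=$1 dhgroup=$2 keyid=$3
	# Restrict the host to one digest/dhgroup so only the combination
	# under test can be negotiated.
	rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
	rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
		-q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
		--dhchap-key "key${keyid}" \
		${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}
	# The controller is only listed if the DH-HMAC-CHAP handshake succeeded.
	[[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
	rpc_cmd bdev_nvme_detach_controller nvme0
}

Running the full matrix this way exercises both unidirectional authentication (key 4 carries no controller key) and bidirectional authentication against the kernel target.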
00:25:25.031 22:41:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjY5Yzc2NjNmZTFhNDY1Y2IwNWYyMGI4OGJkOGU5ZmFYNj0I: 00:25:25.031 22:41:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:N2VhZGNhNDNmMTlmMDczZDkwYTQ2MWZjYjAxNjBkYzfY9AqL: 00:25:25.031 22:41:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:25.031 22:41:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:25.031 22:41:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjY5Yzc2NjNmZTFhNDY1Y2IwNWYyMGI4OGJkOGU5ZmFYNj0I: 00:25:25.031 22:41:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:N2VhZGNhNDNmMTlmMDczZDkwYTQ2MWZjYjAxNjBkYzfY9AqL: ]] 00:25:25.031 22:41:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:N2VhZGNhNDNmMTlmMDczZDkwYTQ2MWZjYjAxNjBkYzfY9AqL: 00:25:25.031 22:41:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:25:25.031 22:41:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:25.031 22:41:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:25.031 22:41:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:25.031 22:41:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:25.031 22:41:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:25.031 22:41:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:25:25.031 22:41:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:25.031 22:41:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.031 22:41:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:25.031 22:41:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:25.031 22:41:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:25.031 22:41:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:25.031 22:41:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:25.031 22:41:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:25.031 22:41:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:25.031 22:41:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:25.031 22:41:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:25.031 22:41:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:25.031 22:41:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:25.031 22:41:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:25.031 22:41:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:25.031 22:41:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:25.031 22:41:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.291 nvme0n1 00:25:25.291 22:41:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:25.291 22:41:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq 
-r '.[].name' 00:25:25.291 22:41:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:25.291 22:41:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:25.291 22:41:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.291 22:41:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:25.291 22:41:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:25.291 22:41:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:25.291 22:41:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:25.291 22:41:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.291 22:41:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:25.291 22:41:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:25.291 22:41:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:25:25.291 22:41:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:25.291 22:41:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:25.291 22:41:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:25.291 22:41:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:25.291 22:41:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OTI0NzVkOTJkMzgyZDFkNjg3MGI2NTUxYTJjNTdlM2ZiNzk5MTYxY2E4OGE3ODNlzZxB2w==: 00:25:25.291 22:41:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTg3N2QzNWNhNTFiMWM4MjkwMTdjZmViYTQyNTA4YzOhZDvm: 00:25:25.291 22:41:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:25.291 22:41:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:25.291 22:41:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OTI0NzVkOTJkMzgyZDFkNjg3MGI2NTUxYTJjNTdlM2ZiNzk5MTYxY2E4OGE3ODNlzZxB2w==: 00:25:25.291 22:41:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTg3N2QzNWNhNTFiMWM4MjkwMTdjZmViYTQyNTA4YzOhZDvm: ]] 00:25:25.291 22:41:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTg3N2QzNWNhNTFiMWM4MjkwMTdjZmViYTQyNTA4YzOhZDvm: 00:25:25.291 22:41:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:25:25.291 22:41:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:25.291 22:41:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:25.291 22:41:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:25.291 22:41:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:25.291 22:41:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:25.291 22:41:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:25:25.291 22:41:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:25.291 22:41:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.550 22:41:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:25.550 22:41:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:25.550 22:41:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local 
ip 00:25:25.550 22:41:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:25.550 22:41:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:25.550 22:41:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:25.550 22:41:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:25.550 22:41:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:25.550 22:41:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:25.550 22:41:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:25.550 22:41:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:25.550 22:41:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:25.550 22:41:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:25.550 22:41:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:25.550 22:41:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.550 nvme0n1 00:25:25.550 22:41:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:25.550 22:41:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:25.550 22:41:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:25.550 22:41:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:25.550 22:41:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.808 22:41:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:25.808 22:41:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:25.808 22:41:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:25.808 22:41:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:25.808 22:41:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.808 22:41:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:25.808 22:41:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:25.808 22:41:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:25:25.808 22:41:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:25.808 22:41:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:25.808 22:41:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:25.808 22:41:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:25.808 22:41:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YmE5Njg0NGQwMjdhZjU0YzFjZWMwMjBkMGNhNGNiMGE4NWUzMGFhNTc1NmQxZTJiODNhNGZlZjBjMmRlOTY2MnBN5AU=: 00:25:25.808 22:41:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:25.808 22:41:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:25.808 22:41:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:25.808 22:41:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:YmE5Njg0NGQwMjdhZjU0YzFjZWMwMjBkMGNhNGNiMGE4NWUzMGFhNTc1NmQxZTJiODNhNGZlZjBjMmRlOTY2MnBN5AU=: 00:25:25.808 22:41:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:25.808 22:41:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:25:25.808 22:41:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:25.808 22:41:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:25.808 22:41:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:25.808 22:41:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:25.808 22:41:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:25.808 22:41:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:25:25.808 22:41:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:25.808 22:41:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.808 22:41:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:25.808 22:41:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:25.808 22:41:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:25.808 22:41:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:25.808 22:41:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:25.808 22:41:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:25.808 22:41:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:25.808 22:41:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:25.808 22:41:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:25.808 22:41:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:25.808 22:41:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:25.808 22:41:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:25.808 22:41:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:25.808 22:41:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:25.808 22:41:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:26.067 nvme0n1 00:25:26.067 22:41:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:26.067 22:41:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:26.067 22:41:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:26.067 22:41:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:26.067 22:41:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:26.067 22:41:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:26.067 22:41:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:26.067 22:41:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:26.067 22:41:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- 
# xtrace_disable 00:25:26.067 22:41:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:26.067 22:41:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:26.067 22:41:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:26.067 22:41:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:26.067 22:41:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:25:26.067 22:41:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:26.067 22:41:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:26.067 22:41:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:26.067 22:41:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:26.067 22:41:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjViNGFhZGI5NTE2YjI2MmExNmExYjUwYjViNjkxNTNeHZYd: 00:25:26.067 22:41:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZGJjNjEyMmZmZDU1MjBiYzAzN2MxYjI3NDFiYmE5YmE2ZmViOTk0ODcwNTA1NTIxNGQ0NDM5YzI2YjE4YWM0MLrZ81U=: 00:25:26.067 22:41:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:26.067 22:41:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:26.067 22:41:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjViNGFhZGI5NTE2YjI2MmExNmExYjUwYjViNjkxNTNeHZYd: 00:25:26.067 22:41:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZGJjNjEyMmZmZDU1MjBiYzAzN2MxYjI3NDFiYmE5YmE2ZmViOTk0ODcwNTA1NTIxNGQ0NDM5YzI2YjE4YWM0MLrZ81U=: ]] 00:25:26.067 22:41:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZGJjNjEyMmZmZDU1MjBiYzAzN2MxYjI3NDFiYmE5YmE2ZmViOTk0ODcwNTA1NTIxNGQ0NDM5YzI2YjE4YWM0MLrZ81U=: 00:25:26.067 22:41:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:25:26.067 22:41:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:26.067 22:41:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:26.067 22:41:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:26.067 22:41:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:26.067 22:41:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:26.067 22:41:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:25:26.067 22:41:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:26.067 22:41:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:26.067 22:41:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:26.067 22:41:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:26.067 22:41:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:26.067 22:41:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:26.067 22:41:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:26.067 22:41:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:26.067 22:41:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:26.067 22:41:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
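
The get_main_ns_ip helper being traced here resolves which address to dial: an associative array maps each transport to the *name* of an environment variable (rdma -> NVMF_FIRST_TARGET_IP, tcp -> NVMF_INITIATOR_IP), and that name is then dereferenced with bash indirect expansion. A condensed sketch, assuming the transport is carried in a variable such as TEST_TRANSPORT and that the test environment exports the NVMF_* addresses (the real helper also runs the [[ -z ... ]] sanity checks visible in the trace):

    get_main_ns_ip() {
            local ip
            local -A ip_candidates
            ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
            ip_candidates["tcp"]=NVMF_INITIATOR_IP
            # Pick the variable name for the active transport, then
            # dereference it: ${!ip} expands to the value of the
            # variable whose name is stored in $ip.
            ip=${ip_candidates[$TEST_TRANSPORT]}
            echo "${!ip}"    # -> 10.0.0.1 in this run
    }
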
00:25:26.067 22:41:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:26.067 22:41:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:26.067 22:41:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:26.067 22:41:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:26.067 22:41:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:26.067 22:41:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:26.067 22:41:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:26.326 nvme0n1 00:25:26.326 22:41:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:26.326 22:41:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:26.326 22:41:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:26.326 22:41:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:26.326 22:41:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:26.585 22:41:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:26.585 22:41:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:26.585 22:41:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:26.585 22:41:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:26.585 22:41:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:26.585 22:41:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:26.585 22:41:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:26.585 22:41:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:25:26.585 22:41:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:26.585 22:41:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:26.585 22:41:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:26.585 22:41:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:26.585 22:41:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTMzNGRiN2E3ZjFlNDlmNjVkZGY3OWI5NjA5MjZhNGI4ZmIwYmI1OGY4MzE1NTA0WfDSCg==: 00:25:26.585 22:41:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Yjk2NjJiNzE3NTdmNTlhZmIyZjI2YWJkN2M0ZDZmYjdjYWE3MjFiZjA5Y2U3M2M3nyY9qw==: 00:25:26.585 22:41:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:26.585 22:41:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:26.585 22:41:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTMzNGRiN2E3ZjFlNDlmNjVkZGY3OWI5NjA5MjZhNGI4ZmIwYmI1OGY4MzE1NTA0WfDSCg==: 00:25:26.585 22:41:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Yjk2NjJiNzE3NTdmNTlhZmIyZjI2YWJkN2M0ZDZmYjdjYWE3MjFiZjA5Y2U3M2M3nyY9qw==: ]] 00:25:26.585 22:41:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Yjk2NjJiNzE3NTdmNTlhZmIyZjI2YWJkN2M0ZDZmYjdjYWE3MjFiZjA5Y2U3M2M3nyY9qw==: 00:25:26.585 22:41:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 
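
A reading aid for the check that follows every attach: bdev_nvme_get_controllers returns JSON, jq -r '.[].name' extracts the controller names, and the result is compared against nvme0. The backslashes in [[ nvme0 == \n\v\m\e\0 ]] are xtrace's rendering of a quoted right-hand side; inside [[ ]] an unquoted RHS would be treated as a glob pattern, so the quoting forces an exact literal comparison. In script form (rpc_cmd being the trace's rpc.py wrapper):

    # What the trace's "[[ nvme0 == \n\v\m\e\0 ]]" corresponds to:
    name=$(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name')
    if [[ $name == "nvme0" ]]; then    # quoted RHS => literal match, not glob
            rpc_cmd bdev_nvme_detach_controller nvme0
    fi
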
00:25:26.585 22:41:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:26.585 22:41:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:26.585 22:41:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:26.585 22:41:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:26.585 22:41:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:26.585 22:41:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:25:26.585 22:41:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:26.585 22:41:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:26.585 22:41:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:26.585 22:41:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:26.585 22:41:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:26.585 22:41:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:26.585 22:41:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:26.585 22:41:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:26.585 22:41:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:26.585 22:41:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:26.585 22:41:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:26.585 22:41:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:26.585 22:41:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:26.585 22:41:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:26.585 22:41:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:26.585 22:41:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:26.585 22:41:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:26.844 nvme0n1 00:25:26.844 22:41:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:26.844 22:41:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:26.844 22:41:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:26.844 22:41:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:26.844 22:41:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:26.844 22:41:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:26.844 22:41:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:26.844 22:41:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:26.844 22:41:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:26.844 22:41:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:27.103 22:41:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:27.103 22:41:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 00:25:27.103 22:41:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:25:27.103 22:41:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:27.103 22:41:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:27.103 22:41:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:27.103 22:41:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:27.103 22:41:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjY5Yzc2NjNmZTFhNDY1Y2IwNWYyMGI4OGJkOGU5ZmFYNj0I: 00:25:27.103 22:41:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:N2VhZGNhNDNmMTlmMDczZDkwYTQ2MWZjYjAxNjBkYzfY9AqL: 00:25:27.103 22:41:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:27.103 22:41:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:27.103 22:41:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjY5Yzc2NjNmZTFhNDY1Y2IwNWYyMGI4OGJkOGU5ZmFYNj0I: 00:25:27.103 22:41:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:N2VhZGNhNDNmMTlmMDczZDkwYTQ2MWZjYjAxNjBkYzfY9AqL: ]] 00:25:27.103 22:41:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:N2VhZGNhNDNmMTlmMDczZDkwYTQ2MWZjYjAxNjBkYzfY9AqL: 00:25:27.103 22:41:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:25:27.103 22:41:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:27.103 22:41:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:27.103 22:41:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:27.103 22:41:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:27.103 22:41:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:27.103 22:41:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:25:27.103 22:41:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:27.103 22:41:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:27.103 22:41:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:27.103 22:41:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:27.103 22:41:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:27.103 22:41:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:27.103 22:41:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:27.103 22:41:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:27.103 22:41:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:27.103 22:41:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:27.103 22:41:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:27.103 22:41:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:27.103 22:41:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:27.103 22:41:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:27.103 22:41:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:27.103 22:41:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:27.103 22:41:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:27.361 nvme0n1 00:25:27.361 22:41:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:27.361 22:41:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:27.361 22:41:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:27.361 22:41:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:27.361 22:41:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:27.361 22:41:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:27.361 22:41:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:27.361 22:41:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:27.361 22:41:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:27.361 22:41:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:27.361 22:41:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:27.361 22:41:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:27.361 22:41:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:25:27.361 22:41:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:27.361 22:41:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:27.361 22:41:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:27.361 22:41:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:27.361 22:41:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OTI0NzVkOTJkMzgyZDFkNjg3MGI2NTUxYTJjNTdlM2ZiNzk5MTYxY2E4OGE3ODNlzZxB2w==: 00:25:27.361 22:41:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTg3N2QzNWNhNTFiMWM4MjkwMTdjZmViYTQyNTA4YzOhZDvm: 00:25:27.361 22:41:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:27.361 22:41:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:27.361 22:41:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OTI0NzVkOTJkMzgyZDFkNjg3MGI2NTUxYTJjNTdlM2ZiNzk5MTYxY2E4OGE3ODNlzZxB2w==: 00:25:27.361 22:41:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTg3N2QzNWNhNTFiMWM4MjkwMTdjZmViYTQyNTA4YzOhZDvm: ]] 00:25:27.361 22:41:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTg3N2QzNWNhNTFiMWM4MjkwMTdjZmViYTQyNTA4YzOhZDvm: 00:25:27.361 22:41:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:25:27.361 22:41:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:27.361 22:41:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:27.361 22:41:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:27.361 22:41:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:27.361 22:41:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:27.361 22:41:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:25:27.361 22:41:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:27.361 22:41:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:27.361 22:41:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:27.362 22:41:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:27.362 22:41:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:27.362 22:41:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:27.362 22:41:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:27.362 22:41:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:27.362 22:41:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:27.362 22:41:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:27.362 22:41:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:27.362 22:41:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:27.362 22:41:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:27.362 22:41:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:27.362 22:41:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:27.362 22:41:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:27.362 22:41:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:27.930 nvme0n1 00:25:27.930 22:41:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:27.930 22:41:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:27.930 22:41:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:27.930 22:41:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:27.930 22:41:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:27.930 22:41:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:27.930 22:41:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:27.930 22:41:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:27.930 22:41:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:27.930 22:41:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:27.930 22:41:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:27.930 22:41:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:27.930 22:41:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:25:27.930 22:41:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:27.930 22:41:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:27.930 22:41:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:27.930 22:41:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:27.930 22:41:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:YmE5Njg0NGQwMjdhZjU0YzFjZWMwMjBkMGNhNGNiMGE4NWUzMGFhNTc1NmQxZTJiODNhNGZlZjBjMmRlOTY2MnBN5AU=: 00:25:27.930 22:41:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:27.930 22:41:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:27.930 22:41:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:27.930 22:41:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YmE5Njg0NGQwMjdhZjU0YzFjZWMwMjBkMGNhNGNiMGE4NWUzMGFhNTc1NmQxZTJiODNhNGZlZjBjMmRlOTY2MnBN5AU=: 00:25:27.930 22:41:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:27.930 22:41:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:25:27.930 22:41:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:27.930 22:41:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:27.930 22:41:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:27.930 22:41:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:27.930 22:41:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:27.930 22:41:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:25:27.930 22:41:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:27.930 22:41:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:27.930 22:41:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:27.930 22:41:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:27.930 22:41:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:27.930 22:41:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:27.930 22:41:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:27.930 22:41:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:27.930 22:41:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:27.930 22:41:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:27.930 22:41:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:27.930 22:41:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:27.930 22:41:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:27.930 22:41:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:27.930 22:41:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:27.930 22:41:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:27.930 22:41:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:28.190 nvme0n1 00:25:28.190 22:41:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:28.190 22:41:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:28.190 22:41:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:28.190 22:41:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:28.190 22:41:52 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:28.190 22:41:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:28.190 22:41:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:28.190 22:41:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:28.190 22:41:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:28.190 22:41:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:28.190 22:41:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:28.190 22:41:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:28.190 22:41:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:28.190 22:41:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:25:28.190 22:41:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:28.190 22:41:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:28.190 22:41:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:28.190 22:41:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:28.190 22:41:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjViNGFhZGI5NTE2YjI2MmExNmExYjUwYjViNjkxNTNeHZYd: 00:25:28.190 22:41:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZGJjNjEyMmZmZDU1MjBiYzAzN2MxYjI3NDFiYmE5YmE2ZmViOTk0ODcwNTA1NTIxNGQ0NDM5YzI2YjE4YWM0MLrZ81U=: 00:25:28.190 22:41:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:28.190 22:41:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:28.190 22:41:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjViNGFhZGI5NTE2YjI2MmExNmExYjUwYjViNjkxNTNeHZYd: 00:25:28.190 22:41:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZGJjNjEyMmZmZDU1MjBiYzAzN2MxYjI3NDFiYmE5YmE2ZmViOTk0ODcwNTA1NTIxNGQ0NDM5YzI2YjE4YWM0MLrZ81U=: ]] 00:25:28.190 22:41:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZGJjNjEyMmZmZDU1MjBiYzAzN2MxYjI3NDFiYmE5YmE2ZmViOTk0ODcwNTA1NTIxNGQ0NDM5YzI2YjE4YWM0MLrZ81U=: 00:25:28.190 22:41:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:25:28.190 22:41:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:28.190 22:41:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:28.190 22:41:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:28.190 22:41:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:28.190 22:41:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:28.190 22:41:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:25:28.190 22:41:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:28.190 22:41:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:28.449 22:41:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:28.449 22:41:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:28.449 22:41:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:28.449 22:41:52 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:25:28.449 22:41:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:28.449 22:41:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:28.449 22:41:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:28.449 22:41:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:28.449 22:41:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:28.449 22:41:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:28.449 22:41:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:28.449 22:41:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:28.449 22:41:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:28.449 22:41:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:28.449 22:41:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:29.018 nvme0n1 00:25:29.018 22:41:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:29.018 22:41:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:29.018 22:41:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:29.018 22:41:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:29.018 22:41:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:29.018 22:41:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:29.018 22:41:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:29.018 22:41:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:29.018 22:41:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:29.018 22:41:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:29.018 22:41:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:29.018 22:41:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:29.018 22:41:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:25:29.018 22:41:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:29.018 22:41:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:29.018 22:41:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:29.018 22:41:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:29.018 22:41:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTMzNGRiN2E3ZjFlNDlmNjVkZGY3OWI5NjA5MjZhNGI4ZmIwYmI1OGY4MzE1NTA0WfDSCg==: 00:25:29.018 22:41:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Yjk2NjJiNzE3NTdmNTlhZmIyZjI2YWJkN2M0ZDZmYjdjYWE3MjFiZjA5Y2U3M2M3nyY9qw==: 00:25:29.018 22:41:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:29.018 22:41:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:29.018 22:41:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:NTMzNGRiN2E3ZjFlNDlmNjVkZGY3OWI5NjA5MjZhNGI4ZmIwYmI1OGY4MzE1NTA0WfDSCg==: 00:25:29.018 22:41:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Yjk2NjJiNzE3NTdmNTlhZmIyZjI2YWJkN2M0ZDZmYjdjYWE3MjFiZjA5Y2U3M2M3nyY9qw==: ]] 00:25:29.018 22:41:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Yjk2NjJiNzE3NTdmNTlhZmIyZjI2YWJkN2M0ZDZmYjdjYWE3MjFiZjA5Y2U3M2M3nyY9qw==: 00:25:29.018 22:41:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:25:29.018 22:41:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:29.018 22:41:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:29.018 22:41:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:29.018 22:41:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:29.018 22:41:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:29.018 22:41:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:25:29.019 22:41:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:29.019 22:41:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:29.019 22:41:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:29.019 22:41:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:29.019 22:41:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:29.019 22:41:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:29.019 22:41:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:29.019 22:41:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:29.019 22:41:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:29.019 22:41:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:29.019 22:41:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:29.019 22:41:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:29.019 22:41:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:29.019 22:41:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:29.019 22:41:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:29.019 22:41:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:29.019 22:41:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:29.587 nvme0n1 00:25:29.587 22:41:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:29.587 22:41:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:29.587 22:41:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:29.587 22:41:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:29.587 22:41:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:29.587 22:41:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:29.587 22:41:53 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:29.587 22:41:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:29.587 22:41:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:29.587 22:41:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:29.587 22:41:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:29.587 22:41:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:29.587 22:41:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:25:29.587 22:41:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:29.587 22:41:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:29.587 22:41:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:29.587 22:41:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:29.587 22:41:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjY5Yzc2NjNmZTFhNDY1Y2IwNWYyMGI4OGJkOGU5ZmFYNj0I: 00:25:29.587 22:41:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:N2VhZGNhNDNmMTlmMDczZDkwYTQ2MWZjYjAxNjBkYzfY9AqL: 00:25:29.587 22:41:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:29.587 22:41:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:29.587 22:41:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjY5Yzc2NjNmZTFhNDY1Y2IwNWYyMGI4OGJkOGU5ZmFYNj0I: 00:25:29.587 22:41:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:N2VhZGNhNDNmMTlmMDczZDkwYTQ2MWZjYjAxNjBkYzfY9AqL: ]] 00:25:29.587 22:41:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:N2VhZGNhNDNmMTlmMDczZDkwYTQ2MWZjYjAxNjBkYzfY9AqL: 00:25:29.587 22:41:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:25:29.587 22:41:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:29.587 22:41:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:29.587 22:41:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:29.587 22:41:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:29.587 22:41:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:29.587 22:41:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:25:29.587 22:41:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:29.587 22:41:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:29.587 22:41:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:29.587 22:41:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:29.587 22:41:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:29.587 22:41:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:29.587 22:41:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:29.587 22:41:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:29.587 22:41:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:29.587 22:41:53 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:29.587 22:41:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:29.587 22:41:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:29.587 22:41:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:29.587 22:41:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:29.587 22:41:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:29.587 22:41:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:29.587 22:41:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.154 nvme0n1 00:25:30.154 22:41:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:30.154 22:41:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:30.154 22:41:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:30.154 22:41:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:30.154 22:41:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.154 22:41:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:30.154 22:41:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:30.154 22:41:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:30.154 22:41:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:30.154 22:41:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.154 22:41:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:30.154 22:41:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:30.154 22:41:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:25:30.154 22:41:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:30.154 22:41:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:30.154 22:41:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:30.154 22:41:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:30.154 22:41:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OTI0NzVkOTJkMzgyZDFkNjg3MGI2NTUxYTJjNTdlM2ZiNzk5MTYxY2E4OGE3ODNlzZxB2w==: 00:25:30.154 22:41:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MTg3N2QzNWNhNTFiMWM4MjkwMTdjZmViYTQyNTA4YzOhZDvm: 00:25:30.154 22:41:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:30.154 22:41:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:30.154 22:41:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OTI0NzVkOTJkMzgyZDFkNjg3MGI2NTUxYTJjNTdlM2ZiNzk5MTYxY2E4OGE3ODNlzZxB2w==: 00:25:30.154 22:41:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MTg3N2QzNWNhNTFiMWM4MjkwMTdjZmViYTQyNTA4YzOhZDvm: ]] 00:25:30.154 22:41:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MTg3N2QzNWNhNTFiMWM4MjkwMTdjZmViYTQyNTA4YzOhZDvm: 00:25:30.154 22:41:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:25:30.154 22:41:54 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:30.154 22:41:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:30.154 22:41:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:30.154 22:41:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:30.154 22:41:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:30.154 22:41:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:25:30.154 22:41:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:30.154 22:41:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.154 22:41:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:30.154 22:41:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:30.154 22:41:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:30.154 22:41:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:30.154 22:41:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:30.154 22:41:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:30.154 22:41:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:30.154 22:41:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:30.154 22:41:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:30.154 22:41:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:30.154 22:41:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:30.154 22:41:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:30.155 22:41:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:30.155 22:41:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:30.155 22:41:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.722 nvme0n1 00:25:30.722 22:41:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:30.722 22:41:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:30.722 22:41:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:30.722 22:41:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:30.722 22:41:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.982 22:41:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:30.982 22:41:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:30.982 22:41:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:30.982 22:41:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:30.982 22:41:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.982 22:41:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:30.982 22:41:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:25:30.982 22:41:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:25:30.982 22:41:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:30.982 22:41:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:30.982 22:41:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:30.982 22:41:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:30.982 22:41:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YmE5Njg0NGQwMjdhZjU0YzFjZWMwMjBkMGNhNGNiMGE4NWUzMGFhNTc1NmQxZTJiODNhNGZlZjBjMmRlOTY2MnBN5AU=: 00:25:30.982 22:41:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:30.982 22:41:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:30.982 22:41:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:30.982 22:41:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YmE5Njg0NGQwMjdhZjU0YzFjZWMwMjBkMGNhNGNiMGE4NWUzMGFhNTc1NmQxZTJiODNhNGZlZjBjMmRlOTY2MnBN5AU=: 00:25:30.982 22:41:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:30.982 22:41:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:25:30.982 22:41:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:30.982 22:41:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:30.982 22:41:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:30.982 22:41:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:30.982 22:41:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:30.982 22:41:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:25:30.982 22:41:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:30.982 22:41:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.982 22:41:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:30.982 22:41:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:30.982 22:41:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:30.982 22:41:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:30.982 22:41:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:30.982 22:41:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:30.982 22:41:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:30.982 22:41:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:30.982 22:41:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:30.982 22:41:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:30.982 22:41:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:30.982 22:41:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:30.982 22:41:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:30.982 22:41:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:25:30.982 22:41:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.550 nvme0n1 00:25:31.550 22:41:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:31.550 22:41:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:31.550 22:41:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:31.550 22:41:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:31.550 22:41:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.550 22:41:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:31.550 22:41:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:31.550 22:41:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:31.550 22:41:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:31.550 22:41:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.550 22:41:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:31.550 22:41:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:25:31.550 22:41:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:31.550 22:41:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:31.550 22:41:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:31.550 22:41:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:31.550 22:41:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTMzNGRiN2E3ZjFlNDlmNjVkZGY3OWI5NjA5MjZhNGI4ZmIwYmI1OGY4MzE1NTA0WfDSCg==: 00:25:31.550 22:41:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Yjk2NjJiNzE3NTdmNTlhZmIyZjI2YWJkN2M0ZDZmYjdjYWE3MjFiZjA5Y2U3M2M3nyY9qw==: 00:25:31.550 22:41:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:31.550 22:41:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:31.550 22:41:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTMzNGRiN2E3ZjFlNDlmNjVkZGY3OWI5NjA5MjZhNGI4ZmIwYmI1OGY4MzE1NTA0WfDSCg==: 00:25:31.550 22:41:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Yjk2NjJiNzE3NTdmNTlhZmIyZjI2YWJkN2M0ZDZmYjdjYWE3MjFiZjA5Y2U3M2M3nyY9qw==: ]] 00:25:31.550 22:41:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Yjk2NjJiNzE3NTdmNTlhZmIyZjI2YWJkN2M0ZDZmYjdjYWE3MjFiZjA5Y2U3M2M3nyY9qw==: 00:25:31.550 22:41:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:31.550 22:41:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:31.551 22:41:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.551 22:41:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:31.551 22:41:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:25:31.551 22:41:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:31.551 22:41:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:31.551 22:41:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:31.551 22:41:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:31.551 
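The passing iterations traced above all have one shape: nvmet_auth_set_key pushes the digest, DH group, and DHHC-1 secret into the kernel target's host entry, then the same choices are mirrored on the SPDK initiator before connecting. A minimal sketch of that shape, assuming the nvmet host entry and the SPDK key objects (key3, ckey3) were registered earlier in the run; the configfs attribute names follow the Linux nvmet auth interface and are inferred, since the helper body is not echoed verbatim here:

HOST=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
echo 'hmac(sha512)'  > "$HOST/dhchap_hash"      # target-side digest (assumed attribute name)
echo ffdhe8192       > "$HOST/dhchap_dhgroup"   # target-side DH group (assumed attribute name)
echo 'DHHC-1:03:...' > "$HOST/dhchap_key"       # host secret; real value elided
# Mirror digest/dhgroup on the initiator, then connect with the matching key id:
rpc.py bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key3 --dhchap-ctrlr-key ckey3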
22:41:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:31.551 22:41:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:31.551 22:41:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:31.551 22:41:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:31.551 22:41:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:31.551 22:41:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:31.551 22:41:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:25:31.551 22:41:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:25:31.551 22:41:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:25:31.551 22:41:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:25:31.551 22:41:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:31.551 22:41:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:25:31.551 22:41:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:31.551 22:41:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:25:31.551 22:41:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:31.551 22:41:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.551 request: 00:25:31.551 { 00:25:31.551 "name": "nvme0", 00:25:31.551 "trtype": "tcp", 00:25:31.551 "traddr": "10.0.0.1", 00:25:31.551 "adrfam": "ipv4", 00:25:31.551 "trsvcid": "4420", 00:25:31.551 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:25:31.551 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:25:31.551 "prchk_reftag": false, 00:25:31.551 "prchk_guard": false, 00:25:31.551 "hdgst": false, 00:25:31.551 "ddgst": false, 00:25:31.551 "method": "bdev_nvme_attach_controller", 00:25:31.551 "req_id": 1 00:25:31.551 } 00:25:31.551 Got JSON-RPC error response 00:25:31.551 response: 00:25:31.551 { 00:25:31.551 "code": -5, 00:25:31.551 "message": "Input/output error" 00:25:31.551 } 00:25:31.551 22:41:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:25:31.551 22:41:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:25:31.551 22:41:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:25:31.551 22:41:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:25:31.551 22:41:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:25:31.551 22:41:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:25:31.551 22:41:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:25:31.551 22:41:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:31.551 22:41:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.551 22:41:55 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:31.551 22:41:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:25:31.551 22:41:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:25:31.551 22:41:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:31.551 22:41:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:31.551 22:41:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:31.551 22:41:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:31.551 22:41:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:31.551 22:41:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:31.551 22:41:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:31.551 22:41:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:31.551 22:41:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:31.551 22:41:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:31.551 22:41:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:25:31.551 22:41:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:25:31.551 22:41:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:25:31.551 22:41:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:25:31.551 22:41:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:31.551 22:41:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:25:31.551 22:41:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:31.551 22:41:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:25:31.551 22:41:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:31.551 22:41:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.810 request: 00:25:31.810 { 00:25:31.810 "name": "nvme0", 00:25:31.810 "trtype": "tcp", 00:25:31.810 "traddr": "10.0.0.1", 00:25:31.810 "adrfam": "ipv4", 00:25:31.810 "trsvcid": "4420", 00:25:31.810 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:25:31.810 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:25:31.810 "prchk_reftag": false, 00:25:31.810 "prchk_guard": false, 00:25:31.810 "hdgst": false, 00:25:31.810 "ddgst": false, 00:25:31.810 "dhchap_key": "key2", 00:25:31.810 "method": "bdev_nvme_attach_controller", 00:25:31.810 "req_id": 1 00:25:31.810 } 00:25:31.810 Got JSON-RPC error response 00:25:31.810 response: 00:25:31.810 { 00:25:31.810 "code": -5, 00:25:31.810 "message": "Input/output error" 00:25:31.810 } 00:25:31.810 22:41:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:25:31.810 22:41:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:25:31.810 22:41:55 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:25:31.810 22:41:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:25:31.810 22:41:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:25:31.810 22:41:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:25:31.810 22:41:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:25:31.810 22:41:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:31.810 22:41:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.810 22:41:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:31.810 22:41:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:25:31.810 22:41:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:25:31.810 22:41:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:31.810 22:41:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:31.810 22:41:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:31.810 22:41:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:31.810 22:41:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:31.810 22:41:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:31.810 22:41:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:31.810 22:41:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:31.810 22:41:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:31.810 22:41:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:31.810 22:41:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:25:31.810 22:41:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:25:31.810 22:41:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:25:31.810 22:41:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:25:31.810 22:41:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:31.810 22:41:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:25:31.810 22:41:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:31.810 22:41:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:25:31.810 22:41:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:31.810 22:41:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.810 request: 00:25:31.810 { 00:25:31.810 "name": "nvme0", 00:25:31.810 "trtype": "tcp", 00:25:31.810 "traddr": "10.0.0.1", 00:25:31.810 "adrfam": "ipv4", 
00:25:31.810 "trsvcid": "4420", 00:25:31.810 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:25:31.810 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:25:31.810 "prchk_reftag": false, 00:25:31.810 "prchk_guard": false, 00:25:31.810 "hdgst": false, 00:25:31.810 "ddgst": false, 00:25:31.810 "dhchap_key": "key1", 00:25:31.810 "dhchap_ctrlr_key": "ckey2", 00:25:31.810 "method": "bdev_nvme_attach_controller", 00:25:31.810 "req_id": 1 00:25:31.810 } 00:25:31.810 Got JSON-RPC error response 00:25:31.810 response: 00:25:31.810 { 00:25:31.810 "code": -5, 00:25:31.810 "message": "Input/output error" 00:25:31.810 } 00:25:31.810 22:41:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:25:31.810 22:41:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:25:31.810 22:41:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:25:31.810 22:41:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:25:31.810 22:41:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:25:31.810 22:41:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@127 -- # trap - SIGINT SIGTERM EXIT 00:25:31.810 22:41:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@128 -- # cleanup 00:25:31.810 22:41:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:25:31.810 22:41:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:31.810 22:41:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@117 -- # sync 00:25:31.810 22:41:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:31.810 22:41:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@120 -- # set +e 00:25:31.810 22:41:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:31.810 22:41:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:31.810 rmmod nvme_tcp 00:25:31.810 rmmod nvme_fabrics 00:25:31.810 22:41:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:31.810 22:41:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@124 -- # set -e 00:25:31.810 22:41:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@125 -- # return 0 00:25:31.810 22:41:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@489 -- # '[' -n 131298 ']' 00:25:31.810 22:41:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@490 -- # killprocess 131298 00:25:31.810 22:41:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@948 -- # '[' -z 131298 ']' 00:25:31.810 22:41:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@952 -- # kill -0 131298 00:25:31.810 22:41:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@953 -- # uname 00:25:31.810 22:41:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:31.810 22:41:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 131298 00:25:32.070 22:41:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:25:32.070 22:41:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:25:32.070 22:41:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@966 -- # echo 'killing process with pid 131298' 00:25:32.070 killing process with pid 131298 00:25:32.070 22:41:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@967 -- # kill 131298 00:25:32.070 22:41:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@972 -- # wait 131298 00:25:32.070 22:41:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 
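Each rejected connect above is wrapped in the harness's NOT helper, which inverts the exit status (visible in the es=0/es=1 bookkeeping): the test passes precisely because rpc.py exits non-zero with the -5 Input/output error shown in the JSON responses, for a connect with no key, with a bare key2, and with a key1/ckey2 mismatch. A stand-in for that pattern, where not_cmd is a hypothetical local substitute for the autotest NOT helper:

not_cmd() { if "$@"; then return 1; else return 0; fi; }   # succeed only if the command fails
not_cmd rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
    -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 \
    -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 &&
    echo 'mismatched keys rejected, as expected'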
00:25:32.070 22:41:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:32.070 22:41:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:32.070 22:41:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:32.070 22:41:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:32.070 22:41:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:32.070 22:41:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:32.070 22:41:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:34.609 22:41:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:34.609 22:41:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:25:34.609 22:41:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:25:34.609 22:41:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:25:34.609 22:41:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:25:34.609 22:41:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@686 -- # echo 0 00:25:34.609 22:41:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:25:34.609 22:41:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:25:34.609 22:41:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:25:34.609 22:41:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:25:34.609 22:41:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:25:34.609 22:41:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:25:34.609 22:41:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:25:36.517 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:25:36.517 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:25:36.517 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:25:36.517 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:25:36.517 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:25:36.517 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:25:36.517 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:25:36.517 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:25:36.517 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:25:36.517 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:25:36.777 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:25:36.777 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:25:36.777 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:25:36.777 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:25:36.777 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:25:36.777 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:25:37.713 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:25:37.713 22:42:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.Vt6 /tmp/spdk.key-null.AYi /tmp/spdk.key-sha256.FkP /tmp/spdk.key-sha384.izZ /tmp/spdk.key-sha512.dZ9 
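clean_kernel_target, traced above, unwinds the kernel NVMe-oF target in the reverse of its build order. Spelled out against the nvmet configfs layout; the destination of the bare 'echo 0' at common.sh@686 is read here as the namespace-disable step, which is an inference, while the rm/rmdir/modprobe sequence is taken straight from the trace:

NQN=nqn.2024-02.io.spdk:cnode0
CFG=/sys/kernel/config/nvmet
echo 0 > "$CFG/subsystems/$NQN/namespaces/1/enable"   # assumed: take the namespace offline first
rm -f "$CFG/ports/1/subsystems/$NQN"                  # unlink subsystem from the port
rmdir "$CFG/subsystems/$NQN/namespaces/1"
rmdir "$CFG/ports/1"
rmdir "$CFG/subsystems/$NQN"
modprobe -r nvmet_tcp nvmet                           # unload once configfs is empty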
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:25:37.713 22:42:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:25:40.241 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:25:40.241 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:25:40.241 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:25:40.241 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:25:40.241 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:25:40.241 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:25:40.241 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:25:40.241 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:25:40.241 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:25:40.241 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:25:40.241 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:25:40.241 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:25:40.241 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:25:40.241 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:25:40.241 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:25:40.241 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:25:40.241 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:25:40.241 00:25:40.241 real 0m49.407s 00:25:40.241 user 0m44.600s 00:25:40.241 sys 0m11.522s 00:25:40.241 22:42:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:40.241 22:42:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:40.241 ************************************ 00:25:40.241 END TEST nvmf_auth_host 00:25:40.241 ************************************ 00:25:40.241 22:42:04 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:25:40.241 22:42:04 nvmf_tcp -- nvmf/nvmf.sh@107 -- # [[ tcp == \t\c\p ]] 00:25:40.241 22:42:04 nvmf_tcp -- nvmf/nvmf.sh@108 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:25:40.241 22:42:04 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:25:40.241 22:42:04 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:40.241 22:42:04 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:40.241 ************************************ 00:25:40.241 START TEST nvmf_digest 00:25:40.241 ************************************ 00:25:40.241 22:42:04 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:25:40.241 * Looking for test storage... 
00:25:40.241 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:40.241 22:42:04 nvmf_tcp.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:40.241 22:42:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:25:40.241 22:42:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:40.241 22:42:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:40.241 22:42:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:40.241 22:42:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:40.241 22:42:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:40.241 22:42:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:40.241 22:42:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:40.241 22:42:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:40.241 22:42:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:40.241 22:42:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:40.500 22:42:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:25:40.500 22:42:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:25:40.500 22:42:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:40.500 22:42:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:40.500 22:42:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:40.500 22:42:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:40.500 22:42:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:40.500 22:42:04 nvmf_tcp.nvmf_digest -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:40.500 22:42:04 nvmf_tcp.nvmf_digest -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:40.500 22:42:04 nvmf_tcp.nvmf_digest -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:40.500 22:42:04 nvmf_tcp.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:40.500 22:42:04 nvmf_tcp.nvmf_digest -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:40.500 22:42:04 nvmf_tcp.nvmf_digest -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:40.500 22:42:04 nvmf_tcp.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:25:40.500 22:42:04 nvmf_tcp.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:40.500 22:42:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@47 -- # : 0 00:25:40.500 22:42:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:40.500 22:42:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:40.500 22:42:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:40.500 22:42:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:40.500 22:42:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:40.500 22:42:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:40.500 22:42:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:40.500 22:42:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:40.500 22:42:04 nvmf_tcp.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:25:40.500 22:42:04 nvmf_tcp.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:25:40.500 22:42:04 nvmf_tcp.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:25:40.500 22:42:04 nvmf_tcp.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:25:40.500 22:42:04 nvmf_tcp.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:25:40.500 22:42:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:40.500 22:42:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:40.500 22:42:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:40.500 22:42:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:40.500 22:42:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:40.500 22:42:04 
nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:40.500 22:42:04 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:40.500 22:42:04 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:40.500 22:42:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:40.500 22:42:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:40.500 22:42:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@285 -- # xtrace_disable 00:25:40.500 22:42:04 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:25:45.771 22:42:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:45.771 22:42:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@291 -- # pci_devs=() 00:25:45.771 22:42:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:45.771 22:42:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:45.771 22:42:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:45.771 22:42:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:45.771 22:42:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:45.771 22:42:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@295 -- # net_devs=() 00:25:45.771 22:42:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:45.771 22:42:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@296 -- # e810=() 00:25:45.771 22:42:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@296 -- # local -ga e810 00:25:45.771 22:42:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@297 -- # x722=() 00:25:45.771 22:42:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@297 -- # local -ga x722 00:25:45.771 22:42:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@298 -- # mlx=() 00:25:45.771 22:42:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@298 -- # local -ga mlx 00:25:45.771 22:42:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:45.771 22:42:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:45.771 22:42:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:45.771 22:42:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:45.771 22:42:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:45.771 22:42:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:45.771 22:42:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:45.771 22:42:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:45.771 22:42:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:45.771 22:42:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:45.771 22:42:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:45.771 22:42:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:45.771 22:42:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:45.771 22:42:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:45.771 22:42:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@329 -- # [[ 
e810 == e810 ]] 00:25:45.771 22:42:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:45.771 22:42:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:45.771 22:42:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:45.771 22:42:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:25:45.771 Found 0000:86:00.0 (0x8086 - 0x159b) 00:25:45.771 22:42:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:45.771 22:42:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:45.771 22:42:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:45.771 22:42:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:45.771 22:42:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:45.771 22:42:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:45.771 22:42:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:25:45.771 Found 0000:86:00.1 (0x8086 - 0x159b) 00:25:45.771 22:42:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:45.771 22:42:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:45.771 22:42:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:45.771 22:42:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:45.771 22:42:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:45.771 22:42:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:45.771 22:42:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:45.771 22:42:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:45.771 22:42:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:45.771 22:42:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:45.771 22:42:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:45.771 22:42:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:45.771 22:42:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:45.771 22:42:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:45.771 22:42:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:45.771 22:42:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:25:45.771 Found net devices under 0000:86:00.0: cvl_0_0 00:25:45.771 22:42:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:45.771 22:42:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:45.771 22:42:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:45.771 22:42:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:45.771 22:42:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:45.772 22:42:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:45.772 22:42:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:45.772 22:42:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:45.772 22:42:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:25:45.772 Found net devices under 0000:86:00.1: cvl_0_1 00:25:45.772 22:42:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:45.772 22:42:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:45.772 22:42:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # is_hw=yes 00:25:45.772 22:42:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:45.772 22:42:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:45.772 22:42:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:45.772 22:42:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:45.772 22:42:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:45.772 22:42:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:45.772 22:42:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:45.772 22:42:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:45.772 22:42:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:45.772 22:42:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:45.772 22:42:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:45.772 22:42:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:45.772 22:42:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:45.772 22:42:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:45.772 22:42:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:45.772 22:42:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:45.772 22:42:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:45.772 22:42:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:45.772 22:42:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:45.772 22:42:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:45.772 22:42:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:45.772 22:42:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:45.772 22:42:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:45.772 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:45.772 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.293 ms 00:25:45.772 00:25:45.772 --- 10.0.0.2 ping statistics --- 00:25:45.772 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:45.772 rtt min/avg/max/mdev = 0.293/0.293/0.293/0.000 ms 00:25:45.772 22:42:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:45.772 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:45.772 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.263 ms 00:25:45.772 00:25:45.772 --- 10.0.0.1 ping statistics --- 00:25:45.772 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:45.772 rtt min/avg/max/mdev = 0.263/0.263/0.263/0.000 ms 00:25:45.772 22:42:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:45.772 22:42:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@422 -- # return 0 00:25:45.772 22:42:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:45.772 22:42:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:45.772 22:42:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:45.772 22:42:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:45.772 22:42:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:45.772 22:42:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:45.772 22:42:09 nvmf_tcp.nvmf_digest -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:45.772 22:42:09 nvmf_tcp.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:25:45.772 22:42:09 nvmf_tcp.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:25:45.772 22:42:09 nvmf_tcp.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:25:45.772 22:42:09 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:25:45.772 22:42:09 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:45.772 22:42:09 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:25:45.772 ************************************ 00:25:45.772 START TEST nvmf_digest_clean 00:25:45.772 ************************************ 00:25:45.772 22:42:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1123 -- # run_digest 00:25:45.772 22:42:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:25:45.772 22:42:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:25:45.772 22:42:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:25:45.772 22:42:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:25:45.772 22:42:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:25:45.772 22:42:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:45.772 22:42:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:45.772 22:42:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:25:45.772 22:42:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@481 -- # nvmfpid=144846 00:25:45.772 22:42:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@482 -- # waitforlisten 144846 00:25:45.772 22:42:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:25:45.772 22:42:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 144846 ']' 00:25:45.772 22:42:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:45.772 
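nvmf_tcp_init, whose trace precedes the pings, wires the two E810 ports back-to-back through a network namespace so initiator and target traffic crosses real NICs on a single host, then verifies reachability in both directions. Condensed, with every command taken from the trace above:

ip netns add cvl_0_0_ns_spdk                                        # target-side namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # move port 0 into it
ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator IP, default ns
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target IP
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # admit NVMe/TCP
ping -c 1 10.0.0.2                                                  # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target -> initiator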
22:42:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:45.772 22:42:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:45.772 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:45.772 22:42:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:45.772 22:42:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:25:45.772 [2024-07-15 22:42:09.540370] Starting SPDK v24.09-pre git sha1 f8598a71f / DPDK 24.03.0 initialization... 00:25:45.772 [2024-07-15 22:42:09.540411] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:45.772 EAL: No free 2048 kB hugepages reported on node 1 00:25:45.772 [2024-07-15 22:42:09.598835] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:45.772 [2024-07-15 22:42:09.677988] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:45.772 [2024-07-15 22:42:09.678022] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:45.772 [2024-07-15 22:42:09.678029] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:45.772 [2024-07-15 22:42:09.678035] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:45.772 [2024-07-15 22:42:09.678041] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:25:45.772 [2024-07-15 22:42:09.678057] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:46.709 22:42:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:46.709 22:42:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:25:46.709 22:42:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:46.709 22:42:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:46.709 22:42:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:25:46.709 22:42:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:46.709 22:42:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:25:46.710 22:42:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:25:46.710 22:42:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:25:46.710 22:42:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:46.710 22:42:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:25:46.710 null0 00:25:46.710 [2024-07-15 22:42:10.464694] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:46.710 [2024-07-15 22:42:10.488857] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:46.710 22:42:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:46.710 22:42:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:25:46.710 22:42:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:25:46.710 22:42:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:25:46.710 22:42:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:25:46.710 22:42:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:25:46.710 22:42:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:25:46.710 22:42:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:25:46.710 22:42:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:25:46.710 22:42:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=145186 00:25:46.710 22:42:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 145186 /var/tmp/bperf.sock 00:25:46.710 22:42:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 145186 ']' 00:25:46.710 22:42:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:46.710 22:42:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:46.710 22:42:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 
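The null0 bdev, the '*** TCP Transport Init ***' notice, and the 10.0.0.2:4420 listener above come from the single rpc_cmd batch at digest.sh@43, whose heredoc is not echoed by xtrace. Expanded into individual calls it is roughly the following; the bdev geometry and the -a (allow any host) flag are assumptions, while the NQN, serial, transport options, and listen address all appear elsewhere in this log:

rpc.py framework_start_init
rpc.py bdev_null_create null0 100 4096                 # assumed size (MB) and block size
rpc.py nvmf_create_transport -t tcp -o
rpc.py nvmf_create_subsystem -s SPDKISFASTANDAWESOME -a nqn.2016-06.io.spdk:cnode1
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 null0
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -f ipv4 -a 10.0.0.2 -s 4420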
00:25:46.710 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:46.710 22:42:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:46.710 22:42:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:25:46.710 [2024-07-15 22:42:10.528711] Starting SPDK v24.09-pre git sha1 f8598a71f / DPDK 24.03.0 initialization... 00:25:46.710 [2024-07-15 22:42:10.528750] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid145186 ] 00:25:46.710 EAL: No free 2048 kB hugepages reported on node 1 00:25:46.710 [2024-07-15 22:42:10.581887] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:46.710 [2024-07-15 22:42:10.659990] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:47.717 22:42:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:47.717 22:42:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:25:47.717 22:42:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:25:47.717 22:42:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:25:47.717 22:42:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:25:47.717 22:42:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:47.717 22:42:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:47.976 nvme0n1 00:25:47.976 22:42:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:25:47.976 22:42:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:47.976 Running I/O for 2 seconds... 
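run_bperf's client-side choreography is visible in the trace: bdevperf is parked with --wait-for-rpc on a private socket, configured over RPC, and only then driven. Condensed below, with paths shortened; the bare 'false' at digest.sh@86 is the scan_dsa flag short-circuiting, so no DSA accel module is requested in this pass:

bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -q 128 -t 2 -z --wait-for-rpc &
rpc.py -s /var/tmp/bperf.sock framework_start_init     # finish deferred startup
rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
    -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
bdevperf.py -s /var/tmp/bperf.sock perform_tests       # kick off the 2-second run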
00:25:50.510 00:25:50.510 Latency(us) 00:25:50.510 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:50.510 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:25:50.510 nvme0n1 : 2.04 26856.86 104.91 0.00 0.00 4700.66 2251.02 45134.36 00:25:50.510 =================================================================================================================== 00:25:50.510 Total : 26856.86 104.91 0.00 0.00 4700.66 2251.02 45134.36 00:25:50.510 0 00:25:50.510 22:42:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:25:50.510 22:42:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:25:50.510 22:42:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:25:50.510 22:42:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:25:50.510 | select(.opcode=="crc32c") 00:25:50.510 | "\(.module_name) \(.executed)"' 00:25:50.510 22:42:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:25:50.510 22:42:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:25:50.510 22:42:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:25:50.510 22:42:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:25:50.510 22:42:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:25:50.510 22:42:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 145186 00:25:50.510 22:42:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 145186 ']' 00:25:50.510 22:42:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 145186 00:25:50.510 22:42:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:25:50.510 22:42:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:50.510 22:42:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 145186 00:25:50.510 22:42:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:25:50.510 22:42:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:25:50.510 22:42:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 145186' 00:25:50.510 killing process with pid 145186 00:25:50.510 22:42:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 145186 00:25:50.510 Received shutdown signal, test time was about 2.000000 seconds 00:25:50.510 00:25:50.510 Latency(us) 00:25:50.510 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:50.510 =================================================================================================================== 00:25:50.510 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:50.510 22:42:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 145186 00:25:50.510 22:42:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:25:50.510 22:42:14 
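The pass/fail decision after each run reduces to two assertions over accel_get_stats output: some crc32c operations actually executed (--ddgst forces a data-digest computation on every I/O), and they ran on the expected module, software here, dsa when the test is launched with DSA enabled. As a standalone check using the jq filter from the trace:

read -r acc_module acc_executed < <(
    rpc.py -s /var/tmp/bperf.sock accel_get_stats |
    jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"')
(( acc_executed > 0 ))              # digests were really computed
[[ $acc_module == software ]]       # and by the expected accel module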
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:25:50.510 22:42:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:25:50.510 22:42:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:25:50.510 22:42:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:25:50.510 22:42:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:25:50.510 22:42:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:25:50.510 22:42:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=145793 00:25:50.510 22:42:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 145793 /var/tmp/bperf.sock 00:25:50.510 22:42:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:25:50.510 22:42:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 145793 ']' 00:25:50.510 22:42:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:50.510 22:42:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:50.510 22:42:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:50.510 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:50.510 22:42:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:50.510 22:42:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:25:50.510 [2024-07-15 22:42:14.438526] Starting SPDK v24.09-pre git sha1 f8598a71f / DPDK 24.03.0 initialization... 00:25:50.510 [2024-07-15 22:42:14.438574] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid145793 ] 00:25:50.510 I/O size of 131072 is greater than zero copy threshold (65536). 00:25:50.510 Zero copy mechanism will not be used. 
00:25:50.510 EAL: No free 2048 kB hugepages reported on node 1 00:25:50.769 [2024-07-15 22:42:14.492960] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:50.769 [2024-07-15 22:42:14.571989] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:51.337 22:42:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:51.337 22:42:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:25:51.337 22:42:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:25:51.337 22:42:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:25:51.337 22:42:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:25:51.596 22:42:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:51.596 22:42:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:51.856 nvme0n1 00:25:52.115 22:42:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:25:52.115 22:42:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:52.115 I/O size of 131072 is greater than zero copy threshold (65536). 00:25:52.115 Zero copy mechanism will not be used. 00:25:52.115 Running I/O for 2 seconds... 
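The RPC sequence the harness then drives over /var/tmp/bperf.sock is short enough to restate; --ddgst enables the NVMe/TCP data digest, which is what produces the crc32c accel operations this test counts. A sketch, reusing the $SPDK root assumed above:

$SPDK/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
$SPDK/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
    -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
$SPDK/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests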
00:25:54.022
00:25:54.022 Latency(us)
00:25:54.022 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:25:54.022 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:25:54.022 nvme0n1 : 2.00 4019.79 502.47 0.00 0.00 3977.77 983.04 10599.74
00:25:54.022 ===================================================================================================================
00:25:54.022 Total : 4019.79 502.47 0.00 0.00 3977.77 983.04 10599.74
00:25:54.022 0
00:25:54.022 22:42:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed
00:25:54.022 22:42:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats
00:25:54.022 22:42:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats
00:25:54.022 22:42:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[]
00:25:54.022 | select(.opcode=="crc32c")
00:25:54.022 | "\(.module_name) \(.executed)"'
00:25:54.022 22:42:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats
00:25:54.282 22:42:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false
00:25:54.282 22:42:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software
00:25:54.282 22:42:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 ))
00:25:54.282 22:42:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:25:54.282 22:42:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 145793
00:25:54.282 22:42:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 145793 ']'
00:25:54.282 22:42:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 145793
00:25:54.282 22:42:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname
00:25:54.282 22:42:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:25:54.282 22:42:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 145793
00:25:54.282 22:42:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:25:54.282 22:42:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:25:54.282 22:42:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 145793'
killing process with pid 145793
22:42:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 145793
Received shutdown signal, test time was about 2.000000 seconds
00:25:54.282
00:25:54.282 Latency(us)
00:25:54.282 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:25:54.282 ===================================================================================================================
00:25:54.282 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:25:54.282 22:42:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 145793
00:25:54.542 22:42:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false
00:25:54.542 22:42:18
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:25:54.542 22:42:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:25:54.542 22:42:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:25:54.542 22:42:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:25:54.542 22:42:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:25:54.542 22:42:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:25:54.542 22:42:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=146364 00:25:54.542 22:42:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 146364 /var/tmp/bperf.sock 00:25:54.542 22:42:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:25:54.542 22:42:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 146364 ']' 00:25:54.542 22:42:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:54.542 22:42:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:54.542 22:42:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:54.542 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:54.542 22:42:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:54.542 22:42:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:25:54.542 [2024-07-15 22:42:18.392024] Starting SPDK v24.09-pre git sha1 f8598a71f / DPDK 24.03.0 initialization... 
00:25:54.542 [2024-07-15 22:42:18.392070] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid146364 ] 00:25:54.542 EAL: No free 2048 kB hugepages reported on node 1 00:25:54.542 [2024-07-15 22:42:18.445522] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:54.802 [2024-07-15 22:42:18.513988] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:55.371 22:42:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:55.371 22:42:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:25:55.371 22:42:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:25:55.371 22:42:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:25:55.371 22:42:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:25:55.629 22:42:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:55.629 22:42:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:55.887 nvme0n1 00:25:55.887 22:42:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:25:55.887 22:42:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:56.146 Running I/O for 2 seconds... 
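Every run in this suite finishes with the same verification, visible in the traces before and after this point: accel_get_stats is fetched over the bperf socket and the jq filter reduces it to the crc32c module name and execution count, which the *_clean variant requires to be 'software' and non-zero. A sketch of that check:

read -r acc_module acc_executed < <(
    $SPDK/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats \
    | jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"')
(( acc_executed > 0 )) && [[ $acc_module == software ]]   # digests ran, and in software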
00:25:58.053
00:25:58.053 Latency(us)
00:25:58.053 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:25:58.053 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:25:58.053 nvme0n1 : 2.00 26899.73 105.08 0.00 0.00 4749.86 2393.49 14303.94
00:25:58.053 ===================================================================================================================
00:25:58.053 Total : 26899.73 105.08 0.00 0.00 4749.86 2393.49 14303.94
00:25:58.053 0
00:25:58.053 22:42:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed
00:25:58.053 22:42:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats
00:25:58.053 22:42:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats
00:25:58.053 22:42:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[]
00:25:58.053 | select(.opcode=="crc32c")
00:25:58.053 | "\(.module_name) \(.executed)"'
00:25:58.053 22:42:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats
00:25:58.313 22:42:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false
00:25:58.313 22:42:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software
00:25:58.313 22:42:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 ))
00:25:58.313 22:42:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:25:58.313 22:42:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 146364
00:25:58.313 22:42:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 146364 ']'
00:25:58.313 22:42:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 146364
00:25:58.313 22:42:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname
00:25:58.313 22:42:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:25:58.313 22:42:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 146364
00:25:58.313 22:42:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:25:58.313 22:42:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:25:58.313 22:42:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 146364'
killing process with pid 146364
22:42:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 146364
Received shutdown signal, test time was about 2.000000 seconds
00:25:58.313
00:25:58.313 Latency(us)
00:25:58.313 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:25:58.313 ===================================================================================================================
00:25:58.313 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:25:58.313 22:42:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 146364
00:25:58.573 22:42:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false
00:25:58.573 22:42:22
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:25:58.573 22:42:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:25:58.573 22:42:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:25:58.573 22:42:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:25:58.573 22:42:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:25:58.573 22:42:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:25:58.573 22:42:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=147063 00:25:58.573 22:42:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 147063 /var/tmp/bperf.sock 00:25:58.573 22:42:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:25:58.573 22:42:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 147063 ']' 00:25:58.573 22:42:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:58.573 22:42:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:58.573 22:42:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:58.573 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:58.573 22:42:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:58.573 22:42:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:25:58.573 [2024-07-15 22:42:22.367232] Starting SPDK v24.09-pre git sha1 f8598a71f / DPDK 24.03.0 initialization... 00:25:58.573 [2024-07-15 22:42:22.367280] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid147063 ] 00:25:58.573 I/O size of 131072 is greater than zero copy threshold (65536). 00:25:58.573 Zero copy mechanism will not be used. 
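A quick sanity check for the latency tables in this log: the MiB/s column should equal IOPS × I/O size / 2^20. For the randwrite/4 KiB run above (26899.73 IOPS at IO size 4096):

awk 'BEGIN { printf "%.2f MiB/s\n", 26899.73 * 4096 / 1048576 }'   # prints 105.08 MiB/s, matching the table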
00:25:58.573 EAL: No free 2048 kB hugepages reported on node 1 00:25:58.573 [2024-07-15 22:42:22.420542] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:58.573 [2024-07-15 22:42:22.492416] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:59.509 22:42:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:59.509 22:42:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:25:59.509 22:42:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:25:59.509 22:42:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:25:59.509 22:42:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:25:59.509 22:42:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:59.509 22:42:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:59.767 nvme0n1 00:25:59.767 22:42:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:25:59.767 22:42:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:00.025 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:00.025 Zero copy mechanism will not be used. 00:26:00.025 Running I/O for 2 seconds... 
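The zero-copy notices repeated for both 128 KiB workloads are expected rather than a failure: bdevperf compares the configured I/O size against the 65536-byte threshold named in the message and falls back to copied buffers. In shell terms the condition is simply:

(( 131072 > 65536 )) && echo "zero copy disabled for this I/O size"   # what the notice reports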
00:26:01.924
00:26:01.924 Latency(us)
00:26:01.924 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:26:01.924 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072)
00:26:01.924 nvme0n1 : 2.00 5348.79 668.60 0.00 0.00 2986.60 2008.82 10200.82
00:26:01.925 ===================================================================================================================
00:26:01.925 Total : 5348.79 668.60 0.00 0.00 2986.60 2008.82 10200.82
00:26:01.925 0
00:26:01.925 22:42:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed
00:26:01.925 22:42:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats
00:26:01.925 22:42:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[]
00:26:01.925 | select(.opcode=="crc32c")
00:26:01.925 | "\(.module_name) \(.executed)"'
00:26:01.925 22:42:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats
00:26:01.925 22:42:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats
00:26:02.183 22:42:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false
00:26:02.183 22:42:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software
00:26:02.183 22:42:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 ))
00:26:02.183 22:42:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:26:02.183 22:42:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 147063
00:26:02.183 22:42:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 147063 ']'
00:26:02.183 22:42:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 147063
00:26:02.183 22:42:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname
00:26:02.183 22:42:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:26:02.183 22:42:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 147063
00:26:02.183 22:42:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:26:02.183 22:42:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:26:02.183 22:42:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 147063'
killing process with pid 147063
22:42:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 147063
Received shutdown signal, test time was about 2.000000 seconds
00:26:02.183
00:26:02.183 Latency(us)
00:26:02.183 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:26:02.183 ===================================================================================================================
00:26:02.183 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:26:02.183 22:42:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 147063
00:26:02.442 22:42:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 144846
00:26:02.442 22:42:26
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 144846 ']' 00:26:02.442 22:42:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 144846 00:26:02.442 22:42:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:26:02.442 22:42:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:02.442 22:42:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 144846 00:26:02.442 22:42:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:26:02.442 22:42:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:26:02.442 22:42:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 144846' 00:26:02.442 killing process with pid 144846 00:26:02.442 22:42:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 144846 00:26:02.442 22:42:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 144846 00:26:02.442 00:26:02.442 real 0m16.922s 00:26:02.442 user 0m32.403s 00:26:02.442 sys 0m4.339s 00:26:02.442 22:42:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:02.442 22:42:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:02.442 ************************************ 00:26:02.442 END TEST nvmf_digest_clean 00:26:02.442 ************************************ 00:26:02.700 22:42:26 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1142 -- # return 0 00:26:02.700 22:42:26 nvmf_tcp.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:26:02.700 22:42:26 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:26:02.700 22:42:26 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:02.700 22:42:26 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:26:02.700 ************************************ 00:26:02.700 START TEST nvmf_digest_error 00:26:02.700 ************************************ 00:26:02.700 22:42:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1123 -- # run_digest_error 00:26:02.700 22:42:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:26:02.700 22:42:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:02.700 22:42:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@722 -- # xtrace_disable 00:26:02.700 22:42:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:02.700 22:42:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@481 -- # nvmfpid=147784 00:26:02.700 22:42:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@482 -- # waitforlisten 147784 00:26:02.700 22:42:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:26:02.700 22:42:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 147784 ']' 00:26:02.700 22:42:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 
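For the error-injection half of the suite, nvmfappstart relaunches the target inside the CI network namespace, again parked at --wait-for-rpc. A sketch of the launch shown in the trace above (cvl_0_0_ns_spdk is this host's test namespace; $SPDK as assumed earlier):

ip netns exec cvl_0_0_ns_spdk $SPDK/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc &
nvmfpid=$!   # 147784 in this run
# -e 0xFFFF enables all tracepoint groups; see the spdk_trace notices that follow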
00:26:02.700 22:42:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:02.700 22:42:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:02.700 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:02.700 22:42:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:02.700 22:42:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:02.700 [2024-07-15 22:42:26.522335] Starting SPDK v24.09-pre git sha1 f8598a71f / DPDK 24.03.0 initialization... 00:26:02.700 [2024-07-15 22:42:26.522374] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:02.700 EAL: No free 2048 kB hugepages reported on node 1 00:26:02.700 [2024-07-15 22:42:26.578428] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:02.700 [2024-07-15 22:42:26.656700] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:02.700 [2024-07-15 22:42:26.656733] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:02.700 [2024-07-15 22:42:26.656740] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:02.700 [2024-07-15 22:42:26.656746] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:02.700 [2024-07-15 22:42:26.656751] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
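What separates nvmf_digest_error from the clean variant is set up in the RPC trace that follows: crc32c is routed to the accel error module, bdevperf is told to retry forever, and 256 digests are then corrupted mid-run, so each corruption surfaces below as a retried COMMAND TRANSIENT TRANSPORT ERROR rather than a test failure. A sketch of that setup (socket targets as read from the trace; the accel calls go to the target's default /var/tmp/spdk.sock):

$SPDK/scripts/rpc.py accel_assign_opc -o crc32c -m error                     # route crc32c to the error module
$SPDK/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options \
    --nvme-error-stat --bdev-retry-count -1                                  # host side: retry indefinitely
$SPDK/scripts/rpc.py accel_error_inject_error -o crc32c -t disable          # start in pass-through mode
$SPDK/scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 256   # then corrupt 256 digests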
00:26:02.700 [2024-07-15 22:42:26.656775] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:03.635 22:42:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:03.635 22:42:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:26:03.635 22:42:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:03.635 22:42:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@728 -- # xtrace_disable 00:26:03.635 22:42:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:03.635 22:42:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:03.635 22:42:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:26:03.635 22:42:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:03.635 22:42:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:03.635 [2024-07-15 22:42:27.366833] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:26:03.635 22:42:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:03.635 22:42:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:26:03.635 22:42:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:26:03.635 22:42:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:03.635 22:42:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:03.635 null0 00:26:03.635 [2024-07-15 22:42:27.457091] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:03.635 [2024-07-15 22:42:27.481265] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:03.635 22:42:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:03.635 22:42:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:26:03.635 22:42:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:26:03.635 22:42:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:26:03.635 22:42:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:26:03.635 22:42:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:26:03.635 22:42:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=148027 00:26:03.635 22:42:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 148027 /var/tmp/bperf.sock 00:26:03.635 22:42:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:26:03.635 22:42:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 148027 ']' 00:26:03.635 22:42:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:03.635 22:42:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local 
max_retries=100 00:26:03.635 22:42:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:03.635 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:03.635 22:42:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:03.635 22:42:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:03.635 [2024-07-15 22:42:27.530510] Starting SPDK v24.09-pre git sha1 f8598a71f / DPDK 24.03.0 initialization... 00:26:03.635 [2024-07-15 22:42:27.530549] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid148027 ] 00:26:03.635 EAL: No free 2048 kB hugepages reported on node 1 00:26:03.635 [2024-07-15 22:42:27.584168] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:03.894 [2024-07-15 22:42:27.657352] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:04.459 22:42:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:04.459 22:42:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:26:04.459 22:42:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:04.459 22:42:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:04.717 22:42:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:26:04.717 22:42:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:04.717 22:42:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:04.717 22:42:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:04.717 22:42:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:04.717 22:42:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:04.976 nvme0n1 00:26:04.976 22:42:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:26:04.976 22:42:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:04.976 22:42:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:04.976 22:42:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:04.976 22:42:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:26:04.976 22:42:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:04.976 Running I/O for 2 seconds... 00:26:05.235 [2024-07-15 22:42:28.955743] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d9df20) 00:26:05.236 [2024-07-15 22:42:28.955777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22495 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.236 [2024-07-15 22:42:28.955788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:05.236 [2024-07-15 22:42:28.966514] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d9df20) 00:26:05.236 [2024-07-15 22:42:28.966538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3210 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.236 [2024-07-15 22:42:28.966551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:05.236 [2024-07-15 22:42:28.976582] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d9df20) 00:26:05.236 [2024-07-15 22:42:28.976603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:8906 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.236 [2024-07-15 22:42:28.976612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:05.236 [2024-07-15 22:42:28.986282] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d9df20) 00:26:05.236 [2024-07-15 22:42:28.986303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:21736 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.236 [2024-07-15 22:42:28.986311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:05.236 [2024-07-15 22:42:28.994996] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d9df20) 00:26:05.236 [2024-07-15 22:42:28.995017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:23680 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.236 [2024-07-15 22:42:28.995025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:05.236 [2024-07-15 22:42:29.003774] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d9df20) 00:26:05.236 [2024-07-15 22:42:29.003794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:24242 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.236 [2024-07-15 22:42:29.003802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:05.236 [2024-07-15 22:42:29.013386] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d9df20) 00:26:05.236 [2024-07-15 22:42:29.013405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:24313 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.236 [2024-07-15 22:42:29.013413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:05.236 [2024-07-15 22:42:29.023811] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d9df20) 00:26:05.236 [2024-07-15 22:42:29.023831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23676 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.236 [2024-07-15 22:42:29.023839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:05.236 [2024-07-15 22:42:29.032025] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d9df20) 00:26:05.236 [2024-07-15 22:42:29.032044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:4275 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.236 [2024-07-15 22:42:29.032052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:05.236 [2024-07-15 22:42:29.042287] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d9df20) 00:26:05.236 [2024-07-15 22:42:29.042306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:13655 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.236 [2024-07-15 22:42:29.042314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:05.236 [2024-07-15 22:42:29.052094] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d9df20) 00:26:05.236 [2024-07-15 22:42:29.052118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:6753 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.236 [2024-07-15 22:42:29.052125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:05.236 [2024-07-15 22:42:29.059962] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d9df20) 00:26:05.236 [2024-07-15 22:42:29.059982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:19464 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.236 [2024-07-15 22:42:29.059990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:05.236 [2024-07-15 22:42:29.070205] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d9df20) 00:26:05.236 [2024-07-15 22:42:29.070231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:7070 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.236 [2024-07-15 22:42:29.070239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:05.236 [2024-07-15 22:42:29.079565] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d9df20) 00:26:05.236 [2024-07-15 22:42:29.079584] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:14 nsid:1 lba:4395 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.236 [2024-07-15 22:42:29.079592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:05.236 [2024-07-15 22:42:29.089649] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d9df20) 00:26:05.236 [2024-07-15 22:42:29.089669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:669 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.236 [2024-07-15 22:42:29.089677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:05.236 [2024-07-15 22:42:29.098316] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d9df20) 00:26:05.236 [2024-07-15 22:42:29.098335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:15476 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.236 [2024-07-15 22:42:29.098343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:05.236 [2024-07-15 22:42:29.108154] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d9df20) 00:26:05.236 [2024-07-15 22:42:29.108173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:15063 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.236 [2024-07-15 22:42:29.108181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:05.236 [2024-07-15 22:42:29.117489] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d9df20) 00:26:05.236 [2024-07-15 22:42:29.117508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:19664 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.236 [2024-07-15 22:42:29.117516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:05.236 [2024-07-15 22:42:29.126824] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d9df20) 00:26:05.236 [2024-07-15 22:42:29.126844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:22203 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.236 [2024-07-15 22:42:29.126851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:05.236 [2024-07-15 22:42:29.136540] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d9df20) 00:26:05.236 [2024-07-15 22:42:29.136559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:1948 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.236 [2024-07-15 22:42:29.136567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:05.236 [2024-07-15 22:42:29.145042] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d9df20) 00:26:05.236 [2024-07-15 
22:42:29.145061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:22626 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.236 [2024-07-15 22:42:29.145068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:05.236 [2024-07-15 22:42:29.155067] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d9df20) 00:26:05.236 [2024-07-15 22:42:29.155086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3799 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.236 [2024-07-15 22:42:29.155093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:05.236 [2024-07-15 22:42:29.164163] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d9df20) 00:26:05.236 [2024-07-15 22:42:29.164183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:20793 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.236 [2024-07-15 22:42:29.164191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:05.236 [2024-07-15 22:42:29.173538] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d9df20) 00:26:05.236 [2024-07-15 22:42:29.173556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:15945 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.236 [2024-07-15 22:42:29.173564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:05.236 [2024-07-15 22:42:29.183680] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d9df20) 00:26:05.236 [2024-07-15 22:42:29.183699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:10422 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.236 [2024-07-15 22:42:29.183707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:05.236 [2024-07-15 22:42:29.191843] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d9df20) 00:26:05.236 [2024-07-15 22:42:29.191862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18368 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.236 [2024-07-15 22:42:29.191870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:05.236 [2024-07-15 22:42:29.201469] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d9df20) 00:26:05.236 [2024-07-15 22:42:29.201489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:3588 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.236 [2024-07-15 22:42:29.201497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:05.496 [2024-07-15 22:42:29.212493] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x1d9df20) 00:26:05.496 [2024-07-15 22:42:29.212513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:14900 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.496 [2024-07-15 22:42:29.212523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:05.496 [2024-07-15 22:42:29.221614] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d9df20) 00:26:05.496 [2024-07-15 22:42:29.221634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:15544 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.496 [2024-07-15 22:42:29.221642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:05.496 [2024-07-15 22:42:29.231570] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d9df20) 00:26:05.496 [2024-07-15 22:42:29.231589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:10239 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.496 [2024-07-15 22:42:29.231596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:05.496 [2024-07-15 22:42:29.240356] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d9df20) 00:26:05.496 [2024-07-15 22:42:29.240375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:14458 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.496 [2024-07-15 22:42:29.240383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:05.496 [2024-07-15 22:42:29.250296] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d9df20) 00:26:05.496 [2024-07-15 22:42:29.250316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:23559 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.496 [2024-07-15 22:42:29.250324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:05.496 [2024-07-15 22:42:29.259708] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d9df20) 00:26:05.496 [2024-07-15 22:42:29.259727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:23485 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.496 [2024-07-15 22:42:29.259735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:05.496 [2024-07-15 22:42:29.268663] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d9df20) 00:26:05.496 [2024-07-15 22:42:29.268682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:16884 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.496 [2024-07-15 22:42:29.268690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:05.497 [2024-07-15 22:42:29.278747] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d9df20) 00:26:05.497 [2024-07-15 22:42:29.278766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:18792 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.497 [2024-07-15 22:42:29.278774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:05.497 [2024-07-15 22:42:29.288766] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d9df20) 00:26:05.497 [2024-07-15 22:42:29.288785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:19803 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.497 [2024-07-15 22:42:29.288793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:05.497 [2024-07-15 22:42:29.296643] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d9df20) 00:26:05.497 [2024-07-15 22:42:29.296662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:23947 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.497 [2024-07-15 22:42:29.296670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:05.497 [2024-07-15 22:42:29.306836] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d9df20) 00:26:05.497 [2024-07-15 22:42:29.306856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:16369 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.497 [2024-07-15 22:42:29.306864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:05.497 [2024-07-15 22:42:29.316742] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d9df20) 00:26:05.497 [2024-07-15 22:42:29.316761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:13580 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.497 [2024-07-15 22:42:29.316769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:05.497 [2024-07-15 22:42:29.325307] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d9df20) 00:26:05.497 [2024-07-15 22:42:29.325326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:11961 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.497 [2024-07-15 22:42:29.325334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:05.497 [2024-07-15 22:42:29.335122] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d9df20) 00:26:05.497 [2024-07-15 22:42:29.335142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:22616 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.497 [2024-07-15 22:42:29.335150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 
m:0 dnr:0
00:26:05.497 [2024-07-15 22:42:29.345786] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d9df20)
00:26:05.497 [2024-07-15 22:42:29.345807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:13879 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:05.497 [2024-07-15 22:42:29.345816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:05.497 [2024-07-15 22:42:29.354397] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d9df20)
00:26:05.497 [2024-07-15 22:42:29.354417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:22803 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:05.497 [2024-07-15 22:42:29.354425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:05.497 [2024-07-15 22:42:29.365052] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d9df20)
00:26:05.497 [2024-07-15 22:42:29.365072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:12699 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:05.497 [2024-07-15 22:42:29.365081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:05.497 [2024-07-15 22:42:29.373730] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d9df20)
00:26:05.497 [2024-07-15 22:42:29.373749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:6856 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:05.497 [2024-07-15 22:42:29.373763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:05.497 [2024-07-15 22:42:29.383093] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d9df20)
00:26:05.497 [2024-07-15 22:42:29.383113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:5367 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:05.497 [2024-07-15 22:42:29.383121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:05.497 [2024-07-15 22:42:29.393585] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d9df20)
00:26:05.497 [2024-07-15 22:42:29.393605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8859 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:05.497 [2024-07-15 22:42:29.393613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:05.497 [2024-07-15 22:42:29.403368] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d9df20)
00:26:05.497 [2024-07-15 22:42:29.403387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:13931 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:05.497 [2024-07-15 22:42:29.403395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:05.497 [2024-07-15 22:42:29.411750] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d9df20)
00:26:05.497 [2024-07-15 22:42:29.411769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:25075 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:05.497 [2024-07-15 22:42:29.411777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:05.497 [2024-07-15 22:42:29.421802] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d9df20)
00:26:05.497 [2024-07-15 22:42:29.421821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:16981 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:05.497 [2024-07-15 22:42:29.421829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:05.497 [2024-07-15 22:42:29.431379] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d9df20)
00:26:05.497 [2024-07-15 22:42:29.431397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:23601 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:05.497 [2024-07-15 22:42:29.431405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:05.497 [2024-07-15 22:42:29.440827] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d9df20)
00:26:05.497 [2024-07-15 22:42:29.440845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18051 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:05.497 [2024-07-15 22:42:29.440853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:05.497 [2024-07-15 22:42:29.450332] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d9df20)
00:26:05.497 [2024-07-15 22:42:29.450351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12124 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:05.497 [2024-07-15 22:42:29.450359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:05.497 [2024-07-15 22:42:29.458945] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d9df20)
00:26:05.497 [2024-07-15 22:42:29.458967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22625 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:05.497 [2024-07-15 22:42:29.458975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:05.759 [2024-07-15 22:42:29.467913] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d9df20)
00:26:05.759 [2024-07-15 22:42:29.467932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12243 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:05.759 [2024-07-15 22:42:29.467940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:05.759 [2024-07-15 22:42:29.477852] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d9df20)
00:26:05.759 [2024-07-15 22:42:29.477871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:25130 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:05.759 [2024-07-15 22:42:29.477879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:05.759 [2024-07-15 22:42:29.487582] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d9df20)
00:26:05.759 [2024-07-15 22:42:29.487603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:3834 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:05.759 [2024-07-15 22:42:29.487610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:05.759 [2024-07-15 22:42:29.497979] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d9df20)
00:26:05.759 [2024-07-15 22:42:29.497998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:17179 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:05.759 [2024-07-15 22:42:29.498005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:05.759 [2024-07-15 22:42:29.506805] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d9df20)
00:26:05.759 [2024-07-15 22:42:29.506824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12526 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:05.759 [2024-07-15 22:42:29.506833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:05.759 [2024-07-15 22:42:29.516710] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d9df20)
00:26:05.759 [2024-07-15 22:42:29.516730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:15005 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:05.759 [2024-07-15 22:42:29.516738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:05.759 [2024-07-15 22:42:29.525176] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d9df20)
00:26:05.759 [2024-07-15 22:42:29.525195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:5426 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:05.759 [2024-07-15 22:42:29.525203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:05.759 [2024-07-15 22:42:29.536091] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d9df20)
00:26:05.759 [2024-07-15 22:42:29.536111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:10871 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:05.759 [2024-07-15 22:42:29.536118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:05.759 [2024-07-15 22:42:29.545697] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d9df20)
00:26:05.759 [2024-07-15 22:42:29.545717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:18015 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:05.759 [2024-07-15 22:42:29.545725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:05.759 [2024-07-15 22:42:29.554314] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d9df20)
00:26:05.759 [2024-07-15 22:42:29.554333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:2049 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:05.759 [2024-07-15 22:42:29.554341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:05.759 [2024-07-15 22:42:29.563382] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d9df20)
00:26:05.759 [2024-07-15 22:42:29.563401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:9433 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:05.759 [2024-07-15 22:42:29.563409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:05.759 [2024-07-15 22:42:29.572685] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d9df20)
00:26:05.759 [2024-07-15 22:42:29.572704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:15444 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:05.759 [2024-07-15 22:42:29.572711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:05.759 [2024-07-15 22:42:29.582436] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d9df20)
00:26:05.759 [2024-07-15 22:42:29.582455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9334 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:05.759 [2024-07-15 22:42:29.582463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:05.759 [2024-07-15 22:42:29.592632] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d9df20)
00:26:05.759 [2024-07-15 22:42:29.592652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:4980 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:05.759 [2024-07-15 22:42:29.592659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:05.759 [2024-07-15 22:42:29.601604] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d9df20)
00:26:05.759 [2024-07-15 22:42:29.601624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:309 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:05.759 [2024-07-15 22:42:29.601632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:05.760 [2024-07-15 22:42:29.610577] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d9df20)
00:26:05.760 [2024-07-15 22:42:29.610596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:17030 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:05.760 [2024-07-15 22:42:29.610604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:05.760 [2024-07-15 22:42:29.620628] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d9df20)
00:26:05.760 [2024-07-15 22:42:29.620647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:16109 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:05.760 [2024-07-15 22:42:29.620658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:05.760 [2024-07-15 22:42:29.629296] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d9df20)
00:26:05.760 [2024-07-15 22:42:29.629315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:2301 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:05.760 [2024-07-15 22:42:29.629322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:05.760 [2024-07-15 22:42:29.639382] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d9df20)
00:26:05.760 [2024-07-15 22:42:29.639401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15356 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:05.760 [2024-07-15 22:42:29.639409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:05.760 [2024-07-15 22:42:29.648476] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d9df20)
00:26:05.760 [2024-07-15 22:42:29.648495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:3277 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:05.760 [2024-07-15 22:42:29.648503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:05.760 [2024-07-15 22:42:29.658441] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d9df20)
00:26:05.760 [2024-07-15 22:42:29.658460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:12020 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:05.760 [2024-07-15 22:42:29.658467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:05.760 [2024-07-15 22:42:29.668177] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d9df20)
00:26:05.760 [2024-07-15 22:42:29.668197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:15767 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:05.760 [2024-07-15 22:42:29.668204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:05.760 [2024-07-15 22:42:29.677710] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d9df20)
00:26:05.760 [2024-07-15 22:42:29.677729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:22354 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:05.760 [2024-07-15 22:42:29.677736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:05.760 [2024-07-15 22:42:29.685842] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d9df20)
00:26:05.760 [2024-07-15 22:42:29.685861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:21211 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:05.760 [2024-07-15 22:42:29.685868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:05.760 [2024-07-15 22:42:29.696173] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d9df20)
00:26:05.760 [2024-07-15 22:42:29.696192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:18728 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:05.760 [2024-07-15 22:42:29.696200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:05.760 [2024-07-15 22:42:29.705590] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d9df20)
00:26:05.760 [2024-07-15 22:42:29.705613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:22825 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:05.760 [2024-07-15 22:42:29.705620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:05.760 [2024-07-15 22:42:29.714002] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d9df20)
00:26:05.760 [2024-07-15 22:42:29.714022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:3022 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:05.760 [2024-07-15 22:42:29.714030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:05.760 [2024-07-15 22:42:29.724007] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d9df20)
00:26:05.760 [2024-07-15 22:42:29.724025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24145 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:05.760 [2024-07-15 22:42:29.724033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:06.054 [2024-07-15 22:42:29.733786] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d9df20)
00:26:06.055 [2024-07-15 22:42:29.733806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:14230 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:06.055 [2024-07-15 22:42:29.733814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:06.055 [2024-07-15 22:42:29.743914] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d9df20)
00:26:06.055 [2024-07-15 22:42:29.743933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:2191 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:06.055 [2024-07-15 22:42:29.743942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:06.055 [2024-07-15 22:42:29.753030] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d9df20)
00:26:06.055 [2024-07-15 22:42:29.753051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:18038 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:06.055 [2024-07-15 22:42:29.753059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:06.055 [2024-07-15 22:42:29.763142] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d9df20)
00:26:06.055 [2024-07-15 22:42:29.763162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:17601 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:06.055 [2024-07-15 22:42:29.763171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:06.055 [2024-07-15 22:42:29.771531] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d9df20)
00:26:06.055 [2024-07-15 22:42:29.771550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7100 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:06.055 [2024-07-15 22:42:29.771558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:06.055 [2024-07-15 22:42:29.781216] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d9df20)
00:26:06.055 [2024-07-15 22:42:29.781241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:24466 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:06.055 [2024-07-15 22:42:29.781249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:06.055 [2024-07-15 22:42:29.792283] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d9df20)
00:26:06.055 [2024-07-15 22:42:29.792305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:14014 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:06.055 [2024-07-15 22:42:29.792313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:06.055 [2024-07-15 22:42:29.800914] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d9df20)
00:26:06.055 [2024-07-15 22:42:29.800932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:8621 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:06.055 [2024-07-15 22:42:29.800940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:06.055 [2024-07-15 22:42:29.812236] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d9df20)
00:26:06.055 [2024-07-15 22:42:29.812256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:3207 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:06.055 [2024-07-15 22:42:29.812264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:06.055 [2024-07-15 22:42:29.821126] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d9df20)
00:26:06.055 [2024-07-15 22:42:29.821148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:747 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:06.055 [2024-07-15 22:42:29.821155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:06.055 [2024-07-15 22:42:29.830587] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d9df20)
00:26:06.055 [2024-07-15 22:42:29.830606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:20619 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:06.055 [2024-07-15 22:42:29.830614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:06.055 [2024-07-15 22:42:29.840493] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d9df20)
00:26:06.055 [2024-07-15 22:42:29.840512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:133 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:06.055 [2024-07-15 22:42:29.840520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:06.055 [2024-07-15 22:42:29.850419] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d9df20)
00:26:06.055 [2024-07-15 22:42:29.850437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:1109 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:06.055 [2024-07-15 22:42:29.850444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:06.055 [2024-07-15 22:42:29.859674] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d9df20)
00:26:06.055 [2024-07-15 22:42:29.859693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:14915 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:06.055 [2024-07-15 22:42:29.859701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:06.055 [2024-07-15 22:42:29.868204] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d9df20)
00:26:06.055 [2024-07-15 22:42:29.868231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18997 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:06.055 [2024-07-15 22:42:29.868240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:06.055 [2024-07-15 22:42:29.877829] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d9df20)
00:26:06.055 [2024-07-15 22:42:29.877849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:23212 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:06.055 [2024-07-15 22:42:29.877858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:06.055 [2024-07-15 22:42:29.887934] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d9df20)
00:26:06.055 [2024-07-15 22:42:29.887955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:6898 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:06.055 [2024-07-15 22:42:29.887965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:06.055 [2024-07-15 22:42:29.897382] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d9df20)
00:26:06.055 [2024-07-15 22:42:29.897402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:2612 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:06.055 [2024-07-15 22:42:29.897411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:06.055 [2024-07-15 22:42:29.907133] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d9df20)
00:26:06.055 [2024-07-15 22:42:29.907152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9855 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:06.055 [2024-07-15 22:42:29.907162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:06.055 [2024-07-15 22:42:29.917599] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d9df20)
00:26:06.055 [2024-07-15 22:42:29.917620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:11634 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:06.055 [2024-07-15 22:42:29.917630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:06.055 [2024-07-15 22:42:29.926143] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d9df20)
00:26:06.055 [2024-07-15 22:42:29.926165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:926 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:06.055 [2024-07-15 22:42:29.926176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:06.055 [2024-07-15 22:42:29.935793] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d9df20)
00:26:06.055 [2024-07-15 22:42:29.935814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:23113 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:06.055 [2024-07-15 22:42:29.935823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:06.055 [2024-07-15 22:42:29.945188] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d9df20)
00:26:06.055 [2024-07-15 22:42:29.945210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:23199 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:06.055 [2024-07-15 22:42:29.945220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:06.055 [2024-07-15 22:42:29.955795] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d9df20)
00:26:06.055 [2024-07-15 22:42:29.955816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:5146 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:06.055 [2024-07-15 22:42:29.955827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:06.055 [2024-07-15 22:42:29.965041] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d9df20)
00:26:06.055 [2024-07-15 22:42:29.965061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:25349 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:06.055 [2024-07-15 22:42:29.965070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:06.055 [2024-07-15 22:42:29.975564] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d9df20)
00:26:06.055 [2024-07-15 22:42:29.975586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:2809 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:06.055 [2024-07-15 22:42:29.975595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:06.055 [2024-07-15 22:42:29.984085] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d9df20)
00:26:06.055 [2024-07-15 22:42:29.984107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:59 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:06.055 [2024-07-15 22:42:29.984117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:06.056 [2024-07-15 22:42:29.994247] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d9df20)
00:26:06.056 [2024-07-15 22:42:29.994269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:12757 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:06.056 [2024-07-15 22:42:29.994279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:06.056 [2024-07-15 22:42:30.002996] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d9df20)
00:26:06.056 [2024-07-15 22:42:30.003016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:9541 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:06.056 [2024-07-15 22:42:30.003026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:06.056 [2024-07-15 22:42:30.014907] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d9df20)
00:26:06.056 [2024-07-15 22:42:30.014928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:4015 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:06.056 [2024-07-15 22:42:30.014938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:06.315 [2024-07-15 22:42:30.023486] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d9df20)
00:26:06.315 [2024-07-15 22:42:30.023508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:18281 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:06.315 [2024-07-15 22:42:30.023518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:06.315 [2024-07-15 22:42:30.035774] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d9df20)
00:26:06.315 [2024-07-15 22:42:30.035802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:13253 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:06.315 [2024-07-15 22:42:30.035821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:06.315 [2024-07-15 22:42:30.044928] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d9df20)
00:26:06.315 [2024-07-15 22:42:30.044959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17387 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:06.315 [2024-07-15 22:42:30.044974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:06.315 [2024-07-15 22:42:30.055746] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d9df20)
00:26:06.316 [2024-07-15 22:42:30.055774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:17603 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:06.316 [2024-07-15 22:42:30.055787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:06.316 [2024-07-15 22:42:30.067501] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d9df20)
00:26:06.316 [2024-07-15 22:42:30.067527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:17583 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:06.316 [2024-07-15 22:42:30.067542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:06.316 [2024-07-15 22:42:30.077652] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d9df20)
00:26:06.316 [2024-07-15 22:42:30.077678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:14073 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:06.316 [2024-07-15 22:42:30.077694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:06.316 [2024-07-15 22:42:30.089273] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d9df20)
00:26:06.316 [2024-07-15 22:42:30.089301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:22516 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:06.316 [2024-07-15 22:42:30.089316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:06.316 [2024-07-15 22:42:30.098675] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d9df20)
00:26:06.316 [2024-07-15 22:42:30.098699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:18817 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:06.316 [2024-07-15 22:42:30.098709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:06.316 [2024-07-15 22:42:30.109930] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d9df20)
00:26:06.316 [2024-07-15 22:42:30.109952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19200 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:06.316 [2024-07-15 22:42:30.109962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:06.316 [2024-07-15 22:42:30.120702] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d9df20)
00:26:06.316 [2024-07-15 22:42:30.120724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:24181 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:06.316 [2024-07-15 22:42:30.120734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:06.316 [2024-07-15 22:42:30.128793] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d9df20)
00:26:06.316 [2024-07-15 22:42:30.128817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:19407 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:06.316 [2024-07-15 22:42:30.128827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:06.316 [2024-07-15 22:42:30.138876] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d9df20)
00:26:06.316 [2024-07-15 22:42:30.138897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:545 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:06.316 [2024-07-15 22:42:30.138907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:06.316 [2024-07-15 22:42:30.148770] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d9df20)
00:26:06.316 [2024-07-15 22:42:30.148791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:11716 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:06.316 [2024-07-15 22:42:30.148800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:06.316 [2024-07-15 22:42:30.159700] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d9df20)
00:26:06.316 [2024-07-15 22:42:30.159721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:6443 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:06.316 [2024-07-15 22:42:30.159730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:06.316 [2024-07-15 22:42:30.168714] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d9df20)
00:26:06.316 [2024-07-15 22:42:30.168735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:2393 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:06.316 [2024-07-15 22:42:30.168744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:06.316 [2024-07-15 22:42:30.180094] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d9df20)
00:26:06.316 [2024-07-15 22:42:30.180115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:1484 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:06.316 [2024-07-15 22:42:30.180126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:06.316 [2024-07-15 22:42:30.189839] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d9df20)
00:26:06.316 [2024-07-15 22:42:30.189858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:21199 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:06.316 [2024-07-15 22:42:30.189866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:06.316 [2024-07-15 22:42:30.198537] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d9df20)
00:26:06.316 [2024-07-15 22:42:30.198556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:23546 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:06.316 [2024-07-15 22:42:30.198564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:06.316 [2024-07-15 22:42:30.208345] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d9df20)
00:26:06.316 [2024-07-15 22:42:30.208364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:14302 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:06.316 [2024-07-15 22:42:30.208373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:06.316 [2024-07-15 22:42:30.217181] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d9df20)
00:26:06.316 [2024-07-15 22:42:30.217203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17564 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:06.316 [2024-07-15 22:42:30.217211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:06.316 [2024-07-15 22:42:30.227742] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d9df20)
00:26:06.316 [2024-07-15 22:42:30.227763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:13414 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:06.316 [2024-07-15 22:42:30.227772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:06.316 [2024-07-15 22:42:30.237223] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d9df20)
00:26:06.316 [2024-07-15 22:42:30.237250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:2998 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:06.316 [2024-07-15 22:42:30.237259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:06.316 [2024-07-15 22:42:30.246269] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d9df20)
00:26:06.316 [2024-07-15 22:42:30.246291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:12829 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:06.316 [2024-07-15 22:42:30.246300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:06.316 [2024-07-15 22:42:30.254509] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d9df20)
00:26:06.316 [2024-07-15 22:42:30.254530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:3483 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:06.316 [2024-07-15 22:42:30.254539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:06.316 [2024-07-15 22:42:30.265445] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d9df20)
00:26:06.316 [2024-07-15 22:42:30.265466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:8001 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:06.316 [2024-07-15 22:42:30.265474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:06.316 [2024-07-15 22:42:30.274336] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d9df20)
00:26:06.316 [2024-07-15 22:42:30.274358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:12062 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:06.316 [2024-07-15 22:42:30.274367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:06.316 [2024-07-15 22:42:30.284308] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d9df20)
00:26:06.316 [2024-07-15 22:42:30.284329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:1092 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:06.316 [2024-07-15 22:42:30.284338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:06.576 [2024-07-15 22:42:30.293814] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d9df20)
00:26:06.576 [2024-07-15 22:42:30.293836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:194 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:06.576 [2024-07-15 22:42:30.293848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:06.576 [2024-07-15 22:42:30.303254] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d9df20)
00:26:06.576 [2024-07-15 22:42:30.303274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:17539 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:06.576 [2024-07-15 22:42:30.303283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:06.576 [2024-07-15 22:42:30.312160] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d9df20)
00:26:06.576 [2024-07-15 22:42:30.312182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:15590 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:06.576 [2024-07-15 22:42:30.312191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:06.576 [2024-07-15 22:42:30.322498] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d9df20)
00:26:06.576 [2024-07-15 22:42:30.322518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:4592 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:06.576 [2024-07-15 22:42:30.322527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:06.576 [2024-07-15 22:42:30.330581] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d9df20)
00:26:06.576 [2024-07-15 22:42:30.330602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:9467 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:06.576 [2024-07-15 22:42:30.330610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:06.576 [2024-07-15 22:42:30.341380] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d9df20)
00:26:06.576 [2024-07-15 22:42:30.341401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:1987 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:06.576 [2024-07-15 22:42:30.341409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:06.576 [2024-07-15 22:42:30.350828] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d9df20)
00:26:06.576 [2024-07-15 22:42:30.350848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:20910 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:06.576 [2024-07-15 22:42:30.350857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:06.576 [2024-07-15 22:42:30.360158] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d9df20)
00:26:06.576 [2024-07-15 22:42:30.360178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:5165 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:06.576 [2024-07-15 22:42:30.360186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:06.577 [2024-07-15 22:42:30.369076] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d9df20)
00:26:06.577 [2024-07-15 22:42:30.369095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4638 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:06.577 [2024-07-15 22:42:30.369103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:06.577 [2024-07-15 22:42:30.378650] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d9df20)
00:26:06.577 [2024-07-15 22:42:30.378670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:5382 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:06.577 [2024-07-15 22:42:30.378678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:06.577 [2024-07-15 22:42:30.387344] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d9df20)
00:26:06.577 [2024-07-15 22:42:30.387365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7688 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:06.577 [2024-07-15 22:42:30.387373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:06.577 [2024-07-15 22:42:30.398180] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d9df20)
00:26:06.577 [2024-07-15 22:42:30.398200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:25467 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:06.577 [2024-07-15 22:42:30.398208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:06.577 [2024-07-15 22:42:30.406346] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d9df20)
00:26:06.577 [2024-07-15 22:42:30.406366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:4855 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:06.577 [2024-07-15 22:42:30.406374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:06.577 [2024-07-15 22:42:30.416983] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d9df20)
00:26:06.577 [2024-07-15 22:42:30.417004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:22210 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:06.577 [2024-07-15 22:42:30.417012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:06.577 [2024-07-15 22:42:30.426297] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d9df20)
00:26:06.577 [2024-07-15 22:42:30.426317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:5238 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:06.577 [2024-07-15 22:42:30.426326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:06.577 [2024-07-15 22:42:30.436437] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d9df20)
00:26:06.577 [2024-07-15 22:42:30.436458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:5573 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:06.577 [2024-07-15 22:42:30.436466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:06.577 [2024-07-15 22:42:30.445076] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d9df20)
00:26:06.577 [2024-07-15 22:42:30.445096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16189 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:06.577 [2024-07-15 22:42:30.445105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:06.577 [2024-07-15 22:42:30.455632] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d9df20)
00:26:06.577 [2024-07-15 22:42:30.455654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:13275 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:06.577 [2024-07-15 22:42:30.455666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:06.577 [2024-07-15 22:42:30.464857] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d9df20)
00:26:06.577 [2024-07-15 22:42:30.464878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:17255 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:06.577 [2024-07-15 22:42:30.464888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:06.577 [2024-07-15 22:42:30.474194] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d9df20)
00:26:06.577 [2024-07-15 22:42:30.474215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:6251 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:06.577 [2024-07-15 22:42:30.474223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:06.577 [2024-07-15 22:42:30.484398] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d9df20)
00:26:06.577 [2024-07-15 22:42:30.484418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:24987 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:06.577 [2024-07-15 22:42:30.484427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:06.577 [2024-07-15 22:42:30.493186] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d9df20)
00:26:06.577 [2024-07-15 22:42:30.493206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:12711 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:06.577 [2024-07-15 22:42:30.493214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:06.577 [2024-07-15 22:42:30.503779] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d9df20)
00:26:06.577 [2024-07-15 22:42:30.503800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:7999 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:06.577 [2024-07-15 22:42:30.503808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:06.577 [2024-07-15 22:42:30.513989] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d9df20)
00:26:06.577 [2024-07-15 22:42:30.514010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:9785 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:06.577 [2024-07-15 22:42:30.514019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:06.577 [2024-07-15 22:42:30.523391] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d9df20)
00:26:06.577 [2024-07-15 22:42:30.523412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:7594 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:06.577 [2024-07-15 22:42:30.523420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:06.577 [2024-07-15 22:42:30.533043] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d9df20)
00:26:06.577 [2024-07-15 22:42:30.533063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:5614 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:06.577 [2024-07-15 22:42:30.533071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:06.577 [2024-07-15 22:42:30.541704] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d9df20)
00:26:06.577 [2024-07-15 22:42:30.541728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3152 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:06.577 [2024-07-15 22:42:30.541736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:06.837 [2024-07-15 22:42:30.550972] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d9df20)
00:26:06.837 [2024-07-15 22:42:30.550992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:18796 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:06.837 [2024-07-15 22:42:30.551000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:06.837 [2024-07-15 22:42:30.560620] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d9df20)
00:26:06.837 [2024-07-15 22:42:30.560641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:501 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:06.837 [2024-07-15 22:42:30.560649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:06.837 [2024-07-15 22:42:30.569671] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d9df20)
00:26:06.837 [2024-07-15 22:42:30.569692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:5345 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:06.837 [2024-07-15 22:42:30.569701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:06.837 [2024-07-15 22:42:30.579397] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d9df20)
00:26:06.837 [2024-07-15 22:42:30.579418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:17626 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:06.837 [2024-07-15 22:42:30.579426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:06.837 [2024-07-15 22:42:30.588962] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d9df20)
00:26:06.837 [2024-07-15 22:42:30.588982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:22962 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:06.837 [2024-07-15 22:42:30.588990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:06.837 [2024-07-15 22:42:30.598190] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d9df20)
00:26:06.837 [2024-07-15 22:42:30.598210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:23027 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:06.837 [2024-07-15 22:42:30.598219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:06.837 [2024-07-15 22:42:30.606920] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d9df20)
00:26:06.837 [2024-07-15 22:42:30.606941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:15146 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:06.837 [2024-07-15 22:42:30.606949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:06.837 [2024-07-15 22:42:30.616653] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d9df20)
00:26:06.837 [2024-07-15 22:42:30.616673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:15118 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:06.837 [2024-07-15 22:42:30.616681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:06.837 [2024-07-15 22:42:30.626093] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d9df20)
00:26:06.837 [2024-07-15 22:42:30.626113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:14890 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:06.837 [2024-07-15 22:42:30.626123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:06.837 [2024-07-15 22:42:30.635171] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d9df20)
00:26:06.837 [2024-07-15 22:42:30.635192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:2062 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:06.837 [2024-07-15 22:42:30.635201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:06.837 [2024-07-15 22:42:30.644606] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d9df20)
00:26:06.838 [2024-07-15 22:42:30.644627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:23516 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:06.838 [2024-07-15 22:42:30.644636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:06.838 [2024-07-15 22:42:30.653624] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d9df20)
00:26:06.838 [2024-07-15 22:42:30.653644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:13071 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:06.838 [2024-07-15 22:42:30.653652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:06.838 [2024-07-15 22:42:30.663581] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d9df20)
00:26:06.838 [2024-07-15 22:42:30.663601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18272 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:06.838 [2024-07-15 22:42:30.663610] nvme_qpair.c:
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:06.838 [2024-07-15 22:42:30.672013] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d9df20) 00:26:06.838 [2024-07-15 22:42:30.672033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:16288 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.838 [2024-07-15 22:42:30.672041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:06.838 [2024-07-15 22:42:30.681829] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d9df20) 00:26:06.838 [2024-07-15 22:42:30.681850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4732 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.838 [2024-07-15 22:42:30.681858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:06.838 [2024-07-15 22:42:30.690956] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d9df20) 00:26:06.838 [2024-07-15 22:42:30.690976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7406 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.838 [2024-07-15 22:42:30.690984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:06.838 [2024-07-15 22:42:30.701750] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d9df20) 00:26:06.838 [2024-07-15 22:42:30.701771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:10577 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.838 [2024-07-15 22:42:30.701782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:06.838 [2024-07-15 22:42:30.709789] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d9df20) 00:26:06.838 [2024-07-15 22:42:30.709810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:9971 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.838 [2024-07-15 22:42:30.709818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:06.838 [2024-07-15 22:42:30.719443] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d9df20) 00:26:06.838 [2024-07-15 22:42:30.719464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:24834 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.838 [2024-07-15 22:42:30.719473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:06.838 [2024-07-15 22:42:30.729451] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d9df20) 00:26:06.838 [2024-07-15 22:42:30.729471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:9901 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.838 
[2024-07-15 22:42:30.729479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:06.838 [2024-07-15 22:42:30.738413] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d9df20) 00:26:06.838 [2024-07-15 22:42:30.738434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:22892 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.838 [2024-07-15 22:42:30.738442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:06.838 [2024-07-15 22:42:30.747614] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d9df20) 00:26:06.838 [2024-07-15 22:42:30.747634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:12609 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.838 [2024-07-15 22:42:30.747642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:06.838 [2024-07-15 22:42:30.758098] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d9df20) 00:26:06.838 [2024-07-15 22:42:30.758119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:8657 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.838 [2024-07-15 22:42:30.758127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:06.838 [2024-07-15 22:42:30.766583] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d9df20) 00:26:06.838 [2024-07-15 22:42:30.766604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:8763 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.838 [2024-07-15 22:42:30.766612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:06.838 [2024-07-15 22:42:30.776482] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d9df20) 00:26:06.838 [2024-07-15 22:42:30.776502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:18573 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.838 [2024-07-15 22:42:30.776511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:06.838 [2024-07-15 22:42:30.785380] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d9df20) 00:26:06.838 [2024-07-15 22:42:30.785404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:19634 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.838 [2024-07-15 22:42:30.785413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:06.838 [2024-07-15 22:42:30.794799] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d9df20) 00:26:06.838 [2024-07-15 22:42:30.794819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:448 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.838 [2024-07-15 22:42:30.794827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:06.838 [2024-07-15 22:42:30.805444] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d9df20) 00:26:06.838 [2024-07-15 22:42:30.805464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:13180 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.838 [2024-07-15 22:42:30.805472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:07.097 [2024-07-15 22:42:30.813359] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d9df20) 00:26:07.097 [2024-07-15 22:42:30.813379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:22786 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.097 [2024-07-15 22:42:30.813388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:07.097 [2024-07-15 22:42:30.823776] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d9df20) 00:26:07.098 [2024-07-15 22:42:30.823798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:11360 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.098 [2024-07-15 22:42:30.823806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:07.098 [2024-07-15 22:42:30.833178] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d9df20) 00:26:07.098 [2024-07-15 22:42:30.833199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:17939 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.098 [2024-07-15 22:42:30.833207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:07.098 [2024-07-15 22:42:30.842025] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d9df20) 00:26:07.098 [2024-07-15 22:42:30.842046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8770 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.098 [2024-07-15 22:42:30.842055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:07.098 [2024-07-15 22:42:30.852229] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d9df20) 00:26:07.098 [2024-07-15 22:42:30.852249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:15497 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.098 [2024-07-15 22:42:30.852257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:07.098 [2024-07-15 22:42:30.860186] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d9df20) 00:26:07.098 [2024-07-15 22:42:30.860205] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:39 nsid:1 lba:23125 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.098 [2024-07-15 22:42:30.860218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:07.098 [2024-07-15 22:42:30.871471] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d9df20) 00:26:07.098 [2024-07-15 22:42:30.871492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:14978 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.098 [2024-07-15 22:42:30.871500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:07.098 [2024-07-15 22:42:30.881654] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d9df20) 00:26:07.098 [2024-07-15 22:42:30.881674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:24652 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.098 [2024-07-15 22:42:30.881684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:07.098 [2024-07-15 22:42:30.890489] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d9df20) 00:26:07.098 [2024-07-15 22:42:30.890509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:592 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.098 [2024-07-15 22:42:30.890517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:07.098 [2024-07-15 22:42:30.901540] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d9df20) 00:26:07.098 [2024-07-15 22:42:30.901560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:20873 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.098 [2024-07-15 22:42:30.901568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:07.098 [2024-07-15 22:42:30.911154] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d9df20) 00:26:07.098 [2024-07-15 22:42:30.911174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:17200 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.098 [2024-07-15 22:42:30.911182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:07.098 [2024-07-15 22:42:30.919705] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d9df20) 00:26:07.098 [2024-07-15 22:42:30.919725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:10062 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.098 [2024-07-15 22:42:30.919733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:07.098 [2024-07-15 22:42:30.929842] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d9df20) 00:26:07.098 [2024-07-15 
22:42:30.929863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:18676 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.098 [2024-07-15 22:42:30.929871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:07.098 [2024-07-15 22:42:30.939800] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d9df20) 00:26:07.098 [2024-07-15 22:42:30.939822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17384 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.098 [2024-07-15 22:42:30.939830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:07.098 [2024-07-15 22:42:30.947352] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d9df20) 00:26:07.098 [2024-07-15 22:42:30.947376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:14004 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.098 [2024-07-15 22:42:30.947384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:07.098 00:26:07.098 Latency(us) 00:26:07.098 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:07.098 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:26:07.098 nvme0n1 : 2.00 26636.78 104.05 0.00 0.00 4799.90 2279.51 13905.03 00:26:07.098 =================================================================================================================== 00:26:07.098 Total : 26636.78 104.05 0.00 0.00 4799.90 2279.51 13905.03 00:26:07.098 0 00:26:07.098 22:42:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:26:07.098 22:42:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:26:07.098 22:42:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:26:07.098 | .driver_specific 00:26:07.098 | .nvme_error 00:26:07.098 | .status_code 00:26:07.098 | .command_transient_transport_error' 00:26:07.098 22:42:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:26:07.357 22:42:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 209 > 0 )) 00:26:07.357 22:42:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 148027 00:26:07.357 22:42:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 148027 ']' 00:26:07.357 22:42:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 148027 00:26:07.357 22:42:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:26:07.357 22:42:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:07.357 22:42:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 148027 00:26:07.357 22:42:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:26:07.357 22:42:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:26:07.357 22:42:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 148027' 00:26:07.357 killing process with pid 148027 00:26:07.357 22:42:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 148027 00:26:07.357 Received shutdown signal, test time was about 2.000000 seconds 00:26:07.357 00:26:07.357 Latency(us) 00:26:07.357 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:07.357 =================================================================================================================== 00:26:07.357 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:07.357 22:42:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 148027 00:26:07.616 22:42:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:26:07.616 22:42:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:26:07.616 22:42:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:26:07.616 22:42:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:26:07.616 22:42:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:26:07.616 22:42:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=148533 00:26:07.616 22:42:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 148533 /var/tmp/bperf.sock 00:26:07.616 22:42:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:26:07.616 22:42:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 148533 ']' 00:26:07.616 22:42:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:07.616 22:42:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:07.616 22:42:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:07.617 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:07.617 22:42:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:07.617 22:42:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:07.617 [2024-07-15 22:42:31.418613] Starting SPDK v24.09-pre git sha1 f8598a71f / DPDK 24.03.0 initialization... 00:26:07.617 [2024-07-15 22:42:31.418662] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid148533 ] 00:26:07.617 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:07.617 Zero copy mechanism will not be used. 
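
The "(( 209 > 0 ))" step in the trace above is the pass criterion for the iteration that just finished: digest.sh reads the per-status-code NVMe error counters accumulated by the bdevperf instance and asserts that the injected digest failures were in fact recorded as transient transport errors. A minimal sketch of that check, reconstructed from the xtrace itself (the function name, socket path, and jq filter are taken from the trace above; this is not the verbatim script):

    #!/usr/bin/env bash
    # Reconstructed from the digest.sh xtrace above; assumes bdevperf is
    # serving RPCs on /var/tmp/bperf.sock, as in this run.
    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    get_transient_errcount() {
        local bdev=$1
        # bdev_nvme_set_options --nvme-error-stat (set earlier in the run)
        # makes the driver publish per-status-code error counts in iostat.
        "$rpc_py" -s /var/tmp/bperf.sock bdev_get_iostat -b "$bdev" \
            | jq -r '.bdevs[0]
                | .driver_specific
                | .nvme_error
                | .status_code
                | .command_transient_transport_error'
    }

    # The run must have produced at least one such error (209 here).
    (( $(get_transient_errcount nvme0n1) > 0 ))
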
00:26:07.617 EAL: No free 2048 kB hugepages reported on node 1
00:26:07.617 [2024-07-15 22:42:31.474562] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:26:07.617 [2024-07-15 22:42:31.549950] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:26:08.554 22:42:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:26:08.554 22:42:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0
00:26:08.554 22:42:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:26:08.554 22:42:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:26:08.554 22:42:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:26:08.554 22:42:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:08.554 22:42:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:26:08.554 22:42:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:08.554 22:42:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:26:08.554 22:42:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:26:08.812 nvme0n1
00:26:09.071 22:42:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:26:09.071 22:42:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:09.072 22:42:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:26:09.072 22:42:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:09.072 22:42:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:26:09.072 22:42:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:26:09.072 I/O size of 131072 is greater than zero copy threshold (65536).
00:26:09.072 Zero copy mechanism will not be used.
00:26:09.072 Running I/O for 2 seconds...
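
The xtrace above is the whole setup for this error iteration in one place: NVMe error statistics and unlimited retries are switched on, crc32c error injection is cleared while the controller is attached with data digest enabled (--ddgst), and corruption is re-armed before perform_tests starts the 2-second run. Condensed to the bare RPC sequence (flags copied from the trace; the shorthand variable and the reading of -i 32 as an injection interval are assumptions, not something the log states):

    # RPC endpoint of the bdevperf instance started above.
    rpc_py="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock"

    # Count NVMe errors per status code and retry failed I/O indefinitely,
    # so digest errors surface as statistics rather than failed reads.
    $rpc_py bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

    # Attach cleanly: injection stays off while the TCP connection and
    # namespace discovery complete.
    $rpc_py accel_error_inject_error -o crc32c -t disable
    $rpc_py bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

    # Re-arm corruption of crc32c results so received data PDUs fail the
    # digest check during the run (-i 32 as recorded above).
    $rpc_py accel_error_inject_error -o crc32c -t corrupt -i 32
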
00:26:09.072 [2024-07-15 22:42:32.903820] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d370b0) 00:26:09.072 [2024-07-15 22:42:32.903862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.072 [2024-07-15 22:42:32.903873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:09.594 [2024-07-15 22:42:33.310767] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d370b0) 00:26:09.594 [2024-07-15 22:42:33.310788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.594 [2024-07-15 22:42:33.310797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT
TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:09.594 [2024-07-15 22:42:33.321108] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d370b0) 00:26:09.594 [2024-07-15 22:42:33.321129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.594 [2024-07-15 22:42:33.321138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:09.594 [2024-07-15 22:42:33.331399] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d370b0) 00:26:09.594 [2024-07-15 22:42:33.331419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.594 [2024-07-15 22:42:33.331427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:09.594 [2024-07-15 22:42:33.341540] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d370b0) 00:26:09.594 [2024-07-15 22:42:33.341560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.594 [2024-07-15 22:42:33.341568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:09.594 [2024-07-15 22:42:33.351131] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d370b0) 00:26:09.594 [2024-07-15 22:42:33.351151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.594 [2024-07-15 22:42:33.351159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:09.594 [2024-07-15 22:42:33.360895] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d370b0) 00:26:09.594 [2024-07-15 22:42:33.360914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.594 [2024-07-15 22:42:33.360922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:09.594 [2024-07-15 22:42:33.369286] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d370b0) 00:26:09.595 [2024-07-15 22:42:33.369306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.595 [2024-07-15 22:42:33.369314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:09.595 [2024-07-15 22:42:33.379100] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d370b0) 00:26:09.595 [2024-07-15 22:42:33.379120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.595 [2024-07-15 22:42:33.379129] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:09.595 [2024-07-15 22:42:33.387448] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d370b0) 00:26:09.595 [2024-07-15 22:42:33.387469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.595 [2024-07-15 22:42:33.387478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:09.595 [2024-07-15 22:42:33.398315] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d370b0) 00:26:09.595 [2024-07-15 22:42:33.398336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.595 [2024-07-15 22:42:33.398344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:09.595 [2024-07-15 22:42:33.409201] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d370b0) 00:26:09.595 [2024-07-15 22:42:33.409221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.595 [2024-07-15 22:42:33.409235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:09.595 [2024-07-15 22:42:33.419790] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d370b0) 00:26:09.595 [2024-07-15 22:42:33.419813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.595 [2024-07-15 22:42:33.419821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:09.595 [2024-07-15 22:42:33.430142] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d370b0) 00:26:09.595 [2024-07-15 22:42:33.430163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.595 [2024-07-15 22:42:33.430171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:09.595 [2024-07-15 22:42:33.441546] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d370b0) 00:26:09.595 [2024-07-15 22:42:33.441566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:09.595 [2024-07-15 22:42:33.441575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:09.595 [2024-07-15 22:42:33.452672] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d370b0) 00:26:09.595 [2024-07-15 22:42:33.452692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:09.595 [2024-07-15 22:42:33.452700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:09.595 [2024-07-15 22:42:33.463604] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d370b0)
00:26:09.595 [2024-07-15 22:42:33.463626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:09.595 [2024-07-15 22:42:33.463634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:09.595 [2024-07-15 22:42:33.473663] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d370b0)
00:26:09.595 [2024-07-15 22:42:33.473685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:09.595 [2024-07-15 22:42:33.473697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:09.595 [2024-07-15 22:42:33.484585] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d370b0)
00:26:09.595 [2024-07-15 22:42:33.484606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:09.595 [2024-07-15 22:42:33.484615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:09.595 [2024-07-15 22:42:33.494866] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d370b0)
00:26:09.595 [2024-07-15 22:42:33.494887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:09.595 [2024-07-15 22:42:33.494896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:09.595 [2024-07-15 22:42:33.506197] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d370b0)
00:26:09.595 [2024-07-15 22:42:33.506217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:09.595 [2024-07-15 22:42:33.506232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:09.595 [2024-07-15 22:42:33.518292] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d370b0)
00:26:09.595 [2024-07-15 22:42:33.518314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:09.595 [2024-07-15 22:42:33.518323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:09.595 [2024-07-15 22:42:33.529497] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d370b0)
00:26:09.595 [2024-07-15 22:42:33.529517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:09.595 [2024-07-15 22:42:33.529526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:09.595 [2024-07-15 22:42:33.540104] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d370b0)
00:26:09.595 [2024-07-15 22:42:33.540123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:09.595 [2024-07-15 22:42:33.540131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:09.595 [2024-07-15 22:42:33.549919] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d370b0)
00:26:09.595 [2024-07-15 22:42:33.549939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:09.595 [2024-07-15 22:42:33.549947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:09.595 [2024-07-15 22:42:33.558862] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d370b0)
00:26:09.595 [2024-07-15 22:42:33.558882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:09.595 [2024-07-15 22:42:33.558907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:09.855 [2024-07-15 22:42:33.568138] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d370b0)
00:26:09.855 [2024-07-15 22:42:33.568163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:09.855 [2024-07-15 22:42:33.568172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:09.855 [2024-07-15 22:42:33.579646] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d370b0)
00:26:09.855 [2024-07-15 22:42:33.579666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:09.855 [2024-07-15 22:42:33.579675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:09.855 [2024-07-15 22:42:33.591010] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d370b0)
00:26:09.855 [2024-07-15 22:42:33.591032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:09.855 [2024-07-15 22:42:33.591041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:09.855 [2024-07-15 22:42:33.602200] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d370b0)
00:26:09.855 [2024-07-15 22:42:33.602220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:09.855 [2024-07-15 22:42:33.602234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:09.855 [2024-07-15 22:42:33.611810] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d370b0)
00:26:09.855 [2024-07-15 22:42:33.611832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:09.855 [2024-07-15 22:42:33.611841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:09.855 [2024-07-15 22:42:33.622825] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d370b0)
00:26:09.855 [2024-07-15 22:42:33.622847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:09.855 [2024-07-15 22:42:33.622855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:09.855 [2024-07-15 22:42:33.634477] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d370b0)
00:26:09.855 [2024-07-15 22:42:33.634497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:09.855 [2024-07-15 22:42:33.634506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:09.855 [2024-07-15 22:42:33.646355] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d370b0)
00:26:09.855 [2024-07-15 22:42:33.646376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:09.855 [2024-07-15 22:42:33.646384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:09.855 [2024-07-15 22:42:33.655653] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d370b0)
00:26:09.855 [2024-07-15 22:42:33.655674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:09.855 [2024-07-15 22:42:33.655688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:09.855 [2024-07-15 22:42:33.665603] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d370b0)
00:26:09.855 [2024-07-15 22:42:33.665625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:09.855 [2024-07-15 22:42:33.665634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:09.855 [2024-07-15 22:42:33.675705] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d370b0)
00:26:09.855 [2024-07-15 22:42:33.675726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:09.855 [2024-07-15 22:42:33.675735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:09.855 [2024-07-15 22:42:33.685510] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d370b0)
00:26:09.855 [2024-07-15 22:42:33.685532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:09.855 [2024-07-15 22:42:33.685541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:09.855 [2024-07-15 22:42:33.695390] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d370b0)
00:26:09.855 [2024-07-15 22:42:33.695422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:09.855 [2024-07-15 22:42:33.695431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:09.855 [2024-07-15 22:42:33.706038] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d370b0)
00:26:09.855 [2024-07-15 22:42:33.706059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:09.855 [2024-07-15 22:42:33.706067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:09.855 [2024-07-15 22:42:33.716090] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d370b0)
00:26:09.855 [2024-07-15 22:42:33.716109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:09.855 [2024-07-15 22:42:33.716117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:09.855 [2024-07-15 22:42:33.724714] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d370b0)
00:26:09.855 [2024-07-15 22:42:33.724734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:09.855 [2024-07-15 22:42:33.724742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:09.855 [2024-07-15 22:42:33.732512] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d370b0)
00:26:09.855 [2024-07-15 22:42:33.732531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:09.855 [2024-07-15 22:42:33.732539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:09.855 [2024-07-15 22:42:33.740495] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d370b0)
00:26:09.855 [2024-07-15 22:42:33.740520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:09.855 [2024-07-15 22:42:33.740528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:09.855 [2024-07-15 22:42:33.747480] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d370b0)
00:26:09.855 [2024-07-15 22:42:33.747499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:09.855 [2024-07-15 22:42:33.747507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:09.855 [2024-07-15 22:42:33.754024] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d370b0)
00:26:09.855 [2024-07-15 22:42:33.754043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:09.855 [2024-07-15 22:42:33.754051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:09.855 [2024-07-15 22:42:33.760939] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d370b0)
00:26:09.856 [2024-07-15 22:42:33.760959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:09.856 [2024-07-15 22:42:33.760967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:09.856 [2024-07-15 22:42:33.766818] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d370b0)
00:26:09.856 [2024-07-15 22:42:33.766838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:09.856 [2024-07-15 22:42:33.766846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:09.856 [2024-07-15 22:42:33.772875] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d370b0)
00:26:09.856 [2024-07-15 22:42:33.772894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:09.856 [2024-07-15 22:42:33.772901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:09.856 [2024-07-15 22:42:33.780486] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d370b0)
00:26:09.856 [2024-07-15 22:42:33.780505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:09.856 [2024-07-15 22:42:33.780513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:09.856 [2024-07-15 22:42:33.790452] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d370b0)
00:26:09.856 [2024-07-15 22:42:33.790471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:09.856 [2024-07-15 22:42:33.790479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:09.856 [2024-07-15 22:42:33.798931] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d370b0)
00:26:09.856 [2024-07-15 22:42:33.798957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:09.856 [2024-07-15 22:42:33.798965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:09.856 [2024-07-15 22:42:33.806757] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d370b0)
00:26:09.856 [2024-07-15 22:42:33.806776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:09.856 [2024-07-15 22:42:33.806784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:09.856 [2024-07-15 22:42:33.816938] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d370b0)
00:26:09.856 [2024-07-15 22:42:33.816958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:09.856 [2024-07-15 22:42:33.816966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:10.116 [2024-07-15 22:42:33.826346] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d370b0)
00:26:10.116 [2024-07-15 22:42:33.826368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:10.116 [2024-07-15 22:42:33.826376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:10.116 [2024-07-15 22:42:33.834563] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d370b0)
00:26:10.116 [2024-07-15 22:42:33.834582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:10.116 [2024-07-15 22:42:33.834590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:10.116 [2024-07-15 22:42:33.842527] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d370b0)
00:26:10.116 [2024-07-15 22:42:33.842547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:10.116 [2024-07-15 22:42:33.842555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:10.116 [2024-07-15 22:42:33.849606] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d370b0)
00:26:10.116 [2024-07-15 22:42:33.849625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:10.116 [2024-07-15 22:42:33.849633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:10.116 [2024-07-15 22:42:33.856159] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d370b0)
00:26:10.116 [2024-07-15 22:42:33.856178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:10.116 [2024-07-15 22:42:33.856186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:10.116 [2024-07-15 22:42:33.862793] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d370b0)
00:26:10.116 [2024-07-15 22:42:33.862813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:10.116 [2024-07-15 22:42:33.862822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:10.116 [2024-07-15 22:42:33.871126] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d370b0)
00:26:10.116 [2024-07-15 22:42:33.871146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:10.116 [2024-07-15 22:42:33.871157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:10.116 [2024-07-15 22:42:33.881497] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d370b0)
00:26:10.116 [2024-07-15 22:42:33.881517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:10.116 [2024-07-15 22:42:33.881526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:10.116 [2024-07-15 22:42:33.890886] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d370b0)
00:26:10.116 [2024-07-15 22:42:33.890907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:10.116 [2024-07-15 22:42:33.890915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:10.116 [2024-07-15 22:42:33.899782] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d370b0)
00:26:10.116 [2024-07-15 22:42:33.899802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:10.116 [2024-07-15 22:42:33.899810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:10.116 [2024-07-15 22:42:33.908518] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d370b0)
00:26:10.116 [2024-07-15 22:42:33.908538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:10.116 [2024-07-15 22:42:33.908546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:10.116 [2024-07-15 22:42:33.917604] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d370b0)
00:26:10.116 [2024-07-15 22:42:33.917624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:10.116 [2024-07-15 22:42:33.917632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:10.116 [2024-07-15 22:42:33.926119] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d370b0)
00:26:10.116 [2024-07-15 22:42:33.926140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:10.116 [2024-07-15 22:42:33.926148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:10.116 [2024-07-15 22:42:33.936035] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d370b0)
00:26:10.116 [2024-07-15 22:42:33.936055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:10.116 [2024-07-15 22:42:33.936064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:10.116 [2024-07-15 22:42:33.945310] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d370b0)
00:26:10.116 [2024-07-15 22:42:33.945330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:10.116 [2024-07-15 22:42:33.945338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:10.116 [2024-07-15 22:42:33.954174] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d370b0)
00:26:10.116 [2024-07-15 22:42:33.954195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:10.116 [2024-07-15 22:42:33.954203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:10.116 [2024-07-15 22:42:33.963854] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d370b0)
00:26:10.116 [2024-07-15 22:42:33.963875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:10.116 [2024-07-15 22:42:33.963884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:10.116 [2024-07-15 22:42:33.971402] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d370b0)
00:26:10.116 [2024-07-15 22:42:33.971422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:10.116 [2024-07-15 22:42:33.971430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:10.116 [2024-07-15 22:42:33.979608] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d370b0)
00:26:10.116 [2024-07-15 22:42:33.979629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:10.117 [2024-07-15 22:42:33.979637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:10.117 [2024-07-15 22:42:33.987132] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d370b0)
00:26:10.117 [2024-07-15 22:42:33.987152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:10.117 [2024-07-15 22:42:33.987161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:10.117 [2024-07-15 22:42:33.995603] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d370b0)
00:26:10.117 [2024-07-15 22:42:33.995624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:10.117 [2024-07-15 22:42:33.995632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:10.117 [2024-07-15 22:42:34.004682] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d370b0)
00:26:10.117 [2024-07-15 22:42:34.004702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:10.117 [2024-07-15 22:42:34.004710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:10.117 [2024-07-15 22:42:34.011998] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d370b0)
00:26:10.117 [2024-07-15 22:42:34.012017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:10.117 [2024-07-15 22:42:34.012026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:10.117 [2024-07-15 22:42:34.019272] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d370b0)
00:26:10.117 [2024-07-15 22:42:34.019292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:10.117 [2024-07-15 22:42:34.019304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:10.117 [2024-07-15 22:42:34.029738] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d370b0)
00:26:10.117 [2024-07-15 22:42:34.029757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:10.117 [2024-07-15 22:42:34.029765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:10.117 [2024-07-15 22:42:34.039761] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d370b0)
00:26:10.117 [2024-07-15 22:42:34.039780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:10.117 [2024-07-15 22:42:34.039789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:10.117 [2024-07-15 22:42:34.050268] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d370b0)
00:26:10.117 [2024-07-15 22:42:34.050288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:10.117 [2024-07-15 22:42:34.050297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:10.117 [2024-07-15 22:42:34.059977] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d370b0)
00:26:10.117 [2024-07-15 22:42:34.060000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:10.117 [2024-07-15 22:42:34.060009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:10.117 [2024-07-15 22:42:34.069812] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d370b0)
00:26:10.117 [2024-07-15 22:42:34.069834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:10.117 [2024-07-15 22:42:34.069843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:10.117 [2024-07-15 22:42:34.080099] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d370b0)
00:26:10.117 [2024-07-15 22:42:34.080121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:10.117 [2024-07-15 22:42:34.080129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:10.377 [2024-07-15 22:42:34.089590] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d370b0)
00:26:10.377 [2024-07-15 22:42:34.089613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:10.377 [2024-07-15 22:42:34.089622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:10.377 [2024-07-15 22:42:34.099469] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d370b0)
00:26:10.377 [2024-07-15 22:42:34.099490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:10.377 [2024-07-15 22:42:34.099498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:10.377 [2024-07-15 22:42:34.110613] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d370b0)
00:26:10.377 [2024-07-15 22:42:34.110637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:10.377 [2024-07-15 22:42:34.110645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:10.377 [2024-07-15 22:42:34.121205] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d370b0)
00:26:10.377 [2024-07-15 22:42:34.121230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:10.377 [2024-07-15 22:42:34.121238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:10.377 [2024-07-15 22:42:34.131400] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d370b0)
00:26:10.377 [2024-07-15 22:42:34.131420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:10.377 [2024-07-15 22:42:34.131428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:10.377 [2024-07-15 22:42:34.139385] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d370b0)
00:26:10.377 [2024-07-15 22:42:34.139405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:10.377 [2024-07-15 22:42:34.139413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:10.377 [2024-07-15 22:42:34.147516] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d370b0)
00:26:10.377 [2024-07-15 22:42:34.147536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:10.377 [2024-07-15 22:42:34.147544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:10.377 [2024-07-15 22:42:34.155352] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d370b0)
00:26:10.377 [2024-07-15 22:42:34.155372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:10.377 [2024-07-15 22:42:34.155380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:10.377 [2024-07-15 22:42:34.162194] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d370b0)
00:26:10.377 [2024-07-15 22:42:34.162214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:10.377 [2024-07-15 22:42:34.162222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:10.377 [2024-07-15 22:42:34.169003] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d370b0)
00:26:10.377 [2024-07-15 22:42:34.169023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:10.377 [2024-07-15 22:42:34.169031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:10.377 [2024-07-15 22:42:34.174930] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d370b0)
00:26:10.377 [2024-07-15 22:42:34.174951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:10.377 [2024-07-15 22:42:34.174958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:10.377 [2024-07-15 22:42:34.180724] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d370b0)
00:26:10.377 [2024-07-15 22:42:34.180746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:10.377 [2024-07-15 22:42:34.180754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:10.377 [2024-07-15 22:42:34.186547] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d370b0)
00:26:10.377 [2024-07-15 22:42:34.186567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:10.377 [2024-07-15 22:42:34.186575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:10.377 [2024-07-15 22:42:34.192468] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d370b0)
00:26:10.377 [2024-07-15 22:42:34.192488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:10.377 [2024-07-15 22:42:34.192496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:10.377 [2024-07-15 22:42:34.198093] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d370b0)
00:26:10.377 [2024-07-15 22:42:34.198114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:10.377 [2024-07-15 22:42:34.198121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:10.377 [2024-07-15 22:42:34.203841] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d370b0)
00:26:10.377 [2024-07-15 22:42:34.203862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:10.377 [2024-07-15 22:42:34.203870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:10.377 [2024-07-15 22:42:34.209600] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d370b0)
00:26:10.377 [2024-07-15 22:42:34.209621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:10.377 [2024-07-15 22:42:34.209631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:10.377 [2024-07-15 22:42:34.215065] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d370b0)
00:26:10.377 [2024-07-15 22:42:34.215087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:10.377 [2024-07-15 22:42:34.215096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:10.377 [2024-07-15 22:42:34.220506] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d370b0)
00:26:10.377 [2024-07-15 22:42:34.220528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:10.377 [2024-07-15 22:42:34.220536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:10.377 [2024-07-15 22:42:34.226206] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d370b0)
00:26:10.377 [2024-07-15 22:42:34.226234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:10.377 [2024-07-15 22:42:34.226246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:10.377 [2024-07-15 22:42:34.231888] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d370b0)
00:26:10.378 [2024-07-15 22:42:34.231910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:10.378 [2024-07-15 22:42:34.231918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:10.378 [2024-07-15 22:42:34.237592] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d370b0)
00:26:10.378 [2024-07-15 22:42:34.237613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:10.378 [2024-07-15 22:42:34.237622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:10.378 [2024-07-15 22:42:34.243377] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d370b0)
00:26:10.378 [2024-07-15 22:42:34.243397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:10.378 [2024-07-15 22:42:34.243406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:10.378 [2024-07-15 22:42:34.249017] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d370b0)
00:26:10.378 [2024-07-15 22:42:34.249036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:10.378 [2024-07-15 22:42:34.249044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:10.378 [2024-07-15 22:42:34.254724] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d370b0)
00:26:10.378 [2024-07-15 22:42:34.254745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:10.378 [2024-07-15 22:42:34.254754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:10.378 [2024-07-15 22:42:34.260349] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d370b0)
00:26:10.378 [2024-07-15 22:42:34.260369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:10.378 [2024-07-15 22:42:34.260377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:10.378 [2024-07-15 22:42:34.265990] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d370b0)
00:26:10.378 [2024-07-15 22:42:34.266011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:10.378 [2024-07-15 22:42:34.266019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:10.378 [2024-07-15 22:42:34.271622] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d370b0)
00:26:10.378 [2024-07-15 22:42:34.271643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:10.378 [2024-07-15 22:42:34.271651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:10.378 [2024-07-15 22:42:34.277897] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d370b0)
00:26:10.378 [2024-07-15 22:42:34.277921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:10.378 [2024-07-15 22:42:34.277929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:10.378 [2024-07-15 22:42:34.284098] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d370b0)
00:26:10.378 [2024-07-15 22:42:34.284120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:10.378 [2024-07-15 22:42:34.284129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:10.378 [2024-07-15 22:42:34.291245] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d370b0)
00:26:10.378 [2024-07-15 22:42:34.291268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:10.378 [2024-07-15 22:42:34.291276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:10.378 [2024-07-15 22:42:34.299383] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d370b0)
00:26:10.378 [2024-07-15 22:42:34.299406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:10.378 [2024-07-15 22:42:34.299415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:10.378 [2024-07-15 22:42:34.307826] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d370b0)
00:26:10.378 [2024-07-15 22:42:34.307849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:10.378 [2024-07-15 22:42:34.307857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:10.378 [2024-07-15 22:42:34.316533] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d370b0)
00:26:10.378 [2024-07-15 22:42:34.316554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:10.378 [2024-07-15 22:42:34.316562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:10.378 [2024-07-15 22:42:34.324832] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d370b0)
00:26:10.378 [2024-07-15 22:42:34.324853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:10.378 [2024-07-15 22:42:34.324862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:10.378 [2024-07-15 22:42:34.333031] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d370b0)
00:26:10.378 [2024-07-15 22:42:34.333053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:10.378 [2024-07-15 22:42:34.333061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:10.378 [2024-07-15 22:42:34.342052] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d370b0)
00:26:10.378 [2024-07-15 22:42:34.342074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:10.378 [2024-07-15 22:42:34.342087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:10.637 [2024-07-15 22:42:34.351488] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d370b0)
00:26:10.638 [2024-07-15 22:42:34.351522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:10.638 [2024-07-15 22:42:34.351530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:10.638 [2024-07-15 22:42:34.360793] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d370b0)
00:26:10.638 [2024-07-15 22:42:34.360815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:10.638 [2024-07-15 22:42:34.360823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:10.638 [2024-07-15 22:42:34.370075] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d370b0)
00:26:10.638 [2024-07-15 22:42:34.370097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:10.638 [2024-07-15 22:42:34.370106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:10.638 [2024-07-15 22:42:34.379530] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d370b0)
00:26:10.638 [2024-07-15 22:42:34.379552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:10.638 [2024-07-15 22:42:34.379560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:10.638 [2024-07-15 22:42:34.388051] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d370b0)
00:26:10.638 [2024-07-15 22:42:34.388072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:10.638
[2024-07-15 22:42:34.388081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:10.638 [2024-07-15 22:42:34.397155] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d370b0) 00:26:10.638 [2024-07-15 22:42:34.397177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.638 [2024-07-15 22:42:34.397186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:10.638 [2024-07-15 22:42:34.405893] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d370b0) 00:26:10.638 [2024-07-15 22:42:34.405914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.638 [2024-07-15 22:42:34.405921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:10.638 [2024-07-15 22:42:34.414440] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d370b0) 00:26:10.638 [2024-07-15 22:42:34.414463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.638 [2024-07-15 22:42:34.414471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:10.638 [2024-07-15 22:42:34.422291] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d370b0) 00:26:10.638 [2024-07-15 22:42:34.422317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.638 [2024-07-15 22:42:34.422325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:10.638 [2024-07-15 22:42:34.429667] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d370b0) 00:26:10.638 [2024-07-15 22:42:34.429688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.638 [2024-07-15 22:42:34.429697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:10.638 [2024-07-15 22:42:34.436677] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d370b0) 00:26:10.638 [2024-07-15 22:42:34.436699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.638 [2024-07-15 22:42:34.436708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:10.638 [2024-07-15 22:42:34.443495] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d370b0) 00:26:10.638 [2024-07-15 22:42:34.443516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10912 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.638 [2024-07-15 22:42:34.443524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:10.638 [2024-07-15 22:42:34.450199] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d370b0) 00:26:10.638 [2024-07-15 22:42:34.450221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.638 [2024-07-15 22:42:34.450236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:10.638 [2024-07-15 22:42:34.456382] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d370b0) 00:26:10.638 [2024-07-15 22:42:34.456403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.638 [2024-07-15 22:42:34.456411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:10.638 [2024-07-15 22:42:34.462550] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d370b0) 00:26:10.638 [2024-07-15 22:42:34.462570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.638 [2024-07-15 22:42:34.462577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:10.638 [2024-07-15 22:42:34.468505] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d370b0) 00:26:10.638 [2024-07-15 22:42:34.468526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.638 [2024-07-15 22:42:34.468534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:10.638 [2024-07-15 22:42:34.474415] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d370b0) 00:26:10.638 [2024-07-15 22:42:34.474436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.638 [2024-07-15 22:42:34.474444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:10.638 [2024-07-15 22:42:34.481062] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d370b0) 00:26:10.638 [2024-07-15 22:42:34.481085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.638 [2024-07-15 22:42:34.481093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:10.638 [2024-07-15 22:42:34.487647] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d370b0) 00:26:10.638 [2024-07-15 22:42:34.487669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:1 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.638 [2024-07-15 22:42:34.487678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:10.638 [2024-07-15 22:42:34.494922] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d370b0) 00:26:10.638 [2024-07-15 22:42:34.494944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.638 [2024-07-15 22:42:34.494952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:10.638 [2024-07-15 22:42:34.502375] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d370b0) 00:26:10.639 [2024-07-15 22:42:34.502396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.639 [2024-07-15 22:42:34.502404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:10.639 [2024-07-15 22:42:34.509191] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d370b0) 00:26:10.639 [2024-07-15 22:42:34.509212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.639 [2024-07-15 22:42:34.509220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:10.639 [2024-07-15 22:42:34.517222] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d370b0) 00:26:10.639 [2024-07-15 22:42:34.517250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.639 [2024-07-15 22:42:34.517259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:10.639 [2024-07-15 22:42:34.525712] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d370b0) 00:26:10.639 [2024-07-15 22:42:34.525735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.639 [2024-07-15 22:42:34.525744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:10.639 [2024-07-15 22:42:34.534839] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d370b0) 00:26:10.639 [2024-07-15 22:42:34.534861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.639 [2024-07-15 22:42:34.534870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:10.639 [2024-07-15 22:42:34.544542] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d370b0) 00:26:10.639 [2024-07-15 22:42:34.544564] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.639 [2024-07-15 22:42:34.544575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:10.639 [2024-07-15 22:42:34.553749] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d370b0) 00:26:10.639 [2024-07-15 22:42:34.553772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.639 [2024-07-15 22:42:34.553780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:10.639 [2024-07-15 22:42:34.563197] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d370b0) 00:26:10.639 [2024-07-15 22:42:34.563220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.639 [2024-07-15 22:42:34.563235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:10.639 [2024-07-15 22:42:34.572811] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d370b0) 00:26:10.639 [2024-07-15 22:42:34.572833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.639 [2024-07-15 22:42:34.572842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:10.639 [2024-07-15 22:42:34.583865] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d370b0) 00:26:10.639 [2024-07-15 22:42:34.583887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.639 [2024-07-15 22:42:34.583895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:10.639 [2024-07-15 22:42:34.594405] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d370b0) 00:26:10.639 [2024-07-15 22:42:34.594427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.639 [2024-07-15 22:42:34.594436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:10.639 [2024-07-15 22:42:34.605640] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d370b0) 00:26:10.639 [2024-07-15 22:42:34.605662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.639 [2024-07-15 22:42:34.605671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:10.898 [2024-07-15 22:42:34.615990] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d370b0) 00:26:10.898 
[2024-07-15 22:42:34.616013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.898 [2024-07-15 22:42:34.616022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:10.898 [2024-07-15 22:42:34.626871] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d370b0) 00:26:10.898 [2024-07-15 22:42:34.626894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.899 [2024-07-15 22:42:34.626903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:10.899 [2024-07-15 22:42:34.637254] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d370b0) 00:26:10.899 [2024-07-15 22:42:34.637280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.899 [2024-07-15 22:42:34.637288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:10.899 [2024-07-15 22:42:34.650294] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d370b0) 00:26:10.899 [2024-07-15 22:42:34.650316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.899 [2024-07-15 22:42:34.650325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:10.899 [2024-07-15 22:42:34.660386] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d370b0) 00:26:10.899 [2024-07-15 22:42:34.660408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.899 [2024-07-15 22:42:34.660417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:10.899 [2024-07-15 22:42:34.669611] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d370b0) 00:26:10.899 [2024-07-15 22:42:34.669632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.899 [2024-07-15 22:42:34.669641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:10.899 [2024-07-15 22:42:34.677708] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d370b0) 00:26:10.899 [2024-07-15 22:42:34.677729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.899 [2024-07-15 22:42:34.677737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:10.899 [2024-07-15 22:42:34.685800] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x1d370b0) 00:26:10.899 [2024-07-15 22:42:34.685822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.899 [2024-07-15 22:42:34.685830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:10.899 [2024-07-15 22:42:34.694623] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d370b0) 00:26:10.899 [2024-07-15 22:42:34.694646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.899 [2024-07-15 22:42:34.694654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:10.899 [2024-07-15 22:42:34.703210] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d370b0) 00:26:10.899 [2024-07-15 22:42:34.703239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.899 [2024-07-15 22:42:34.703248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:10.899 [2024-07-15 22:42:34.712345] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d370b0) 00:26:10.899 [2024-07-15 22:42:34.712367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.899 [2024-07-15 22:42:34.712376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:10.899 [2024-07-15 22:42:34.722928] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d370b0) 00:26:10.899 [2024-07-15 22:42:34.722949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.899 [2024-07-15 22:42:34.722957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:10.899 [2024-07-15 22:42:34.732030] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d370b0) 00:26:10.899 [2024-07-15 22:42:34.732051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.899 [2024-07-15 22:42:34.732059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:10.899 [2024-07-15 22:42:34.741250] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d370b0) 00:26:10.899 [2024-07-15 22:42:34.741272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.899 [2024-07-15 22:42:34.741280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:10.899 [2024-07-15 22:42:34.750196] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d370b0) 00:26:10.899 [2024-07-15 22:42:34.750218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.899 [2024-07-15 22:42:34.750232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:10.899 [2024-07-15 22:42:34.759553] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d370b0) 00:26:10.899 [2024-07-15 22:42:34.759574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.899 [2024-07-15 22:42:34.759583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:10.899 [2024-07-15 22:42:34.768293] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d370b0) 00:26:10.899 [2024-07-15 22:42:34.768315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.899 [2024-07-15 22:42:34.768324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:10.899 [2024-07-15 22:42:34.776775] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d370b0) 00:26:10.899 [2024-07-15 22:42:34.776796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.899 [2024-07-15 22:42:34.776804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:10.899 [2024-07-15 22:42:34.785657] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d370b0) 00:26:10.899 [2024-07-15 22:42:34.785679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.899 [2024-07-15 22:42:34.785687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:10.899 [2024-07-15 22:42:34.794410] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d370b0) 00:26:10.899 [2024-07-15 22:42:34.794432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.899 [2024-07-15 22:42:34.794443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:10.899 [2024-07-15 22:42:34.804045] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d370b0) 00:26:10.899 [2024-07-15 22:42:34.804067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.899 [2024-07-15 22:42:34.804076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 
dnr:0 00:26:10.899 [2024-07-15 22:42:34.814283] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d370b0) 00:26:10.899 [2024-07-15 22:42:34.814305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.899 [2024-07-15 22:42:34.814314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:10.899 [2024-07-15 22:42:34.824714] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d370b0) 00:26:10.899 [2024-07-15 22:42:34.824737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.899 [2024-07-15 22:42:34.824745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:10.899 [2024-07-15 22:42:34.835710] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d370b0) 00:26:10.899 [2024-07-15 22:42:34.835732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.899 [2024-07-15 22:42:34.835740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:10.899 [2024-07-15 22:42:34.846080] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d370b0) 00:26:10.899 [2024-07-15 22:42:34.846101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.899 [2024-07-15 22:42:34.846110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:10.899 [2024-07-15 22:42:34.856166] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d370b0) 00:26:10.899 [2024-07-15 22:42:34.856188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.899 [2024-07-15 22:42:34.856196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:10.899 [2024-07-15 22:42:34.865899] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d370b0) 00:26:10.899 [2024-07-15 22:42:34.865921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.899 [2024-07-15 22:42:34.865929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:11.158 [2024-07-15 22:42:34.876642] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d370b0) 00:26:11.158 [2024-07-15 22:42:34.876665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.158 [2024-07-15 22:42:34.876674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:11.158 [2024-07-15 22:42:34.886464] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d370b0)
00:26:11.158 [2024-07-15 22:42:34.886489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.158 [2024-07-15 22:42:34.886497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:11.158
00:26:11.158 Latency(us)
00:26:11.158 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:26:11.158 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:26:11.158 nvme0n1 : 2.00 3757.70 469.71 0.00 0.00 4254.90 758.65 15614.66
00:26:11.158 ===================================================================================================================
00:26:11.158 Total : 3757.70 469.71 0.00 0.00 4254.90 758.65 15614.66
00:26:11.158 0
00:26:11.158 22:42:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:26:11.158 22:42:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:26:11.158 22:42:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:26:11.158 | .driver_specific
00:26:11.158 | .nvme_error
00:26:11.158 | .status_code
00:26:11.158 | .command_transient_transport_error'
00:26:11.158 22:42:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:26:11.158 22:42:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 242 > 0 ))
00:26:11.158 22:42:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 148533
00:26:11.158 22:42:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 148533 ']'
00:26:11.158 22:42:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 148533
00:26:11.158 22:42:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname
00:26:11.158 22:42:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:26:11.158 22:42:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 148533
00:26:11.416 22:42:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:26:11.416 22:42:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:26:11.416 22:42:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 148533'
killing process with pid 148533
22:42:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 148533
Received shutdown signal, test time was about 2.000000 seconds
00:26:11.416
00:26:11.416 Latency(us)
00:26:11.416 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:26:11.416 ===================================================================================================================
00:26:11.416 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:26:11.416 22:42:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 148533
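The block above is the pass/fail gate for the randread round: host/digest.sh's get_transient_errcount fetches the bdevperf instance's iostat over the /var/tmp/bperf.sock RPC socket and pulls out the per-status-code NVMe error counter that bdev_nvme_set_options --nvme-error-stat maintains. Here it returned 242, so (( 242 > 0 )) succeeded and the bperf process was killed. A minimal sketch of the same check, assuming a bdevperf instance is listening on /var/tmp/bperf.sock (the errcount helper name is illustrative, not from the harness):

    # Illustrative stand-in for host/digest.sh's get_transient_errcount.
    errcount() {
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
            -s /var/tmp/bperf.sock bdev_get_iostat -b "$1" |
        jq -r '.bdevs[0]
            | .driver_specific
            | .nvme_error
            | .status_code
            | .command_transient_transport_error'
    }
    (( $(errcount nvme0n1) > 0 ))   # this run counted 242 transient transport errors

The counter only exists because error statistics were enabled on the controller; without --nvme-error-stat the jq path would come back null.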
00:26:11.416 22:42:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128
00:26:11.416 22:42:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:26:11.416 22:42:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:26:11.416 22:42:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096
00:26:11.416 22:42:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128
00:26:11.416 22:42:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=149212
00:26:11.416 22:42:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 149212 /var/tmp/bperf.sock
00:26:11.416 22:42:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z
00:26:11.416 22:42:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 149212 ']'
00:26:11.416 22:42:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock
00:26:11.416 22:42:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100
00:26:11.416 22:42:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:26:11.416 22:42:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable
00:26:11.416 22:42:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:26:11.416 [2024-07-15 22:42:35.361752] Starting SPDK v24.09-pre git sha1 f8598a71f / DPDK 24.03.0 initialization...
00:26:11.416 [2024-07-15 22:42:35.361799] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid149212 ]
00:26:11.675 EAL: No free 2048 kB hugepages reported on node 1
00:26:12.244 22:42:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:26:12.244 22:42:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0
00:26:12.244 22:42:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:26:12.244 22:42:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:26:12.503 22:42:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:26:12.503 22:42:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:12.503 22:42:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:26:12.503 22:42:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:12.503 22:42:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:26:12.503 22:42:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:26:12.761 nvme0n1
00:26:12.761 22:42:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
00:26:12.761 22:42:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:12.761 22:42:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:26:12.761 22:42:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:12.761 22:42:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:26:12.761 22:42:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:26:12.761 Running I/O for 2 seconds...
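The trace above is the setup for the randwrite round that is now running: a fresh bdevperf (-w randwrite -o 4096 -q 128) is started on /var/tmp/bperf.sock, NVMe error statistics and unlimited bdev retries are enabled, crc32c error injection in the accel layer is first disarmed and then re-armed in corrupt mode, and the controller is attached with --ddgst so data digests are generated and verified on each TCP PDU. Note the two RPC sockets involved: the bdev_nvme_* calls go through bperf_rpc to the bdevperf instance, while accel_error_inject_error goes through rpc_cmd to the application's default RPC socket (assumed here to be /var/tmp/spdk.sock). A condensed sketch of the sequence, with paths relative to the spdk checkout and the -i 256 argument copied verbatim from the trace:

    # Condensed from the xtrace above; a sketch, not the harness itself.
    BPERF_SOCK=/var/tmp/bperf.sock
    ./build/examples/bdevperf -m 2 -r "$BPERF_SOCK" -w randwrite -o 4096 -t 2 -q 128 -z &
    # (the harness blocks on waitforlisten until the RPC socket exists)
    # bdevperf side: keep per-status-code NVMe error counters, retry failed I/O forever
    ./scripts/rpc.py -s "$BPERF_SOCK" bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    # clear any previously armed crc32c injection
    ./scripts/rpc.py accel_error_inject_error -o crc32c -t disable
    # attach with data digest enabled so a corrupted CRC32C is actually detected
    ./scripts/rpc.py -s "$BPERF_SOCK" bdev_nvme_attach_controller --ddgst \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    # re-arm corruption of crc32c results, then kick off the timed run
    ./scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 256
    ./examples/bdev/bdevperf/bdevperf.py -s "$BPERF_SOCK" perform_tests

Each corrupted crc32c result then surfaces as the data_crc32_calc_done / COMMAND TRANSIENT TRANSPORT ERROR pairs that follow, and the same bdev_get_iostat check shown earlier asserts that the transient error count is non-zero before the next round.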
00:26:12.761 [2024-07-15 22:42:36.708413] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d754d0) with pdu=0x2000190fc560 00:26:12.761 [2024-07-15 22:42:36.708647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:2989 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:12.761 [2024-07-15 22:42:36.708675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:12.761 [2024-07-15 22:42:36.717982] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d754d0) with pdu=0x2000190fc560 00:26:12.761 [2024-07-15 22:42:36.718199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:22205 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:12.761 [2024-07-15 22:42:36.718220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:12.761 [2024-07-15 22:42:36.727607] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d754d0) with pdu=0x2000190fc560 00:26:12.761 [2024-07-15 22:42:36.727831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:3312 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:12.762 [2024-07-15 22:42:36.727849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:13.020 [2024-07-15 22:42:36.737345] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d754d0) with pdu=0x2000190fc560 00:26:13.020 [2024-07-15 22:42:36.737563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:11118 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:13.020 [2024-07-15 22:42:36.737581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:13.020 [2024-07-15 22:42:36.747043] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d754d0) with pdu=0x2000190fc560 00:26:13.020 [2024-07-15 22:42:36.747255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:18134 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:13.020 [2024-07-15 22:42:36.747273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:13.020 [2024-07-15 22:42:36.756556] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d754d0) with pdu=0x2000190fc560 00:26:13.020 [2024-07-15 22:42:36.756766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:4937 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:13.020 [2024-07-15 22:42:36.756784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:13.020 [2024-07-15 22:42:36.766023] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d754d0) with pdu=0x2000190fc560 00:26:13.020 [2024-07-15 22:42:36.766245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:7854 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:13.020 [2024-07-15 22:42:36.766264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:26:13.020 [2024-07-15 22:42:36.775568] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d754d0) with pdu=0x2000190fc560 00:26:13.020 [2024-07-15 22:42:36.775781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:23067 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:13.020 [2024-07-15 22:42:36.775800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:13.020 [2024-07-15 22:42:36.785071] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d754d0) with pdu=0x2000190fc560 00:26:13.020 [2024-07-15 22:42:36.785285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:21044 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:13.020 [2024-07-15 22:42:36.785302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:13.020 [2024-07-15 22:42:36.794565] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d754d0) with pdu=0x2000190fc560 00:26:13.020 [2024-07-15 22:42:36.794784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:24926 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:13.020 [2024-07-15 22:42:36.794802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:13.020 [2024-07-15 22:42:36.804072] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d754d0) with pdu=0x2000190fc560 00:26:13.020 [2024-07-15 22:42:36.804295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:3525 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:13.020 [2024-07-15 22:42:36.804313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:13.020 [2024-07-15 22:42:36.813552] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d754d0) with pdu=0x2000190fc560 00:26:13.020 [2024-07-15 22:42:36.813763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:20448 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:13.020 [2024-07-15 22:42:36.813781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:13.020 [2024-07-15 22:42:36.823033] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d754d0) with pdu=0x2000190fc560 00:26:13.020 [2024-07-15 22:42:36.823247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:9613 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:13.020 [2024-07-15 22:42:36.823265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:13.020 [2024-07-15 22:42:36.832536] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d754d0) with pdu=0x2000190fc560 00:26:13.020 [2024-07-15 22:42:36.832749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:11381 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:13.020 [2024-07-15 22:42:36.832766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0
00:26:13.020 [2024-07-15 22:42:36.842006] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d754d0) with pdu=0x2000190fc560
00:26:13.020 [2024-07-15 22:42:36.842217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:3511 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:13.020 [2024-07-15 22:42:36.842238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
[... the same three-record sequence — a data_crc32_calc_done data digest error on tqpair=(0x1d754d0) with pdu=0x2000190fc560, the failing WRITE command (sqid:1 cid:9 nsid:1, len:1, varying lba), and its COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion — repeats roughly every 9-10 ms from 22:42:36.851 through 22:42:38.221 (elapsed-time prefixes 00:26:13.020 through 00:26:14.318); the repeated records are elided here ...]
00:26:14.319 [2024-07-15 22:42:38.230361] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d754d0) with pdu=0x2000190fc560
00:26:14.319 [2024-07-15 22:42:38.230574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:17589
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.319 [2024-07-15 22:42:38.230591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.319 [2024-07-15 22:42:38.239832] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d754d0) with pdu=0x2000190fc560 00:26:14.319 [2024-07-15 22:42:38.240042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:3544 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.319 [2024-07-15 22:42:38.240060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.319 [2024-07-15 22:42:38.249288] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d754d0) with pdu=0x2000190fc560 00:26:14.319 [2024-07-15 22:42:38.249498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:15953 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.319 [2024-07-15 22:42:38.249516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.319 [2024-07-15 22:42:38.258816] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d754d0) with pdu=0x2000190fc560 00:26:14.319 [2024-07-15 22:42:38.259030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:2538 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.319 [2024-07-15 22:42:38.259048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.319 [2024-07-15 22:42:38.268420] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d754d0) with pdu=0x2000190fc560 00:26:14.319 [2024-07-15 22:42:38.268630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:23630 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.319 [2024-07-15 22:42:38.268647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.319 [2024-07-15 22:42:38.277907] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d754d0) with pdu=0x2000190fc560 00:26:14.319 [2024-07-15 22:42:38.278118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:19611 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.319 [2024-07-15 22:42:38.278136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.578 [2024-07-15 22:42:38.287763] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d754d0) with pdu=0x2000190fc560 00:26:14.578 [2024-07-15 22:42:38.287992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:9003 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.578 [2024-07-15 22:42:38.288009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.578 [2024-07-15 22:42:38.297410] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d754d0) with pdu=0x2000190fc560 00:26:14.578 [2024-07-15 22:42:38.297622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 
lba:2670 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.578 [2024-07-15 22:42:38.297639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.578 [2024-07-15 22:42:38.306942] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d754d0) with pdu=0x2000190fc560 00:26:14.578 [2024-07-15 22:42:38.307155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:1749 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.578 [2024-07-15 22:42:38.307174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.578 [2024-07-15 22:42:38.316484] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d754d0) with pdu=0x2000190fc560 00:26:14.578 [2024-07-15 22:42:38.316697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:23687 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.578 [2024-07-15 22:42:38.316715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.578 [2024-07-15 22:42:38.325950] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d754d0) with pdu=0x2000190fc560 00:26:14.578 [2024-07-15 22:42:38.326162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:20698 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.578 [2024-07-15 22:42:38.326179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.578 [2024-07-15 22:42:38.335454] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d754d0) with pdu=0x2000190fc560 00:26:14.578 [2024-07-15 22:42:38.335665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:19558 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.578 [2024-07-15 22:42:38.335683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.578 [2024-07-15 22:42:38.344927] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d754d0) with pdu=0x2000190fc560 00:26:14.578 [2024-07-15 22:42:38.345139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:7573 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.578 [2024-07-15 22:42:38.345157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.578 [2024-07-15 22:42:38.354358] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d754d0) with pdu=0x2000190fc560 00:26:14.578 [2024-07-15 22:42:38.354581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:14992 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.578 [2024-07-15 22:42:38.354598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.578 [2024-07-15 22:42:38.363853] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d754d0) with pdu=0x2000190fc560 00:26:14.578 [2024-07-15 22:42:38.364062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 
nsid:1 lba:11361 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.578 [2024-07-15 22:42:38.364080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.578 [2024-07-15 22:42:38.373307] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d754d0) with pdu=0x2000190fc560 00:26:14.578 [2024-07-15 22:42:38.373519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:10303 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.578 [2024-07-15 22:42:38.373536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.578 [2024-07-15 22:42:38.382781] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d754d0) with pdu=0x2000190fc560 00:26:14.578 [2024-07-15 22:42:38.382993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:6855 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.578 [2024-07-15 22:42:38.383010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.578 [2024-07-15 22:42:38.392262] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d754d0) with pdu=0x2000190fc560 00:26:14.578 [2024-07-15 22:42:38.392475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:22554 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.578 [2024-07-15 22:42:38.392492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.578 [2024-07-15 22:42:38.401730] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d754d0) with pdu=0x2000190fc560 00:26:14.578 [2024-07-15 22:42:38.401942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:8881 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.578 [2024-07-15 22:42:38.401959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.578 [2024-07-15 22:42:38.411196] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d754d0) with pdu=0x2000190fc560 00:26:14.578 [2024-07-15 22:42:38.411414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:1495 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.578 [2024-07-15 22:42:38.411431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.578 [2024-07-15 22:42:38.420658] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d754d0) with pdu=0x2000190fc560 00:26:14.578 [2024-07-15 22:42:38.420871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:917 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.578 [2024-07-15 22:42:38.420888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.578 [2024-07-15 22:42:38.430128] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d754d0) with pdu=0x2000190fc560 00:26:14.578 [2024-07-15 22:42:38.430347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:9 nsid:1 lba:10491 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.578 [2024-07-15 22:42:38.430365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.578 [2024-07-15 22:42:38.439599] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d754d0) with pdu=0x2000190fc560 00:26:14.578 [2024-07-15 22:42:38.439809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:9899 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.579 [2024-07-15 22:42:38.439827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.579 [2024-07-15 22:42:38.449055] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d754d0) with pdu=0x2000190fc560 00:26:14.579 [2024-07-15 22:42:38.449265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:17020 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.579 [2024-07-15 22:42:38.449286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.579 [2024-07-15 22:42:38.458539] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d754d0) with pdu=0x2000190fc560 00:26:14.579 [2024-07-15 22:42:38.458748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:19992 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.579 [2024-07-15 22:42:38.458766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.579 [2024-07-15 22:42:38.467995] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d754d0) with pdu=0x2000190fc560 00:26:14.579 [2024-07-15 22:42:38.468208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:15737 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.579 [2024-07-15 22:42:38.468230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.579 [2024-07-15 22:42:38.477457] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d754d0) with pdu=0x2000190fc560 00:26:14.579 [2024-07-15 22:42:38.477669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:24353 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.579 [2024-07-15 22:42:38.477686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.579 [2024-07-15 22:42:38.486854] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d754d0) with pdu=0x2000190fc560 00:26:14.579 [2024-07-15 22:42:38.487066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:23273 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.579 [2024-07-15 22:42:38.487083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.579 [2024-07-15 22:42:38.496343] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d754d0) with pdu=0x2000190fc560 00:26:14.579 [2024-07-15 22:42:38.496555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:9 nsid:1 lba:12121 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.579 [2024-07-15 22:42:38.496572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.579 [2024-07-15 22:42:38.505798] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d754d0) with pdu=0x2000190fc560 00:26:14.579 [2024-07-15 22:42:38.506008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:15434 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.579 [2024-07-15 22:42:38.506025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.579 [2024-07-15 22:42:38.515270] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d754d0) with pdu=0x2000190fc560 00:26:14.579 [2024-07-15 22:42:38.515486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:17620 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.579 [2024-07-15 22:42:38.515503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.579 [2024-07-15 22:42:38.524738] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d754d0) with pdu=0x2000190fc560 00:26:14.579 [2024-07-15 22:42:38.524948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:24911 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.579 [2024-07-15 22:42:38.524966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.579 [2024-07-15 22:42:38.534235] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d754d0) with pdu=0x2000190fc560 00:26:14.579 [2024-07-15 22:42:38.534451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:17747 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.579 [2024-07-15 22:42:38.534469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.579 [2024-07-15 22:42:38.544023] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d754d0) with pdu=0x2000190fc560 00:26:14.579 [2024-07-15 22:42:38.544242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:12435 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.579 [2024-07-15 22:42:38.544259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.838 [2024-07-15 22:42:38.553850] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d754d0) with pdu=0x2000190fc560 00:26:14.838 [2024-07-15 22:42:38.554066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:23671 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.838 [2024-07-15 22:42:38.554084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.838 [2024-07-15 22:42:38.563491] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d754d0) with pdu=0x2000190fc560 00:26:14.838 [2024-07-15 22:42:38.563698] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:17225 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.838 [2024-07-15 22:42:38.563716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.838 [2024-07-15 22:42:38.572974] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d754d0) with pdu=0x2000190fc560 00:26:14.838 [2024-07-15 22:42:38.573187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:14573 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.838 [2024-07-15 22:42:38.573205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.838 [2024-07-15 22:42:38.582780] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d754d0) with pdu=0x2000190fc560 00:26:14.838 [2024-07-15 22:42:38.582988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:11241 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.838 [2024-07-15 22:42:38.583006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.838 [2024-07-15 22:42:38.592290] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d754d0) with pdu=0x2000190fc560 00:26:14.838 [2024-07-15 22:42:38.592502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:20649 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.838 [2024-07-15 22:42:38.592522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.838 [2024-07-15 22:42:38.601762] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d754d0) with pdu=0x2000190fc560 00:26:14.838 [2024-07-15 22:42:38.601974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:9189 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.838 [2024-07-15 22:42:38.601992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.838 [2024-07-15 22:42:38.611236] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d754d0) with pdu=0x2000190fc560 00:26:14.838 [2024-07-15 22:42:38.611448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:2052 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.838 [2024-07-15 22:42:38.611465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.838 [2024-07-15 22:42:38.620712] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d754d0) with pdu=0x2000190fc560 00:26:14.838 [2024-07-15 22:42:38.620925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:11347 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.838 [2024-07-15 22:42:38.620943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.838 [2024-07-15 22:42:38.630214] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d754d0) with pdu=0x2000190fc560 00:26:14.838 [2024-07-15 22:42:38.630434] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:14333 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.838 [2024-07-15 22:42:38.630451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.838 [2024-07-15 22:42:38.639732] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d754d0) with pdu=0x2000190fc560 00:26:14.838 [2024-07-15 22:42:38.639964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:2465 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.838 [2024-07-15 22:42:38.639982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.838 [2024-07-15 22:42:38.649307] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d754d0) with pdu=0x2000190fc560 00:26:14.838 [2024-07-15 22:42:38.649518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:5983 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.838 [2024-07-15 22:42:38.649535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.838 [2024-07-15 22:42:38.658801] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d754d0) with pdu=0x2000190fc560 00:26:14.838 [2024-07-15 22:42:38.659011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:6725 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.838 [2024-07-15 22:42:38.659029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.838 [2024-07-15 22:42:38.668295] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d754d0) with pdu=0x2000190fc560 00:26:14.838 [2024-07-15 22:42:38.668510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:9973 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.838 [2024-07-15 22:42:38.668529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.838 [2024-07-15 22:42:38.677778] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d754d0) with pdu=0x2000190fc560 00:26:14.838 [2024-07-15 22:42:38.677988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:15466 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.838 [2024-07-15 22:42:38.678005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.838 [2024-07-15 22:42:38.687274] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d754d0) with pdu=0x2000190fc560 00:26:14.838 [2024-07-15 22:42:38.687514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:2232 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.838 [2024-07-15 22:42:38.687532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.838 [2024-07-15 22:42:38.696768] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d754d0) with pdu=0x2000190fc560 00:26:14.838 [2024-07-15 22:42:38.696979] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:19109 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:14.838 [2024-07-15 22:42:38.696997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:14.838
00:26:14.838                                                                 Latency(us)
00:26:14.838 Device Information                                                       : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:26:14.838 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:26:14.838 nvme0n1                                                                  :       2.00   26794.03     104.66       0.00       0.00    4768.83    4103.12   10599.74
00:26:14.838 ===================================================================================================================
00:26:14.838 Total                                                                    :   26794.03     104.66       0.00       0.00    4768.83    4103.12   10599.74
00:26:14.838 0
00:26:14.838 22:42:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:26:14.838 22:42:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:26:14.838 22:42:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:26:14.838 | .driver_specific
00:26:14.838 | .nvme_error
00:26:14.838 | .status_code
00:26:14.838 | .command_transient_transport_error'
00:26:14.838 22:42:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:26:15.097 22:42:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 210 > 0 ))
00:26:15.097 22:42:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 149212
00:26:15.097 22:42:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 149212 ']'
00:26:15.097 22:42:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 149212
00:26:15.097 22:42:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname
00:26:15.097 22:42:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:26:15.097 22:42:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 149212
00:26:15.097 22:42:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:26:15.097 22:42:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:26:15.097 22:42:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 149212'
00:26:15.097 killing process with pid 149212
00:26:15.097 22:42:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 149212
00:26:15.097 Received shutdown signal, test time was about 2.000000 seconds
00:26:15.097
00:26:15.097                                                                 Latency(us)
00:26:15.097 Device Information                                                       : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:26:15.097 ===================================================================================================================
00:26:15.097 Total                                                                    :       0.00       0.00       0.00       0.00       0.00       0.00       0.00
00:26:15.097 22:42:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 149212
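The check traced above is host/digest.sh's pass criterion: it pulls per-bdev error statistics over the bperf RPC socket and extracts the transient-transport-error counter that bdev_nvme maintains, which is what the (00/22) completions in this run accumulate into. A minimal sketch of the same check, using the rpc.py path, socket, and bdev name exactly as they appear in this log:

    count=$(/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
        bdev_get_iostat -b nvme0n1 |
        jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
    (( count > 0 ))   # in this run the counter read 210, so the assertion held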
00:26:15.356 22:42:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16
00:26:15.356 22:42:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:26:15.356 22:42:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:26:15.356 22:42:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:26:15.356 22:42:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:26:15.356 22:42:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=149902
00:26:15.356 22:42:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 149902 /var/tmp/bperf.sock
00:26:15.356 22:42:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z
00:26:15.356 22:42:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 149902 ']'
00:26:15.356 22:42:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock
00:26:15.356 22:42:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100
00:26:15.356 22:42:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:26:15.356 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:26:15.356 22:42:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable
00:26:15.356 22:42:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:26:15.356 [2024-07-15 22:42:39.160816] Starting SPDK v24.09-pre git sha1 f8598a71f / DPDK 24.03.0 initialization...
00:26:15.356 [2024-07-15 22:42:39.160864] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid149902 ]
00:26:15.356 I/O size of 131072 is greater than zero copy threshold (65536).
00:26:15.356 Zero copy mechanism will not be used.
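Here bdevperf is started idle against a dedicated RPC socket and will only begin submitting I/O once a perform_tests RPC arrives. A sketch of the traced invocation with the flags spelled out (flag meanings per SPDK's bdevperf usage text, not from this log):

    args=(
        -m 2                    # core mask 0x2: run the reactor on core 1
        -r /var/tmp/bperf.sock  # serve RPCs on this socket (what bperf_rpc talks to)
        -w randwrite            # workload: random writes
        -o 131072               # 128 KiB I/Os (hence the zero-copy notice above)
        -q 16                   # queue depth
        -t 2                    # run time in seconds
        -z                      # start idle; wait for the perform_tests RPC
    )
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf "${args[@]}"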
00:26:15.356 EAL: No free 2048 kB hugepages reported on node 1
00:26:15.356 [2024-07-15 22:42:39.215263] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:26:15.356 [2024-07-15 22:42:39.294369] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:26:16.292 22:42:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:26:16.292 22:42:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0
00:26:16.292 22:42:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:26:16.292 22:42:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:26:16.292 22:42:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:26:16.292 22:42:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:16.292 22:42:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:26:16.292 22:42:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:16.292 22:42:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:26:16.292 22:42:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:26:16.550 nvme0n1
00:26:16.550 22:42:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:26:16.550 22:42:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:16.550 22:42:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:26:16.550 22:42:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:16.550 22:42:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:26:16.550 22:42:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:26:16.809 I/O size of 131072 is greater than zero copy threshold (65536).
00:26:16.809 Zero copy mechanism will not be used.
00:26:16.809 Running I/O for 2 seconds...
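This setup is what produces the digest errors that follow: attaching with --ddgst enables the NVMe/TCP data digest on the I/O qpair, and accel_error_inject_error -o crc32c -t corrupt asks the accel layer to return corrupted crc32c results (the -i 32 argument presumably controls how many operations are affected), so digest verification fails and each WRITE completes with COMMAND TRANSIENT TRANSPORT ERROR (00/22); --bdev-retry-count -1 keeps bdev_nvme retrying while --nvme-error-stat records every occurrence. The traced RPC sequence, collected into one sketch (commands as they appear above; rpc_cmd's destination socket is not shown in this excerpt, so the default-socket call below is an assumption):

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $RPC -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    $RPC -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0    # data digest enabled on attach
    $RPC accel_error_inject_error -o crc32c -t corrupt -i 32   # assumption: default app socket, as rpc_cmd would use
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
        -s /var/tmp/bperf.sock perform_tests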
00:26:16.809 [2024-07-15 22:42:40.596613] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d75810) with pdu=0x2000190fef90 00:26:16.809 [2024-07-15 22:42:40.596999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.809 [2024-07-15 22:42:40.597032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:16.809 [2024-07-15 22:42:40.604937] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d75810) with pdu=0x2000190fef90 00:26:16.809 [2024-07-15 22:42:40.605063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.809 [2024-07-15 22:42:40.605085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:16.809 [2024-07-15 22:42:40.612585] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d75810) with pdu=0x2000190fef90 00:26:16.809 [2024-07-15 22:42:40.612986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.809 [2024-07-15 22:42:40.613006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:16.809 [2024-07-15 22:42:40.618924] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d75810) with pdu=0x2000190fef90 00:26:16.809 [2024-07-15 22:42:40.619285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.809 [2024-07-15 22:42:40.619304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.809 [2024-07-15 22:42:40.625143] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d75810) with pdu=0x2000190fef90 00:26:16.809 [2024-07-15 22:42:40.625494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.809 [2024-07-15 22:42:40.625513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:16.809 [2024-07-15 22:42:40.630981] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d75810) with pdu=0x2000190fef90 00:26:16.809 [2024-07-15 22:42:40.631342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.809 [2024-07-15 22:42:40.631362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:16.809 [2024-07-15 22:42:40.637073] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d75810) with pdu=0x2000190fef90 00:26:16.809 [2024-07-15 22:42:40.637449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.810 [2024-07-15 22:42:40.637468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:16.810 [2024-07-15 22:42:40.644715] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d75810) with pdu=0x2000190fef90 00:26:16.810 [2024-07-15 22:42:40.645079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.810 [2024-07-15 22:42:40.645097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.810 [2024-07-15 22:42:40.650652] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d75810) with pdu=0x2000190fef90 00:26:16.810 [2024-07-15 22:42:40.650990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.810 [2024-07-15 22:42:40.651009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:16.810 [2024-07-15 22:42:40.656751] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d75810) with pdu=0x2000190fef90 00:26:16.810 [2024-07-15 22:42:40.657139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.810 [2024-07-15 22:42:40.657157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:16.810 [2024-07-15 22:42:40.664019] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d75810) with pdu=0x2000190fef90 00:26:16.810 [2024-07-15 22:42:40.664496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.810 [2024-07-15 22:42:40.664515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:16.810 [2024-07-15 22:42:40.671913] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d75810) with pdu=0x2000190fef90 00:26:16.810 [2024-07-15 22:42:40.672289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.810 [2024-07-15 22:42:40.672308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.810 [2024-07-15 22:42:40.677978] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d75810) with pdu=0x2000190fef90 00:26:16.810 [2024-07-15 22:42:40.678389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.810 [2024-07-15 22:42:40.678407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:16.810 [2024-07-15 22:42:40.683936] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d75810) with pdu=0x2000190fef90 00:26:16.810 [2024-07-15 22:42:40.684356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.810 [2024-07-15 22:42:40.684375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:16.810 [2024-07-15 22:42:40.690503] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d75810) with pdu=0x2000190fef90 00:26:16.810 [2024-07-15 22:42:40.690850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.810 [2024-07-15 22:42:40.690868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:16.810 [2024-07-15 22:42:40.697502] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d75810) with pdu=0x2000190fef90 00:26:16.810 [2024-07-15 22:42:40.697905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.810 [2024-07-15 22:42:40.697923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.810 [2024-07-15 22:42:40.704303] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d75810) with pdu=0x2000190fef90 00:26:16.810 [2024-07-15 22:42:40.704642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.810 [2024-07-15 22:42:40.704661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:16.810 [2024-07-15 22:42:40.710425] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d75810) with pdu=0x2000190fef90 00:26:16.810 [2024-07-15 22:42:40.710769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.810 [2024-07-15 22:42:40.710787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:16.810 [2024-07-15 22:42:40.717808] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d75810) with pdu=0x2000190fef90 00:26:16.810 [2024-07-15 22:42:40.718260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.810 [2024-07-15 22:42:40.718278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:16.810 [2024-07-15 22:42:40.730387] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d75810) with pdu=0x2000190fef90 00:26:16.810 [2024-07-15 22:42:40.730776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.810 [2024-07-15 22:42:40.730794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.810 [2024-07-15 22:42:40.739071] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d75810) with pdu=0x2000190fef90 00:26:16.810 [2024-07-15 22:42:40.739463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.810 [2024-07-15 22:42:40.739482] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:16.810 [2024-07-15 22:42:40.746267] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d75810) with pdu=0x2000190fef90 00:26:16.810 [2024-07-15 22:42:40.746665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.810 [2024-07-15 22:42:40.746684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:16.810 [2024-07-15 22:42:40.752962] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d75810) with pdu=0x2000190fef90 00:26:16.810 [2024-07-15 22:42:40.753361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.810 [2024-07-15 22:42:40.753380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:16.810 [2024-07-15 22:42:40.759513] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d75810) with pdu=0x2000190fef90 00:26:16.810 [2024-07-15 22:42:40.759885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.810 [2024-07-15 22:42:40.759904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.810 [2024-07-15 22:42:40.766221] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d75810) with pdu=0x2000190fef90 00:26:16.810 [2024-07-15 22:42:40.766604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.810 [2024-07-15 22:42:40.766622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:16.810 [2024-07-15 22:42:40.772718] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d75810) with pdu=0x2000190fef90 00:26:16.810 [2024-07-15 22:42:40.773103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.810 [2024-07-15 22:42:40.773122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:17.070 [2024-07-15 22:42:40.779454] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d75810) with pdu=0x2000190fef90 00:26:17.070 [2024-07-15 22:42:40.779862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.070 [2024-07-15 22:42:40.779885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:17.070 [2024-07-15 22:42:40.786332] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d75810) with pdu=0x2000190fef90 00:26:17.070 [2024-07-15 22:42:40.786699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.070 
[2024-07-15 22:42:40.786718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:17.070 [2024-07-15 22:42:40.792488] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d75810) with pdu=0x2000190fef90
00:26:17.070 [2024-07-15 22:42:40.792835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:17.070 [2024-07-15 22:42:40.792853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:17.070 [2024-07-15 22:42:40.799035] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d75810) with pdu=0x2000190fef90
00:26:17.070 [2024-07-15 22:42:40.799498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:17.070 [2024-07-15 22:42:40.799517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
[... the same ERROR/WRITE/completion triple repeats once per injected digest error, with only the lba and sqhd values varying, from 22:42:40.807183 through 22:42:41.587334; tqpair=(0x1d75810) and pdu=0x2000190fef90 are unchanged throughout ...]
00:26:17.666 [2024-07-15 22:42:41.595092] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d75810) with pdu=0x2000190fef90
00:26:17.666 [2024-07-15 22:42:41.595478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:17.666 [2024-07-15 22:42:41.595497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:17.666 [2024-07-15 22:42:41.603747] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d75810) with pdu=0x2000190fef90 00:26:17.666 [2024-07-15 22:42:41.604159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.666 [2024-07-15 22:42:41.604178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:17.666 [2024-07-15 22:42:41.612170] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d75810) with pdu=0x2000190fef90 00:26:17.666 [2024-07-15 22:42:41.612580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.666 [2024-07-15 22:42:41.612600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:17.666 [2024-07-15 22:42:41.620602] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d75810) with pdu=0x2000190fef90 00:26:17.666 [2024-07-15 22:42:41.621056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.666 [2024-07-15 22:42:41.621075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:17.666 [2024-07-15 22:42:41.629250] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d75810) with pdu=0x2000190fef90 00:26:17.666 [2024-07-15 22:42:41.629726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.666 [2024-07-15 22:42:41.629744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:17.925 [2024-07-15 22:42:41.637450] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d75810) with pdu=0x2000190fef90 00:26:17.925 [2024-07-15 22:42:41.637913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.925 [2024-07-15 22:42:41.637931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:17.925 [2024-07-15 22:42:41.646377] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d75810) with pdu=0x2000190fef90 00:26:17.926 [2024-07-15 22:42:41.646873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.926 [2024-07-15 22:42:41.646895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:17.926 [2024-07-15 22:42:41.655508] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d75810) with pdu=0x2000190fef90 00:26:17.926 [2024-07-15 22:42:41.655953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.926 [2024-07-15 22:42:41.655971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:17.926 [2024-07-15 22:42:41.664305] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d75810) with pdu=0x2000190fef90 00:26:17.926 [2024-07-15 22:42:41.664758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.926 [2024-07-15 22:42:41.664777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:17.926 [2024-07-15 22:42:41.672763] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d75810) with pdu=0x2000190fef90 00:26:17.926 [2024-07-15 22:42:41.673132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.926 [2024-07-15 22:42:41.673150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:17.926 [2024-07-15 22:42:41.680073] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d75810) with pdu=0x2000190fef90 00:26:17.926 [2024-07-15 22:42:41.680430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.926 [2024-07-15 22:42:41.680449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:17.926 [2024-07-15 22:42:41.687875] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d75810) with pdu=0x2000190fef90 00:26:17.926 [2024-07-15 22:42:41.688234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.926 [2024-07-15 22:42:41.688253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:17.926 [2024-07-15 22:42:41.694095] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d75810) with pdu=0x2000190fef90 00:26:17.926 [2024-07-15 22:42:41.694400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.926 [2024-07-15 22:42:41.694419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:17.926 [2024-07-15 22:42:41.698737] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d75810) with pdu=0x2000190fef90 00:26:17.926 [2024-07-15 22:42:41.699042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.926 [2024-07-15 22:42:41.699061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:17.926 [2024-07-15 22:42:41.703199] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d75810) with pdu=0x2000190fef90 00:26:17.926 [2024-07-15 22:42:41.703518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.926 [2024-07-15 22:42:41.703537] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:17.926 [2024-07-15 22:42:41.708034] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d75810) with pdu=0x2000190fef90 00:26:17.926 [2024-07-15 22:42:41.708353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.926 [2024-07-15 22:42:41.708372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:17.926 [2024-07-15 22:42:41.712419] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d75810) with pdu=0x2000190fef90 00:26:17.926 [2024-07-15 22:42:41.712723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.926 [2024-07-15 22:42:41.712741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:17.926 [2024-07-15 22:42:41.716582] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d75810) with pdu=0x2000190fef90 00:26:17.926 [2024-07-15 22:42:41.716863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.926 [2024-07-15 22:42:41.716881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:17.926 [2024-07-15 22:42:41.720489] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d75810) with pdu=0x2000190fef90 00:26:17.926 [2024-07-15 22:42:41.720766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.926 [2024-07-15 22:42:41.720785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:17.926 [2024-07-15 22:42:41.725130] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d75810) with pdu=0x2000190fef90 00:26:17.926 [2024-07-15 22:42:41.725407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.926 [2024-07-15 22:42:41.725425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:17.926 [2024-07-15 22:42:41.729073] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d75810) with pdu=0x2000190fef90 00:26:17.926 [2024-07-15 22:42:41.729322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.926 [2024-07-15 22:42:41.729339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:17.926 [2024-07-15 22:42:41.732786] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d75810) with pdu=0x2000190fef90 00:26:17.926 [2024-07-15 22:42:41.733040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:17.926 
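Context for the loop above: on NVMe/TCP, each data-carrying PDU carries a data digest (DDGST), a CRC32C over the PDU payload. When the digest computed on receive disagrees with the one carried in the PDU, SPDK's data_crc32_calc_done logs the "Data digest error" seen here and the command is failed back to the host. Below is a minimal, self-contained sketch of such a digest check, assuming standard CRC32C (Castagnoli) semantics; the helper names (crc32c, check_data_digest) are hypothetical illustrations, not SPDK APIs.

/* Illustrative sketch only -- not SPDK source. */
#include <stdint.h>
#include <stddef.h>
#include <stdio.h>

/* Bitwise CRC32C (Castagnoli), reflected polynomial 0x82F63B78,
 * init and final XOR 0xFFFFFFFF -- the digest NVMe/TCP uses for DDGST. */
static uint32_t crc32c(const uint8_t *buf, size_t len)
{
    uint32_t crc = 0xFFFFFFFFu;
    for (size_t i = 0; i < len; i++) {
        crc ^= buf[i];
        for (int b = 0; b < 8; b++)
            crc = (crc >> 1) ^ (0x82F63B78u & (0u - (crc & 1u)));
    }
    return crc ^ 0xFFFFFFFFu;
}

/* Returns 0 when the digest carried in the PDU matches the payload. */
static int check_data_digest(const uint8_t *data, size_t len, uint32_t ddgst)
{
    return crc32c(data, len) == ddgst ? 0 : -1;
}

int main(void)
{
    uint8_t payload[512] = {0};               /* stand-in for one PDU's data */
    uint32_t ddgst = crc32c(payload, sizeof(payload));

    printf("intact payload:  %d\n", check_data_digest(payload, sizeof(payload), ddgst));
    payload[100] ^= 0x01;                     /* corrupt one bit "in flight" */
    printf("corrupt payload: %d\n", check_data_digest(payload, sizeof(payload), ddgst));
    return 0;
}

A corrupted payload fails the check even though the media was never touched, which is why the completions below report a transport-level error rather than a media error; with dnr:0 the host is free to retry.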
00:26:17.926 [2024-07-15 22:42:41.732786] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d75810) with pdu=0x2000190fef90
00:26:17.926 [2024-07-15 22:42:41.733040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:17.926 [2024-07-15 22:42:41.733058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
[... pattern continues from 22:42:41.737 through 22:42:41.998 with the same tqpair, pdu, and completion status, lba varying per command ...]
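For reading the completion lines: spdk_nvme_print_completion's "(00/22)" is (SCT/SC), i.e. status code type 0x0 (generic command status) and status code 0x22 (Command Transient Transport Error), and p/m/dnr come from the same 16-bit status field of the completion queue entry. A minimal decoder under those NVMe-base-spec assumptions follows; the struct and function names are illustrative, not SPDK's.

#include <stdint.h>
#include <stdio.h>

/* Unpacked view of the 16-bit status field in completion dword 3
 * (bit 0 = phase tag, bits 8:1 = SC, bits 11:9 = SCT, bit 14 = M,
 * bit 15 = DNR), per the NVMe base specification. */
struct status_fields {
    uint8_t p;     /* phase tag */
    uint8_t sc;    /* status code: 0x22 = Command Transient Transport Error */
    uint8_t sct;   /* status code type: 0x0 = generic command status */
    uint8_t m;     /* more information available in the error log page */
    uint8_t dnr;   /* do not retry: 0 here, so the host may resubmit */
};

static struct status_fields decode_status(uint16_t status)
{
    struct status_fields f;
    f.p   = status & 0x1;
    f.sc  = (status >> 1) & 0xff;
    f.sct = (status >> 9) & 0x7;
    f.m   = (status >> 14) & 0x1;
    f.dnr = (status >> 15) & 0x1;
    return f;
}

int main(void)
{
    /* SCT 0x0 / SC 0x22 with p:0 m:0 dnr:0 packs to 0x0044. */
    struct status_fields f = decode_status(0x0044);
    printf("(%02x/%02x) p:%u m:%u dnr:%u\n", f.sct, f.sc, f.p, f.m, f.dnr);
    return 0;
}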
00:26:18.187 [2024-07-15 22:42:42.003803] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d75810) with pdu=0x2000190fef90
00:26:18.187 [2024-07-15 22:42:42.004080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:18.187 [2024-07-15 22:42:42.004099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
[... the same pattern continues from 22:42:42.008 through 22:42:42.257, lba varying per command, sqhd still cycling 0001/0021/0041/0061 ...]
00:26:18.447 [2024-07-15 22:42:42.262844] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d75810) with pdu=0x2000190fef90
00:26:18.447 [2024-07-15 22:42:42.263105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE
sqid:1 cid:15 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.447 [2024-07-15 22:42:42.263124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:18.447 [2024-07-15 22:42:42.268462] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d75810) with pdu=0x2000190fef90 00:26:18.447 [2024-07-15 22:42:42.268732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.447 [2024-07-15 22:42:42.268751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:18.447 [2024-07-15 22:42:42.273898] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d75810) with pdu=0x2000190fef90 00:26:18.447 [2024-07-15 22:42:42.274157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.447 [2024-07-15 22:42:42.274174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:18.447 [2024-07-15 22:42:42.278727] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d75810) with pdu=0x2000190fef90 00:26:18.447 [2024-07-15 22:42:42.278987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.447 [2024-07-15 22:42:42.279005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.447 [2024-07-15 22:42:42.284525] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d75810) with pdu=0x2000190fef90 00:26:18.447 [2024-07-15 22:42:42.284779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.447 [2024-07-15 22:42:42.284797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:18.447 [2024-07-15 22:42:42.290237] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d75810) with pdu=0x2000190fef90 00:26:18.447 [2024-07-15 22:42:42.290501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.447 [2024-07-15 22:42:42.290518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:18.447 [2024-07-15 22:42:42.295559] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d75810) with pdu=0x2000190fef90 00:26:18.447 [2024-07-15 22:42:42.295830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.448 [2024-07-15 22:42:42.295848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:18.448 [2024-07-15 22:42:42.300929] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d75810) with pdu=0x2000190fef90 00:26:18.448 [2024-07-15 22:42:42.301192] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.448 [2024-07-15 22:42:42.301210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.448 [2024-07-15 22:42:42.306651] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d75810) with pdu=0x2000190fef90 00:26:18.448 [2024-07-15 22:42:42.306908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.448 [2024-07-15 22:42:42.306926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:18.448 [2024-07-15 22:42:42.312038] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d75810) with pdu=0x2000190fef90 00:26:18.448 [2024-07-15 22:42:42.312325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.448 [2024-07-15 22:42:42.312347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:18.448 [2024-07-15 22:42:42.316822] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d75810) with pdu=0x2000190fef90 00:26:18.448 [2024-07-15 22:42:42.317081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.448 [2024-07-15 22:42:42.317100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:18.448 [2024-07-15 22:42:42.321137] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d75810) with pdu=0x2000190fef90 00:26:18.448 [2024-07-15 22:42:42.321389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.448 [2024-07-15 22:42:42.321407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.448 [2024-07-15 22:42:42.325061] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d75810) with pdu=0x2000190fef90 00:26:18.448 [2024-07-15 22:42:42.325327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.448 [2024-07-15 22:42:42.325347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:18.448 [2024-07-15 22:42:42.329001] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d75810) with pdu=0x2000190fef90 00:26:18.448 [2024-07-15 22:42:42.329261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.448 [2024-07-15 22:42:42.329279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:18.448 [2024-07-15 22:42:42.332843] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d75810) with pdu=0x2000190fef90 00:26:18.448 
[2024-07-15 22:42:42.333101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.448 [2024-07-15 22:42:42.333119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:18.448 [2024-07-15 22:42:42.337124] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d75810) with pdu=0x2000190fef90 00:26:18.448 [2024-07-15 22:42:42.337385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.448 [2024-07-15 22:42:42.337404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.448 [2024-07-15 22:42:42.342172] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d75810) with pdu=0x2000190fef90 00:26:18.448 [2024-07-15 22:42:42.342443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.448 [2024-07-15 22:42:42.342461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:18.448 [2024-07-15 22:42:42.347186] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d75810) with pdu=0x2000190fef90 00:26:18.448 [2024-07-15 22:42:42.347465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.448 [2024-07-15 22:42:42.347484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:18.448 [2024-07-15 22:42:42.351792] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d75810) with pdu=0x2000190fef90 00:26:18.448 [2024-07-15 22:42:42.352051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.448 [2024-07-15 22:42:42.352071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:18.448 [2024-07-15 22:42:42.356391] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d75810) with pdu=0x2000190fef90 00:26:18.448 [2024-07-15 22:42:42.356670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.448 [2024-07-15 22:42:42.356689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.448 [2024-07-15 22:42:42.360855] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d75810) with pdu=0x2000190fef90 00:26:18.448 [2024-07-15 22:42:42.361116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.448 [2024-07-15 22:42:42.361135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:18.448 [2024-07-15 22:42:42.365271] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1d75810) with pdu=0x2000190fef90 00:26:18.448 [2024-07-15 22:42:42.365693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.448 [2024-07-15 22:42:42.365711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:18.448 [2024-07-15 22:42:42.370021] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d75810) with pdu=0x2000190fef90 00:26:18.448 [2024-07-15 22:42:42.370285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.448 [2024-07-15 22:42:42.370304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:18.448 [2024-07-15 22:42:42.374301] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d75810) with pdu=0x2000190fef90 00:26:18.448 [2024-07-15 22:42:42.374562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.448 [2024-07-15 22:42:42.374580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.448 [2024-07-15 22:42:42.378169] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d75810) with pdu=0x2000190fef90 00:26:18.448 [2024-07-15 22:42:42.378429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.448 [2024-07-15 22:42:42.378447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:18.448 [2024-07-15 22:42:42.382027] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d75810) with pdu=0x2000190fef90 00:26:18.448 [2024-07-15 22:42:42.382294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.448 [2024-07-15 22:42:42.382313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:18.448 [2024-07-15 22:42:42.386627] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d75810) with pdu=0x2000190fef90 00:26:18.448 [2024-07-15 22:42:42.386891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.448 [2024-07-15 22:42:42.386913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:18.448 [2024-07-15 22:42:42.391050] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d75810) with pdu=0x2000190fef90 00:26:18.449 [2024-07-15 22:42:42.391317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.449 [2024-07-15 22:42:42.391335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.449 [2024-07-15 22:42:42.394879] 
tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d75810) with pdu=0x2000190fef90 00:26:18.449 [2024-07-15 22:42:42.395138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.449 [2024-07-15 22:42:42.395156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:18.449 [2024-07-15 22:42:42.398751] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d75810) with pdu=0x2000190fef90 00:26:18.449 [2024-07-15 22:42:42.399008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.449 [2024-07-15 22:42:42.399026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:18.449 [2024-07-15 22:42:42.402692] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d75810) with pdu=0x2000190fef90 00:26:18.449 [2024-07-15 22:42:42.402944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.449 [2024-07-15 22:42:42.402962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:18.449 [2024-07-15 22:42:42.406766] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d75810) with pdu=0x2000190fef90 00:26:18.449 [2024-07-15 22:42:42.407018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.449 [2024-07-15 22:42:42.407037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.449 [2024-07-15 22:42:42.410637] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d75810) with pdu=0x2000190fef90 00:26:18.449 [2024-07-15 22:42:42.410882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.449 [2024-07-15 22:42:42.410901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:18.449 [2024-07-15 22:42:42.414510] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d75810) with pdu=0x2000190fef90 00:26:18.449 [2024-07-15 22:42:42.414769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.449 [2024-07-15 22:42:42.414787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:18.707 [2024-07-15 22:42:42.419500] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d75810) with pdu=0x2000190fef90 00:26:18.707 [2024-07-15 22:42:42.419784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.707 [2024-07-15 22:42:42.419803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 
00:26:18.707 [2024-07-15 22:42:42.423633] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d75810) with pdu=0x2000190fef90 00:26:18.707 [2024-07-15 22:42:42.423892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.707 [2024-07-15 22:42:42.423910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.707 [2024-07-15 22:42:42.427493] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d75810) with pdu=0x2000190fef90 00:26:18.707 [2024-07-15 22:42:42.427741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.707 [2024-07-15 22:42:42.427759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:18.707 [2024-07-15 22:42:42.438517] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d75810) with pdu=0x2000190fef90 00:26:18.707 [2024-07-15 22:42:42.438908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.707 [2024-07-15 22:42:42.438926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:18.707 [2024-07-15 22:42:42.445006] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d75810) with pdu=0x2000190fef90 00:26:18.707 [2024-07-15 22:42:42.445324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.707 [2024-07-15 22:42:42.445342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:18.707 [2024-07-15 22:42:42.451385] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d75810) with pdu=0x2000190fef90 00:26:18.707 [2024-07-15 22:42:42.451688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.707 [2024-07-15 22:42:42.451706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.707 [2024-07-15 22:42:42.455918] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d75810) with pdu=0x2000190fef90 00:26:18.707 [2024-07-15 22:42:42.456178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.707 [2024-07-15 22:42:42.456196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:18.707 [2024-07-15 22:42:42.460365] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d75810) with pdu=0x2000190fef90 00:26:18.707 [2024-07-15 22:42:42.460628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.707 [2024-07-15 22:42:42.460647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:18.707 [2024-07-15 22:42:42.464507] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d75810) with pdu=0x2000190fef90 00:26:18.707 [2024-07-15 22:42:42.464772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.707 [2024-07-15 22:42:42.464790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:18.707 [2024-07-15 22:42:42.469140] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d75810) with pdu=0x2000190fef90 00:26:18.707 [2024-07-15 22:42:42.469399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.707 [2024-07-15 22:42:42.469417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.707 [2024-07-15 22:42:42.473415] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d75810) with pdu=0x2000190fef90 00:26:18.707 [2024-07-15 22:42:42.473736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.707 [2024-07-15 22:42:42.473754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:18.707 [2024-07-15 22:42:42.478881] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d75810) with pdu=0x2000190fef90 00:26:18.707 [2024-07-15 22:42:42.479158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.707 [2024-07-15 22:42:42.479176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:18.707 [2024-07-15 22:42:42.484725] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d75810) with pdu=0x2000190fef90 00:26:18.707 [2024-07-15 22:42:42.485028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.707 [2024-07-15 22:42:42.485046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:18.707 [2024-07-15 22:42:42.489591] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d75810) with pdu=0x2000190fef90 00:26:18.708 [2024-07-15 22:42:42.489886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.708 [2024-07-15 22:42:42.489904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.708 [2024-07-15 22:42:42.494292] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d75810) with pdu=0x2000190fef90 00:26:18.708 [2024-07-15 22:42:42.494605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.708 [2024-07-15 22:42:42.494622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:18.708 [2024-07-15 22:42:42.498776] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d75810) with pdu=0x2000190fef90 00:26:18.708 [2024-07-15 22:42:42.499071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.708 [2024-07-15 22:42:42.499090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:18.708 [2024-07-15 22:42:42.504034] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d75810) with pdu=0x2000190fef90 00:26:18.708 [2024-07-15 22:42:42.504424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.708 [2024-07-15 22:42:42.504442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:18.708 [2024-07-15 22:42:42.510113] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d75810) with pdu=0x2000190fef90 00:26:18.708 [2024-07-15 22:42:42.510575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.708 [2024-07-15 22:42:42.510594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.708 [2024-07-15 22:42:42.521630] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d75810) with pdu=0x2000190fef90 00:26:18.708 [2024-07-15 22:42:42.522046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.708 [2024-07-15 22:42:42.522067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:18.708 [2024-07-15 22:42:42.530128] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d75810) with pdu=0x2000190fef90 00:26:18.708 [2024-07-15 22:42:42.530503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.708 [2024-07-15 22:42:42.530521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:18.708 [2024-07-15 22:42:42.537310] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d75810) with pdu=0x2000190fef90 00:26:18.708 [2024-07-15 22:42:42.537651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.708 [2024-07-15 22:42:42.537669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:18.708 [2024-07-15 22:42:42.544463] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d75810) with pdu=0x2000190fef90 00:26:18.708 [2024-07-15 22:42:42.544759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.708 [2024-07-15 22:42:42.544777] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.708 [2024-07-15 22:42:42.549155] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d75810) with pdu=0x2000190fef90 00:26:18.708 [2024-07-15 22:42:42.549431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.708 [2024-07-15 22:42:42.549449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:18.708 [2024-07-15 22:42:42.553095] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d75810) with pdu=0x2000190fef90 00:26:18.708 [2024-07-15 22:42:42.553377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.708 [2024-07-15 22:42:42.553396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:18.708 [2024-07-15 22:42:42.557108] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d75810) with pdu=0x2000190fef90 00:26:18.708 [2024-07-15 22:42:42.557382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.708 [2024-07-15 22:42:42.557401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:18.708 [2024-07-15 22:42:42.561423] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d75810) with pdu=0x2000190fef90 00:26:18.708 [2024-07-15 22:42:42.561677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.708 [2024-07-15 22:42:42.561695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:18.708 [2024-07-15 22:42:42.566975] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d75810) with pdu=0x2000190fef90 00:26:18.708 [2024-07-15 22:42:42.567238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.708 [2024-07-15 22:42:42.567257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:18.708 [2024-07-15 22:42:42.572624] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d75810) with pdu=0x2000190fef90 00:26:18.708 [2024-07-15 22:42:42.572886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.708 [2024-07-15 22:42:42.572904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:18.708 [2024-07-15 22:42:42.578045] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d75810) with pdu=0x2000190fef90 00:26:18.708 [2024-07-15 22:42:42.578320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:18.708 
[2024-07-15 22:42:42.578338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:18.708 [2024-07-15 22:42:42.584536] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d75810) with pdu=0x2000190fef90
00:26:18.708 [2024-07-15 22:42:42.584832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:18.708 [2024-07-15 22:42:42.584850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:18.708
00:26:18.708 Latency(us)
00:26:18.708 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:26:18.708 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072)
00:26:18.708 nvme0n1 : 2.00 5276.87 659.61 0.00 0.00 3027.39 1787.99 11454.55
00:26:18.708 ===================================================================================================================
00:26:18.708 Total : 5276.87 659.61 0.00 0.00 3027.39 1787.99 11454.55
00:26:18.708 0
00:26:18.708 22:42:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:26:18.708 22:42:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:26:18.708 22:42:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:26:18.708 | .driver_specific
00:26:18.708 | .nvme_error
00:26:18.708 | .status_code
00:26:18.708 | .command_transient_transport_error'
00:26:18.708 22:42:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:26:18.966 22:42:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 340 > 0 ))
00:26:18.966 22:42:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 149902
00:26:18.966 22:42:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 149902 ']'
00:26:18.966 22:42:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 149902
00:26:18.966 22:42:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname
00:26:18.966 22:42:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:26:18.966 22:42:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 149902
00:26:18.966 22:42:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:26:18.966 22:42:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:26:18.966 22:42:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 149902'
00:26:18.966 killing process with pid 149902
00:26:18.966 22:42:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 149902
00:26:18.966 Received shutdown signal, test time was about 2.000000 seconds
00:26:18.966
00:26:18.966 Latency(us)
00:26:18.966 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:26:18.966 ===================================================================================================================
00:26:18.966 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:26:18.966 22:42:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 149902
00:26:19.225 22:42:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 147784
00:26:19.225 22:42:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 147784 ']'
00:26:19.225 22:42:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 147784
00:26:19.225 22:42:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname
00:26:19.225 22:42:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:26:19.225 22:42:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 147784
00:26:19.225 22:42:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:26:19.225 22:42:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:26:19.225 22:42:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 147784'
00:26:19.225 killing process with pid 147784
00:26:19.225 22:42:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 147784
00:26:19.225 22:42:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 147784
00:26:19.490
00:26:19.490 real 0m16.762s
00:26:19.490 user 0m32.229s
00:26:19.490 sys 0m4.260s
00:26:19.490 22:42:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1124 -- # xtrace_disable
00:26:19.490 22:42:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:26:19.490 ************************************
00:26:19.490 END TEST nvmf_digest_error
00:26:19.490 ************************************
00:26:19.490 22:42:43 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1142 -- # return 0
00:26:19.490 22:42:43 nvmf_tcp.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT
00:26:19.490 22:42:43 nvmf_tcp.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini
00:26:19.490 22:42:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@488 -- # nvmfcleanup
00:26:19.490 22:42:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@117 -- # sync
00:26:19.490 22:42:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:26:19.490 22:42:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@120 -- # set +e
00:26:19.490 22:42:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@121 -- # for i in {1..20}
00:26:19.490 22:42:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:26:19.490 rmmod nvme_tcp
00:26:19.490 rmmod nvme_fabrics
00:26:19.490 rmmod nvme_keyring
00:26:19.490 22:42:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:26:19.490 22:42:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@124 -- # set -e
00:26:19.490 22:42:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@125 -- # return 0
00:26:19.490 22:42:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@489 -- # '[' -n 147784 ']'
00:26:19.490 22:42:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@490 -- # killprocess 147784
00:26:19.490 22:42:43 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@948 -- # '[' -z 147784 ']'
00:26:19.490 22:42:43 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@952 -- # kill -0 147784
00:26:19.490 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952:
kill: (147784) - No such process
00:26:19.490 22:42:43 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@975 -- # echo 'Process with pid 147784 is not found'
Process with pid 147784 is not found
00:26:19.490 22:42:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:26:19.490 22:42:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:26:19.490 22:42:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:26:19.490 22:42:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:26:19.490 22:42:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@278 -- # remove_spdk_ns
00:26:19.490 22:42:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:26:19.490 22:42:43 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:26:19.490 22:42:43 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:26:22.059 22:42:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:26:22.059
00:26:22.059 real 0m41.270s
00:26:22.059 user 1m6.105s
00:26:22.059 sys 0m12.706s
00:26:22.059 22:42:45 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1124 -- # xtrace_disable
00:26:22.059 22:42:45 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x
00:26:22.059 ************************************
00:26:22.059 END TEST nvmf_digest
00:26:22.059 ************************************
00:26:22.059 22:42:45 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0
00:26:22.059 22:42:45 nvmf_tcp -- nvmf/nvmf.sh@111 -- # [[ 0 -eq 1 ]]
00:26:22.059 22:42:45 nvmf_tcp -- nvmf/nvmf.sh@116 -- # [[ 0 -eq 1 ]]
00:26:22.059 22:42:45 nvmf_tcp -- nvmf/nvmf.sh@121 -- # [[ phy == phy ]]
00:26:22.059 22:42:45 nvmf_tcp -- nvmf/nvmf.sh@122 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp
00:26:22.060 22:42:45 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']'
00:26:22.060 22:42:45 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable
00:26:22.060 22:42:45 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:26:22.060 ************************************
00:26:22.060 START TEST nvmf_bdevperf
00:26:22.060 ************************************
00:26:22.060 22:42:45 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp
00:26:22.060 * Looking for test storage...
00:26:22.060 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:22.060 22:42:45 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:22.060 22:42:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:26:22.060 22:42:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:22.060 22:42:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:22.060 22:42:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:22.060 22:42:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:22.060 22:42:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:22.060 22:42:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:22.060 22:42:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:22.060 22:42:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:22.060 22:42:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:22.060 22:42:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:22.060 22:42:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:26:22.060 22:42:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:26:22.060 22:42:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:22.060 22:42:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:22.060 22:42:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:22.060 22:42:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:22.060 22:42:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:22.060 22:42:45 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:22.060 22:42:45 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:22.060 22:42:45 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:22.060 22:42:45 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:22.060 22:42:45 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:22.060 22:42:45 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:22.060 22:42:45 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:26:22.060 22:42:45 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:22.060 22:42:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@47 -- # : 0 00:26:22.060 22:42:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:22.060 22:42:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:22.060 22:42:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:22.060 22:42:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:22.060 22:42:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:22.060 22:42:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:22.060 22:42:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:22.060 22:42:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:22.060 22:42:45 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:26:22.060 22:42:45 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:26:22.060 22:42:45 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:26:22.060 22:42:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:22.060 22:42:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:22.060 22:42:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:22.060 22:42:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:22.060 22:42:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:22.060 22:42:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:22.060 22:42:45 nvmf_tcp.nvmf_bdevperf -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:22.060 22:42:45 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:22.060 22:42:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:22.060 22:42:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:22.060 22:42:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@285 -- # xtrace_disable 00:26:22.060 22:42:45 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:27.336 22:42:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:27.336 22:42:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@291 -- # pci_devs=() 00:26:27.336 22:42:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:27.336 22:42:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:27.336 22:42:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:27.336 22:42:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:27.336 22:42:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:27.336 22:42:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@295 -- # net_devs=() 00:26:27.336 22:42:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:27.336 22:42:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@296 -- # e810=() 00:26:27.336 22:42:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@296 -- # local -ga e810 00:26:27.336 22:42:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@297 -- # x722=() 00:26:27.336 22:42:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@297 -- # local -ga x722 00:26:27.336 22:42:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@298 -- # mlx=() 00:26:27.336 22:42:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@298 -- # local -ga mlx 00:26:27.336 22:42:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:27.336 22:42:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:27.336 22:42:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:27.336 22:42:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:27.336 22:42:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:27.336 22:42:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:27.336 22:42:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:27.336 22:42:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:27.336 22:42:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:27.336 22:42:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:27.336 22:42:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:27.336 22:42:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:27.336 22:42:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:27.336 22:42:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:27.336 22:42:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:27.336 22:42:50 nvmf_tcp.nvmf_bdevperf -- 
nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:27.336 22:42:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:27.336 22:42:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:27.336 22:42:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:26:27.336 Found 0000:86:00.0 (0x8086 - 0x159b) 00:26:27.336 22:42:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:27.336 22:42:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:27.336 22:42:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:27.336 22:42:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:27.337 22:42:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:27.337 22:42:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:27.337 22:42:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:26:27.337 Found 0000:86:00.1 (0x8086 - 0x159b) 00:26:27.337 22:42:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:27.337 22:42:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:27.337 22:42:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:27.337 22:42:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:27.337 22:42:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:27.337 22:42:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:27.337 22:42:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:27.337 22:42:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:27.337 22:42:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:27.337 22:42:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:27.337 22:42:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:27.337 22:42:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:27.337 22:42:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:27.337 22:42:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:27.337 22:42:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:27.337 22:42:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:26:27.337 Found net devices under 0000:86:00.0: cvl_0_0 00:26:27.337 22:42:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:27.337 22:42:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:27.337 22:42:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:27.337 22:42:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:27.337 22:42:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:27.337 22:42:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:27.337 22:42:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:27.337 22:42:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:27.337 22:42:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:26:27.337 Found net devices under 0000:86:00.1: cvl_0_1 00:26:27.337 22:42:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:27.337 22:42:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:27.337 22:42:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # is_hw=yes 00:26:27.337 22:42:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:27.337 22:42:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:26:27.337 22:42:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:26:27.337 22:42:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:27.337 22:42:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:27.337 22:42:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:27.337 22:42:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:27.337 22:42:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:27.337 22:42:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:27.337 22:42:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:27.337 22:42:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:27.337 22:42:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:27.337 22:42:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:27.337 22:42:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:27.337 22:42:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:27.337 22:42:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:27.337 22:42:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:27.337 22:42:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:27.337 22:42:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:27.337 22:42:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:27.337 22:42:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:27.337 22:42:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:27.337 22:42:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:27.337 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:27.337 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.315 ms 00:26:27.337 00:26:27.337 --- 10.0.0.2 ping statistics --- 00:26:27.337 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:27.337 rtt min/avg/max/mdev = 0.315/0.315/0.315/0.000 ms 00:26:27.337 22:42:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:27.337 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:27.337 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.163 ms 00:26:27.337 00:26:27.337 --- 10.0.0.1 ping statistics --- 00:26:27.337 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:27.337 rtt min/avg/max/mdev = 0.163/0.163/0.163/0.000 ms 00:26:27.337 22:42:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:27.337 22:42:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@422 -- # return 0 00:26:27.337 22:42:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:27.337 22:42:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:27.337 22:42:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:27.337 22:42:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:27.337 22:42:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:27.337 22:42:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:27.337 22:42:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:27.337 22:42:50 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:26:27.337 22:42:50 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:26:27.337 22:42:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:27.337 22:42:50 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@722 -- # xtrace_disable 00:26:27.337 22:42:50 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:27.337 22:42:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=153906 00:26:27.337 22:42:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:26:27.337 22:42:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 153906 00:26:27.337 22:42:50 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@829 -- # '[' -z 153906 ']' 00:26:27.337 22:42:50 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:27.337 22:42:50 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:27.337 22:42:50 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:27.337 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:27.337 22:42:50 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:27.337 22:42:50 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:27.337 [2024-07-15 22:42:50.869907] Starting SPDK v24.09-pre git sha1 f8598a71f / DPDK 24.03.0 initialization... 00:26:27.337 [2024-07-15 22:42:50.869949] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:27.337 EAL: No free 2048 kB hugepages reported on node 1 00:26:27.337 [2024-07-15 22:42:50.928016] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:26:27.337 [2024-07-15 22:42:51.006396] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:26:27.337 [2024-07-15 22:42:51.006438] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:27.337 [2024-07-15 22:42:51.006446] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:27.337 [2024-07-15 22:42:51.006452] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:27.337 [2024-07-15 22:42:51.006457] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:27.337 [2024-07-15 22:42:51.006501] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:26:27.337 [2024-07-15 22:42:51.006759] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:26:27.337 [2024-07-15 22:42:51.006762] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:27.904 22:42:51 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:27.904 22:42:51 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@862 -- # return 0 00:26:27.904 22:42:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:27.904 22:42:51 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@728 -- # xtrace_disable 00:26:27.904 22:42:51 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:27.904 22:42:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:27.904 22:42:51 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:27.904 22:42:51 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:27.904 22:42:51 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:27.905 [2024-07-15 22:42:51.723732] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:27.905 22:42:51 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:27.905 22:42:51 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:26:27.905 22:42:51 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:27.905 22:42:51 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:27.905 Malloc0 00:26:27.905 22:42:51 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:27.905 22:42:51 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:27.905 22:42:51 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:27.905 22:42:51 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:27.905 22:42:51 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:27.905 22:42:51 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:27.905 22:42:51 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:27.905 22:42:51 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:27.905 22:42:51 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:27.905 22:42:51 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:27.905 22:42:51 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 
00:26:27.905 22:42:51 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:26:27.905 [2024-07-15 22:42:51.796432] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:26:27.905 22:42:51 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:27.905 22:42:51 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1
00:26:27.905 22:42:51 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json
00:26:27.905 22:42:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=()
00:26:27.905 22:42:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config
00:26:27.905 22:42:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}"
00:26:27.905 22:42:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF
00:26:27.905 {
00:26:27.905 "params": {
00:26:27.905 "name": "Nvme$subsystem",
00:26:27.905 "trtype": "$TEST_TRANSPORT",
00:26:27.905 "traddr": "$NVMF_FIRST_TARGET_IP",
00:26:27.905 "adrfam": "ipv4",
00:26:27.905 "trsvcid": "$NVMF_PORT",
00:26:27.905 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:26:27.905 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:26:27.905 "hdgst": ${hdgst:-false},
00:26:27.905 "ddgst": ${ddgst:-false}
00:26:27.905 },
00:26:27.905 "method": "bdev_nvme_attach_controller"
00:26:27.905 }
00:26:27.905 EOF
00:26:27.905 )")
00:26:27.905 22:42:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat
00:26:27.905 22:42:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq .
00:26:27.905 22:42:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=,
00:26:27.905 22:42:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{
00:26:27.905 "params": {
00:26:27.905 "name": "Nvme1",
00:26:27.905 "trtype": "tcp",
00:26:27.905 "traddr": "10.0.0.2",
00:26:27.905 "adrfam": "ipv4",
00:26:27.905 "trsvcid": "4420",
00:26:27.905 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:26:27.905 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:26:27.905 "hdgst": false,
00:26:27.905 "ddgst": false
00:26:27.905 },
00:26:27.905 "method": "bdev_nvme_attach_controller"
00:26:27.905 }'
00:26:27.905 [2024-07-15 22:42:51.844249] Starting SPDK v24.09-pre git sha1 f8598a71f / DPDK 24.03.0 initialization...
00:26:27.905 [2024-07-15 22:42:51.844291] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid154155 ]
00:26:28.164 EAL: No free 2048 kB hugepages reported on node 1
00:26:28.164 [2024-07-15 22:42:51.898050] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:26:28.164 [2024-07-15 22:42:51.972026] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:26:28.164 Running I/O for 1 seconds...
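While the one-second verify pass runs, it is worth unpacking the JSON that was just handed to bdevperf on /dev/fd/62: gen_nvmf_target_json appends one heredoc fragment per subsystem to the config array, then compacts and prints the result through jq, exactly as the xtrace above shows. A minimal stand-alone sketch of the same idea, hard-coding the address, port, and NQN used in this run (illustrative only, not the exact nvmf/common.sh implementation):

gen_attach_json() {
  # The real helper emits one fragment per subsystem; a single
  # hard-coded fragment is enough to mirror this run.
  cat <<EOF | jq .
{
  "params": {
    "name": "Nvme1",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode1",
    "hostnqn": "nqn.2016-06.io.spdk:host1",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
}

gen_attach_json   # same shape as the printf output in the trace above

bdevperf consumes this fragment, issues the bdev_nvme_attach_controller call before starting the verify workload, and that is why the attach parameters appear in the trace ahead of any I/O.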
00:26:29.543
00:26:29.543 Latency(us)
00:26:29.543 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:26:29.543 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:26:29.543 Verification LBA range: start 0x0 length 0x4000
00:26:29.543 Nvme1n1 : 1.01 11406.64 44.56 0.00 0.00 11177.85 2308.01 14019.01
00:26:29.543 ===================================================================================================================
00:26:29.543 Total : 11406.64 44.56 0.00 0.00 11177.85 2308.01 14019.01
00:26:29.543 22:42:53 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=154387
00:26:29.543 22:42:53 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3
00:26:29.543 22:42:53 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f
00:26:29.543 22:42:53 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json
00:26:29.543 22:42:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=()
00:26:29.543 22:42:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config
00:26:29.543 22:42:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}"
00:26:29.543 22:42:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF
00:26:29.543 {
00:26:29.543 "params": {
00:26:29.543 "name": "Nvme$subsystem",
00:26:29.543 "trtype": "$TEST_TRANSPORT",
00:26:29.543 "traddr": "$NVMF_FIRST_TARGET_IP",
00:26:29.543 "adrfam": "ipv4",
00:26:29.543 "trsvcid": "$NVMF_PORT",
00:26:29.543 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:26:29.543 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:26:29.543 "hdgst": ${hdgst:-false},
00:26:29.543 "ddgst": ${ddgst:-false}
00:26:29.543 },
00:26:29.543 "method": "bdev_nvme_attach_controller"
00:26:29.543 }
00:26:29.543 EOF
00:26:29.543 )")
00:26:29.543 22:42:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat
00:26:29.543 22:42:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq .
00:26:29.543 22:42:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=,
00:26:29.543 22:42:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{
00:26:29.543 "params": {
00:26:29.543 "name": "Nvme1",
00:26:29.543 "trtype": "tcp",
00:26:29.543 "traddr": "10.0.0.2",
00:26:29.543 "adrfam": "ipv4",
00:26:29.543 "trsvcid": "4420",
00:26:29.543 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:26:29.543 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:26:29.543 "hdgst": false,
00:26:29.543 "ddgst": false
00:26:29.543 },
00:26:29.543 "method": "bdev_nvme_attach_controller"
00:26:29.543 }'
00:26:29.543 [2024-07-15 22:42:53.369039] Starting SPDK v24.09-pre git sha1 f8598a71f / DPDK 24.03.0 initialization...
00:26:29.543 [2024-07-15 22:42:53.369087] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid154387 ]
00:26:29.543 EAL: No free 2048 kB hugepages reported on node 1
00:26:29.543 [2024-07-15 22:42:53.424228] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:26:29.543 [2024-07-15 22:42:53.493952] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:26:29.801 Running I/O for 15 seconds...
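The 15-second run (pid 154387) uses the same generated config, this time on /dev/fd/63, plus an extra -f flag which, in this flow, evidently keeps bdevperf going through the target failure induced next. For reference while the abort flood below scrolls past, the target-side state it attaches to was built by the rpc_cmd calls traced earlier; driven by hand with the stock scripts/rpc.py against the running nvmf_tgt, the equivalent sequence would be roughly:

# Condensed from the rpc_cmd trace above (paths relative to the spdk repo).
./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420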
00:26:33.094 22:42:56 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 153906 00:26:33.094 22:42:56 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:26:33.094 [2024-07-15 22:42:56.346895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:107920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.094 [2024-07-15 22:42:56.346943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.094 [2024-07-15 22:42:56.346962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:107928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.094 [2024-07-15 22:42:56.346969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.094 [2024-07-15 22:42:56.346978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:107936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.094 [2024-07-15 22:42:56.346985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.094 [2024-07-15 22:42:56.346993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:107944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.094 [2024-07-15 22:42:56.347000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.094 [2024-07-15 22:42:56.347008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:107952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.094 [2024-07-15 22:42:56.347015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.094 [2024-07-15 22:42:56.347023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:107960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.095 [2024-07-15 22:42:56.347029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.095 [2024-07-15 22:42:56.347038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:107968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.095 [2024-07-15 22:42:56.347044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.095 [2024-07-15 22:42:56.347052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:107976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.095 [2024-07-15 22:42:56.347058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.095 [2024-07-15 22:42:56.347066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:107984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.095 [2024-07-15 22:42:56.347073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.095 [2024-07-15 22:42:56.347081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:107992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.095 [2024-07-15 22:42:56.347088] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.095 [2024-07-15 22:42:56.347096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:108000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.095 [2024-07-15 22:42:56.347102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.095 [2024-07-15 22:42:56.347110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:108008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.095 [2024-07-15 22:42:56.347117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.095 [2024-07-15 22:42:56.347126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:108016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.095 [2024-07-15 22:42:56.347134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.095 [2024-07-15 22:42:56.347143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:108024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.095 [2024-07-15 22:42:56.347153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.095 [2024-07-15 22:42:56.347162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:108032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.095 [2024-07-15 22:42:56.347169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.095 [2024-07-15 22:42:56.347177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:108040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.095 [2024-07-15 22:42:56.347186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.095 [2024-07-15 22:42:56.347194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:108048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.095 [2024-07-15 22:42:56.347203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.095 [2024-07-15 22:42:56.347214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:108056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.095 [2024-07-15 22:42:56.347222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.095 [2024-07-15 22:42:56.347235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:108064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.095 [2024-07-15 22:42:56.347244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.095 [2024-07-15 22:42:56.347252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:108072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.095 [2024-07-15 22:42:56.347260] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.095 [2024-07-15 22:42:56.347270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:108080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.095 [2024-07-15 22:42:56.347277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.095 [2024-07-15 22:42:56.347288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:108088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.095 [2024-07-15 22:42:56.347296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.095 [2024-07-15 22:42:56.347305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:108096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.095 [2024-07-15 22:42:56.347312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.095 [2024-07-15 22:42:56.347320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:108104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.095 [2024-07-15 22:42:56.347327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.095 [2024-07-15 22:42:56.347336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:108112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.095 [2024-07-15 22:42:56.347343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.095 [2024-07-15 22:42:56.347351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:108120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.095 [2024-07-15 22:42:56.347357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.095 [2024-07-15 22:42:56.347368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:108128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.095 [2024-07-15 22:42:56.347375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.095 [2024-07-15 22:42:56.347385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:108136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.095 [2024-07-15 22:42:56.347393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.095 [2024-07-15 22:42:56.347402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:108144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.095 [2024-07-15 22:42:56.347410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.095 [2024-07-15 22:42:56.347420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:108152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.095 [2024-07-15 22:42:56.347427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.095 [2024-07-15 22:42:56.347435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:108160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.095 [2024-07-15 22:42:56.347442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.095 [2024-07-15 22:42:56.347450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:108168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.095 [2024-07-15 22:42:56.347458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.095 [2024-07-15 22:42:56.347466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:108176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.095 [2024-07-15 22:42:56.347473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.095 [2024-07-15 22:42:56.347482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:108184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.095 [2024-07-15 22:42:56.347488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.095 [2024-07-15 22:42:56.347497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:108192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.095 [2024-07-15 22:42:56.347504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.095 [2024-07-15 22:42:56.347511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:108200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.095 [2024-07-15 22:42:56.347518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.095 [2024-07-15 22:42:56.347528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:108208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.095 [2024-07-15 22:42:56.347535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.095 [2024-07-15 22:42:56.347543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:108216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.095 [2024-07-15 22:42:56.347549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.095 [2024-07-15 22:42:56.347557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:108224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.095 [2024-07-15 22:42:56.347565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.095 [2024-07-15 22:42:56.347573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:108232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.095 [2024-07-15 22:42:56.347584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.095 [2024-07-15 22:42:56.347597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:108240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.095 [2024-07-15 22:42:56.347605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.095 [2024-07-15 22:42:56.347614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:108248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.095 [2024-07-15 22:42:56.347624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.095 [2024-07-15 22:42:56.347635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:108256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.095 [2024-07-15 22:42:56.347645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.095 [2024-07-15 22:42:56.347656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:108264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.095 [2024-07-15 22:42:56.347666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.095 [2024-07-15 22:42:56.347676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:108272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.095 [2024-07-15 22:42:56.347683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.095 [2024-07-15 22:42:56.347693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:108280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.095 [2024-07-15 22:42:56.347704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.095 [2024-07-15 22:42:56.347715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:108288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.095 [2024-07-15 22:42:56.347726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.096 [2024-07-15 22:42:56.347737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:108296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.096 [2024-07-15 22:42:56.347746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.096 [2024-07-15 22:42:56.347758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:108304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.096 [2024-07-15 22:42:56.347767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.096 [2024-07-15 22:42:56.347780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:108312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.096 [2024-07-15 22:42:56.347791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.096 
[2024-07-15 22:42:56.347803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:108320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.096 [2024-07-15 22:42:56.347814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.096 [2024-07-15 22:42:56.347826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:108328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.096 [2024-07-15 22:42:56.347835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.096 [2024-07-15 22:42:56.347843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:108336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.096 [2024-07-15 22:42:56.347850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.096 [2024-07-15 22:42:56.347858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:108344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.096 [2024-07-15 22:42:56.347864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.096 [2024-07-15 22:42:56.347872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:108352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.096 [2024-07-15 22:42:56.347879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.096 [2024-07-15 22:42:56.347887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:108360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.096 [2024-07-15 22:42:56.347894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.096 [2024-07-15 22:42:56.347902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:108368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.096 [2024-07-15 22:42:56.347909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.096 [2024-07-15 22:42:56.347916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:108376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.096 [2024-07-15 22:42:56.347923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.096 [2024-07-15 22:42:56.347931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:108384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.096 [2024-07-15 22:42:56.347938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.096 [2024-07-15 22:42:56.347946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:108392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.096 [2024-07-15 22:42:56.347952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.096 [2024-07-15 22:42:56.347960] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:108400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.096 [2024-07-15 22:42:56.347966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.096 [2024-07-15 22:42:56.347975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:108408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.096 [2024-07-15 22:42:56.347981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.096 [2024-07-15 22:42:56.347989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:108416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.096 [2024-07-15 22:42:56.347995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.096 [2024-07-15 22:42:56.348003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:108424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.096 [2024-07-15 22:42:56.348011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.096 [2024-07-15 22:42:56.348019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:108432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.096 [2024-07-15 22:42:56.348026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.096 [2024-07-15 22:42:56.348034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:108440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.096 [2024-07-15 22:42:56.348041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.096 [2024-07-15 22:42:56.348049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:108448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.096 [2024-07-15 22:42:56.348056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.096 [2024-07-15 22:42:56.348065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:108456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.096 [2024-07-15 22:42:56.348073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.096 [2024-07-15 22:42:56.348080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:108464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.096 [2024-07-15 22:42:56.348087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.096 [2024-07-15 22:42:56.348095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:108472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.096 [2024-07-15 22:42:56.348101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.096 [2024-07-15 22:42:56.348109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:76 nsid:1 lba:108480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.096 [2024-07-15 22:42:56.348115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.096 [2024-07-15 22:42:56.348123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:108488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.096 [2024-07-15 22:42:56.348130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.096 [2024-07-15 22:42:56.348138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:108496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.096 [2024-07-15 22:42:56.348144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.096 [2024-07-15 22:42:56.348151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:108504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.096 [2024-07-15 22:42:56.348158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.096 [2024-07-15 22:42:56.348166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:108512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.096 [2024-07-15 22:42:56.348172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.096 [2024-07-15 22:42:56.348180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:108520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.096 [2024-07-15 22:42:56.348187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.096 [2024-07-15 22:42:56.348196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.096 [2024-07-15 22:42:56.348202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.096 [2024-07-15 22:42:56.348210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:108536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.096 [2024-07-15 22:42:56.348216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.096 [2024-07-15 22:42:56.348229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:108544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.096 [2024-07-15 22:42:56.348236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.096 [2024-07-15 22:42:56.348244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:108552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.096 [2024-07-15 22:42:56.348252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.096 [2024-07-15 22:42:56.348260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:108560 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.096 [2024-07-15 22:42:56.348267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.096 [2024-07-15 22:42:56.348280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:108568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.096 [2024-07-15 22:42:56.348287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.096 [2024-07-15 22:42:56.348295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:108576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.096 [2024-07-15 22:42:56.348302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.096 [2024-07-15 22:42:56.348309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:108584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.096 [2024-07-15 22:42:56.348316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.096 [2024-07-15 22:42:56.348323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:108592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.096 [2024-07-15 22:42:56.348330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.096 [2024-07-15 22:42:56.348339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:108600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.096 [2024-07-15 22:42:56.348345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.096 [2024-07-15 22:42:56.348353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:108608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.096 [2024-07-15 22:42:56.348360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.096 [2024-07-15 22:42:56.348367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:108616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.096 [2024-07-15 22:42:56.348375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.096 [2024-07-15 22:42:56.348383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:108624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.096 [2024-07-15 22:42:56.348390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.096 [2024-07-15 22:42:56.348399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:108632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.097 [2024-07-15 22:42:56.348405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.097 [2024-07-15 22:42:56.348413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:108640 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:26:33.097 [2024-07-15 22:42:56.348419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.097 [2024-07-15 22:42:56.348427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:108648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.097 [2024-07-15 22:42:56.348433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.097 [2024-07-15 22:42:56.348441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:108656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.097 [2024-07-15 22:42:56.348448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.097 [2024-07-15 22:42:56.348456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:108664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.097 [2024-07-15 22:42:56.348462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.097 [2024-07-15 22:42:56.348470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:108672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.097 [2024-07-15 22:42:56.348476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.097 [2024-07-15 22:42:56.348484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:108680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.097 [2024-07-15 22:42:56.348491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.097 [2024-07-15 22:42:56.348498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:108688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.097 [2024-07-15 22:42:56.348505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.097 [2024-07-15 22:42:56.348514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:108696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.097 [2024-07-15 22:42:56.348520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.097 [2024-07-15 22:42:56.348528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:108704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.097 [2024-07-15 22:42:56.348534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.097 [2024-07-15 22:42:56.348542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:108712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.097 [2024-07-15 22:42:56.348549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.097 [2024-07-15 22:42:56.348557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:108720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.097 [2024-07-15 
22:42:56.348564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.097 [2024-07-15 22:42:56.348572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:108728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.097 [2024-07-15 22:42:56.348582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.097 [2024-07-15 22:42:56.348591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:108736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.097 [2024-07-15 22:42:56.348597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.097 [2024-07-15 22:42:56.348605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:108744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.097 [2024-07-15 22:42:56.348612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.097 [2024-07-15 22:42:56.348619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:108752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.097 [2024-07-15 22:42:56.348626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.097 [2024-07-15 22:42:56.348634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:108760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.097 [2024-07-15 22:42:56.348641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.097 [2024-07-15 22:42:56.348649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:108768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.097 [2024-07-15 22:42:56.348655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.097 [2024-07-15 22:42:56.348663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:108776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.097 [2024-07-15 22:42:56.348669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.097 [2024-07-15 22:42:56.348677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:108784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.097 [2024-07-15 22:42:56.348683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.097 [2024-07-15 22:42:56.348691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:108792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.097 [2024-07-15 22:42:56.348698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.097 [2024-07-15 22:42:56.348705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:108800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.097 [2024-07-15 22:42:56.348711] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.097 [2024-07-15 22:42:56.348719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:107792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.097 [2024-07-15 22:42:56.348726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.097 [2024-07-15 22:42:56.348734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:107800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.097 [2024-07-15 22:42:56.348740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.097 [2024-07-15 22:42:56.348749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:107808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.097 [2024-07-15 22:42:56.348756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.097 [2024-07-15 22:42:56.348765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:107816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.097 [2024-07-15 22:42:56.348772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.097 [2024-07-15 22:42:56.348780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:107824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.097 [2024-07-15 22:42:56.348787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.097 [2024-07-15 22:42:56.348795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:107832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.097 [2024-07-15 22:42:56.348801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.097 [2024-07-15 22:42:56.348810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:107840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.097 [2024-07-15 22:42:56.348816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.097 [2024-07-15 22:42:56.348823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:107848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.097 [2024-07-15 22:42:56.348830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.097 [2024-07-15 22:42:56.348838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:107856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.097 [2024-07-15 22:42:56.348845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.097 [2024-07-15 22:42:56.348852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:107864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.097 [2024-07-15 22:42:56.348858] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.097 [2024-07-15 22:42:56.348866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:107872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.097 [2024-07-15 22:42:56.348874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.097 [2024-07-15 22:42:56.348883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:107880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.097 [2024-07-15 22:42:56.348889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.097 [2024-07-15 22:42:56.348897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:107888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.097 [2024-07-15 22:42:56.348903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.097 [2024-07-15 22:42:56.348911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:107896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.097 [2024-07-15 22:42:56.348917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.097 [2024-07-15 22:42:56.348925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:107904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.097 [2024-07-15 22:42:56.348932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.097 [2024-07-15 22:42:56.348940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:108808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:33.097 [2024-07-15 22:42:56.348947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.097 [2024-07-15 22:42:56.348955] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0bc70 is same with the state(5) to be set 00:26:33.097 [2024-07-15 22:42:56.348963] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:33.097 [2024-07-15 22:42:56.348968] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:33.097 [2024-07-15 22:42:56.348974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:107912 len:8 PRP1 0x0 PRP2 0x0 00:26:33.097 [2024-07-15 22:42:56.348981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.097 [2024-07-15 22:42:56.349025] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xb0bc70 was disconnected and freed. reset controller. 
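Each aborted command above is logged as a command/completion pair: the READ line carries the submission queue id (sqid), command id (cid), namespace id (nsid), starting LBA and length, and the matching completion line decodes the status as ABORTED - SQ DELETION (00/08), i.e. status code type 0x0 (generic command status) with status code 0x08 (command aborted due to SQ deletion), plus the SQ head pointer (sqhd) and the phase (p), more (m), and do-not-retry (dnr) bits. A minimal standalone decoder for that (SCT/SC) pair is sketched below; it is not an SPDK API, and the helper name is invented for illustration.

#include <stdio.h>

/* decode_generic_sc() is a hypothetical helper, not part of SPDK: it maps a
 * few Generic Command Status codes (SCT 0x0) from the NVMe base spec to the
 * strings seen in the log above. */
static const char *decode_generic_sc(unsigned int sc)
{
	switch (sc) {
	case 0x00: return "SUCCESSFUL COMPLETION";
	case 0x07: return "COMMAND ABORT REQUESTED";
	case 0x08: return "ABORTED - SQ DELETION";
	default:   return "(other generic status code)";
	}
}

int main(void)
{
	unsigned int sct = 0x00, sc = 0x08;	/* the "(00/08)" pair from the log */

	if (sct == 0x00) {
		printf("SCT 0x%02x / SC 0x%02x: %s\n", sct, sc, decode_generic_sc(sc));
	}
	return 0;
}

Since these completions all carry qid:1, they are I/O queue commands, and a whole in-flight batch being aborted at once is exactly what deleting the submission queue implies.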
00:26:33.097 [2024-07-15 22:42:56.351857] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.097 [2024-07-15 22:42:56.351910] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8da980 (9): Bad file descriptor 00:26:33.097 [2024-07-15 22:42:56.352605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.097 [2024-07-15 22:42:56.352621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8da980 with addr=10.0.0.2, port=4420 00:26:33.097 [2024-07-15 22:42:56.352628] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8da980 is same with the state(5) to be set 00:26:33.097 [2024-07-15 22:42:56.352807] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8da980 (9): Bad file descriptor 00:26:33.098 [2024-07-15 22:42:56.352985] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.098 [2024-07-15 22:42:56.352993] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.098 [2024-07-15 22:42:56.353000] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.098 [2024-07-15 22:42:56.355779] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:33.098 [2024-07-15 22:42:56.364995] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.098 [2024-07-15 22:42:56.365529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.098 [2024-07-15 22:42:56.365575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8da980 with addr=10.0.0.2, port=4420 00:26:33.098 [2024-07-15 22:42:56.365597] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8da980 is same with the state(5) to be set 00:26:33.098 [2024-07-15 22:42:56.366179] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8da980 (9): Bad file descriptor 00:26:33.098 [2024-07-15 22:42:56.366457] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.098 [2024-07-15 22:42:56.366466] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.098 [2024-07-15 22:42:56.366472] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.098 [2024-07-15 22:42:56.369219] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
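From this point the test settles into one repeating reconnect cycle: bdev_nvme disconnects the controller, nvme_tcp opens a fresh socket to 10.0.0.2 on port 4420 (the IANA-assigned NVMe/TCP port), connect() fails with errno 111, and the reset attempt is declared failed. Errno 111 is ECONNREFUSED on Linux, the error connect() reports when nothing is listening at the target address, which is consistent with the target subsystem having just been torn down by the test. The standalone probe below reproduces that failure mode outside SPDK (it is a sketch, not posix_sock_create); it assumes, as in the log, that nothing is listening on 10.0.0.2:4420 when it runs.

#include <arpa/inet.h>
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
	struct sockaddr_in sa = {
		.sin_family = AF_INET,
		.sin_port = htons(4420),	/* NVMe/TCP port from the log */
	};
	int fd = socket(AF_INET, SOCK_STREAM, 0);

	if (fd < 0) {
		return 1;
	}
	inet_pton(AF_INET, "10.0.0.2", &sa.sin_addr);
	if (connect(fd, (struct sockaddr *)&sa, sizeof(sa)) < 0) {
		/* With no listener, this prints: connect() failed, errno = 111 */
		printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
	}
	close(fd);
	return 0;
}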
00:26:33.098 [2024-07-15 22:42:56.377927] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.098 [2024-07-15 22:42:56.378410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.098 [2024-07-15 22:42:56.378457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8da980 with addr=10.0.0.2, port=4420 00:26:33.098 [2024-07-15 22:42:56.378479] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8da980 is same with the state(5) to be set 00:26:33.098 [2024-07-15 22:42:56.379069] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8da980 (9): Bad file descriptor 00:26:33.098 [2024-07-15 22:42:56.379666] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.098 [2024-07-15 22:42:56.379675] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.098 [2024-07-15 22:42:56.379682] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.098 [2024-07-15 22:42:56.382390] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:33.098 [2024-07-15 22:42:56.390953] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.098 [2024-07-15 22:42:56.391458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.098 [2024-07-15 22:42:56.391503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8da980 with addr=10.0.0.2, port=4420 00:26:33.098 [2024-07-15 22:42:56.391525] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8da980 is same with the state(5) to be set 00:26:33.098 [2024-07-15 22:42:56.391740] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8da980 (9): Bad file descriptor 00:26:33.098 [2024-07-15 22:42:56.391903] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.098 [2024-07-15 22:42:56.391910] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.098 [2024-07-15 22:42:56.391917] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.098 [2024-07-15 22:42:56.394614] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:33.098 [2024-07-15 22:42:56.403800] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.098 [2024-07-15 22:42:56.404305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.098 [2024-07-15 22:42:56.404349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8da980 with addr=10.0.0.2, port=4420 00:26:33.098 [2024-07-15 22:42:56.404371] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8da980 is same with the state(5) to be set 00:26:33.098 [2024-07-15 22:42:56.404756] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8da980 (9): Bad file descriptor 00:26:33.098 [2024-07-15 22:42:56.404930] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.098 [2024-07-15 22:42:56.404937] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.098 [2024-07-15 22:42:56.404944] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.098 [2024-07-15 22:42:56.407626] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:33.098 [2024-07-15 22:42:56.416649] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.098 [2024-07-15 22:42:56.417124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.098 [2024-07-15 22:42:56.417169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8da980 with addr=10.0.0.2, port=4420 00:26:33.098 [2024-07-15 22:42:56.417190] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8da980 is same with the state(5) to be set 00:26:33.098 [2024-07-15 22:42:56.417811] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8da980 (9): Bad file descriptor 00:26:33.098 [2024-07-15 22:42:56.418030] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.098 [2024-07-15 22:42:56.418038] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.098 [2024-07-15 22:42:56.418048] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.098 [2024-07-15 22:42:56.420728] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:33.098 [2024-07-15 22:42:56.429636] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.098 [2024-07-15 22:42:56.430108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.098 [2024-07-15 22:42:56.430124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8da980 with addr=10.0.0.2, port=4420 00:26:33.098 [2024-07-15 22:42:56.430131] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8da980 is same with the state(5) to be set 00:26:33.098 [2024-07-15 22:42:56.430308] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8da980 (9): Bad file descriptor 00:26:33.098 [2024-07-15 22:42:56.430485] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.098 [2024-07-15 22:42:56.430493] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.098 [2024-07-15 22:42:56.430500] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.098 [2024-07-15 22:42:56.433252] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:33.098 [2024-07-15 22:42:56.442655] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.098 [2024-07-15 22:42:56.443078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.098 [2024-07-15 22:42:56.443095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8da980 with addr=10.0.0.2, port=4420 00:26:33.098 [2024-07-15 22:42:56.443102] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8da980 is same with the state(5) to be set 00:26:33.098 [2024-07-15 22:42:56.443279] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8da980 (9): Bad file descriptor 00:26:33.098 [2024-07-15 22:42:56.443453] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.098 [2024-07-15 22:42:56.443461] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.098 [2024-07-15 22:42:56.443468] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.098 [2024-07-15 22:42:56.446215] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:33.098 [2024-07-15 22:42:56.455597] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.098 [2024-07-15 22:42:56.456016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.098 [2024-07-15 22:42:56.456031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8da980 with addr=10.0.0.2, port=4420 00:26:33.098 [2024-07-15 22:42:56.456038] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8da980 is same with the state(5) to be set 00:26:33.098 [2024-07-15 22:42:56.456210] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8da980 (9): Bad file descriptor 00:26:33.098 [2024-07-15 22:42:56.456389] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.098 [2024-07-15 22:42:56.456397] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.098 [2024-07-15 22:42:56.456403] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.098 [2024-07-15 22:42:56.459126] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:33.098 [2024-07-15 22:42:56.468568] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.098 [2024-07-15 22:42:56.469006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.098 [2024-07-15 22:42:56.469055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8da980 with addr=10.0.0.2, port=4420 00:26:33.098 [2024-07-15 22:42:56.469077] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8da980 is same with the state(5) to be set 00:26:33.098 [2024-07-15 22:42:56.469643] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8da980 (9): Bad file descriptor 00:26:33.098 [2024-07-15 22:42:56.469815] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.098 [2024-07-15 22:42:56.469823] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.098 [2024-07-15 22:42:56.469829] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.098 [2024-07-15 22:42:56.472557] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:33.098 [2024-07-15 22:42:56.481371] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.098 [2024-07-15 22:42:56.481765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.098 [2024-07-15 22:42:56.481807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8da980 with addr=10.0.0.2, port=4420 00:26:33.098 [2024-07-15 22:42:56.481828] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8da980 is same with the state(5) to be set 00:26:33.098 [2024-07-15 22:42:56.482422] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8da980 (9): Bad file descriptor 00:26:33.098 [2024-07-15 22:42:56.482997] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.098 [2024-07-15 22:42:56.483009] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.098 [2024-07-15 22:42:56.483018] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.098 [2024-07-15 22:42:56.487074] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:33.099 [2024-07-15 22:42:56.494850] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.099 [2024-07-15 22:42:56.495326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.099 [2024-07-15 22:42:56.495342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8da980 with addr=10.0.0.2, port=4420 00:26:33.099 [2024-07-15 22:42:56.495349] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8da980 is same with the state(5) to be set 00:26:33.099 [2024-07-15 22:42:56.495520] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8da980 (9): Bad file descriptor 00:26:33.099 [2024-07-15 22:42:56.495692] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.099 [2024-07-15 22:42:56.495700] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.099 [2024-07-15 22:42:56.495706] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.099 [2024-07-15 22:42:56.498508] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:33.099 [2024-07-15 22:42:56.507775] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.099 [2024-07-15 22:42:56.508113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.099 [2024-07-15 22:42:56.508129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8da980 with addr=10.0.0.2, port=4420 00:26:33.099 [2024-07-15 22:42:56.508135] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8da980 is same with the state(5) to be set 00:26:33.099 [2024-07-15 22:42:56.508321] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8da980 (9): Bad file descriptor 00:26:33.099 [2024-07-15 22:42:56.508497] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.099 [2024-07-15 22:42:56.508505] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.099 [2024-07-15 22:42:56.508511] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.099 [2024-07-15 22:42:56.511189] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:33.099 [2024-07-15 22:42:56.520696] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.099 [2024-07-15 22:42:56.521202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.099 [2024-07-15 22:42:56.521257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8da980 with addr=10.0.0.2, port=4420 00:26:33.099 [2024-07-15 22:42:56.521280] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8da980 is same with the state(5) to be set 00:26:33.099 [2024-07-15 22:42:56.521826] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8da980 (9): Bad file descriptor 00:26:33.099 [2024-07-15 22:42:56.521998] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.099 [2024-07-15 22:42:56.522006] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.099 [2024-07-15 22:42:56.522012] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.099 [2024-07-15 22:42:56.524701] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:33.099 [2024-07-15 22:42:56.533631] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.099 [2024-07-15 22:42:56.534131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.099 [2024-07-15 22:42:56.534172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8da980 with addr=10.0.0.2, port=4420 00:26:33.099 [2024-07-15 22:42:56.534193] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8da980 is same with the state(5) to be set 00:26:33.099 [2024-07-15 22:42:56.534789] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8da980 (9): Bad file descriptor 00:26:33.099 [2024-07-15 22:42:56.535008] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.099 [2024-07-15 22:42:56.535016] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.099 [2024-07-15 22:42:56.535022] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.099 [2024-07-15 22:42:56.537703] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:33.099 [2024-07-15 22:42:56.546554] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.099 [2024-07-15 22:42:56.546971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.099 [2024-07-15 22:42:56.546986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8da980 with addr=10.0.0.2, port=4420 00:26:33.099 [2024-07-15 22:42:56.546993] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8da980 is same with the state(5) to be set 00:26:33.099 [2024-07-15 22:42:56.547164] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8da980 (9): Bad file descriptor 00:26:33.099 [2024-07-15 22:42:56.547341] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.099 [2024-07-15 22:42:56.547350] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.099 [2024-07-15 22:42:56.547357] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.099 [2024-07-15 22:42:56.550038] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:33.099 [2024-07-15 22:42:56.559443] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.099 [2024-07-15 22:42:56.559921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.099 [2024-07-15 22:42:56.559936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8da980 with addr=10.0.0.2, port=4420 00:26:33.099 [2024-07-15 22:42:56.559942] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8da980 is same with the state(5) to be set 00:26:33.099 [2024-07-15 22:42:56.560104] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8da980 (9): Bad file descriptor 00:26:33.099 [2024-07-15 22:42:56.560288] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.099 [2024-07-15 22:42:56.560296] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.099 [2024-07-15 22:42:56.560302] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.099 [2024-07-15 22:42:56.562981] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:33.099 [2024-07-15 22:42:56.572369] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.099 [2024-07-15 22:42:56.572765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.099 [2024-07-15 22:42:56.572781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8da980 with addr=10.0.0.2, port=4420 00:26:33.099 [2024-07-15 22:42:56.572787] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8da980 is same with the state(5) to be set 00:26:33.099 [2024-07-15 22:42:56.572949] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8da980 (9): Bad file descriptor 00:26:33.099 [2024-07-15 22:42:56.573111] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.099 [2024-07-15 22:42:56.573118] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.099 [2024-07-15 22:42:56.573123] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.099 [2024-07-15 22:42:56.575823] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:33.099 [2024-07-15 22:42:56.585210] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.099 [2024-07-15 22:42:56.585688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.099 [2024-07-15 22:42:56.585731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8da980 with addr=10.0.0.2, port=4420 00:26:33.099 [2024-07-15 22:42:56.585752] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8da980 is same with the state(5) to be set 00:26:33.099 [2024-07-15 22:42:56.586329] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8da980 (9): Bad file descriptor 00:26:33.099 [2024-07-15 22:42:56.586502] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.099 [2024-07-15 22:42:56.586510] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.099 [2024-07-15 22:42:56.586516] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.099 [2024-07-15 22:42:56.589192] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:33.099 [2024-07-15 22:42:56.598079] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.099 [2024-07-15 22:42:56.598566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.099 [2024-07-15 22:42:56.598582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8da980 with addr=10.0.0.2, port=4420 00:26:33.099 [2024-07-15 22:42:56.598592] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8da980 is same with the state(5) to be set 00:26:33.099 [2024-07-15 22:42:56.598769] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8da980 (9): Bad file descriptor 00:26:33.099 [2024-07-15 22:42:56.598946] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.099 [2024-07-15 22:42:56.598955] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.099 [2024-07-15 22:42:56.598961] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.099 [2024-07-15 22:42:56.601797] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:33.100 [2024-07-15 22:42:56.611170] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.100 [2024-07-15 22:42:56.611644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.100 [2024-07-15 22:42:56.611661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8da980 with addr=10.0.0.2, port=4420 00:26:33.100 [2024-07-15 22:42:56.611667] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8da980 is same with the state(5) to be set 00:26:33.100 [2024-07-15 22:42:56.611843] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8da980 (9): Bad file descriptor 00:26:33.100 [2024-07-15 22:42:56.612021] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.100 [2024-07-15 22:42:56.612029] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.100 [2024-07-15 22:42:56.612035] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.100 [2024-07-15 22:42:56.614870] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:33.100 [2024-07-15 22:42:56.624185] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.100 [2024-07-15 22:42:56.624701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.100 [2024-07-15 22:42:56.624745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8da980 with addr=10.0.0.2, port=4420 00:26:33.100 [2024-07-15 22:42:56.624766] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8da980 is same with the state(5) to be set 00:26:33.100 [2024-07-15 22:42:56.625295] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8da980 (9): Bad file descriptor 00:26:33.100 [2024-07-15 22:42:56.625467] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.100 [2024-07-15 22:42:56.625475] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.100 [2024-07-15 22:42:56.625480] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.100 [2024-07-15 22:42:56.628228] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
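The companion message in every cycle, "Failed to flush tqpair=0x8da980 (9): Bad file descriptor", carries errno 9 (EBADF): by the time the flush runs there is no usable socket behind the qpair, which fits a connect() that was refused and then closed. For reference, the two errno values that recur throughout this log, printed with strerror(3):

#include <stdio.h>
#include <string.h>

int main(void)
{
	int codes[] = { 9, 111 };	/* EBADF, ECONNREFUSED */

	for (unsigned int i = 0; i < sizeof(codes) / sizeof(codes[0]); i++) {
		printf("errno %3d: %s\n", codes[i], strerror(codes[i]));
	}
	return 0;
}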
00:26:33.100 [2024-07-15 22:42:56.637130] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.100 [2024-07-15 22:42:56.637519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.100 [2024-07-15 22:42:56.637535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8da980 with addr=10.0.0.2, port=4420 00:26:33.100 [2024-07-15 22:42:56.637541] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8da980 is same with the state(5) to be set 00:26:33.100 [2024-07-15 22:42:56.637713] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8da980 (9): Bad file descriptor 00:26:33.100 [2024-07-15 22:42:56.637885] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.100 [2024-07-15 22:42:56.637896] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.100 [2024-07-15 22:42:56.637902] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.100 [2024-07-15 22:42:56.640601] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:33.100 [2024-07-15 22:42:56.649990] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.100 [2024-07-15 22:42:56.650367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.100 [2024-07-15 22:42:56.650383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8da980 with addr=10.0.0.2, port=4420 00:26:33.100 [2024-07-15 22:42:56.650390] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8da980 is same with the state(5) to be set 00:26:33.100 [2024-07-15 22:42:56.650562] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8da980 (9): Bad file descriptor 00:26:33.100 [2024-07-15 22:42:56.650734] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.100 [2024-07-15 22:42:56.650741] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.100 [2024-07-15 22:42:56.650747] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.100 [2024-07-15 22:42:56.653429] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:33.100 [2024-07-15 22:42:56.662804] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.100 [2024-07-15 22:42:56.663252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.100 [2024-07-15 22:42:56.663267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8da980 with addr=10.0.0.2, port=4420 00:26:33.100 [2024-07-15 22:42:56.663273] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8da980 is same with the state(5) to be set 00:26:33.100 [2024-07-15 22:42:56.663436] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8da980 (9): Bad file descriptor 00:26:33.100 [2024-07-15 22:42:56.663598] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.100 [2024-07-15 22:42:56.663605] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.100 [2024-07-15 22:42:56.663611] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.100 [2024-07-15 22:42:56.666349] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:33.100 [2024-07-15 22:42:56.675741] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.100 [2024-07-15 22:42:56.676235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.100 [2024-07-15 22:42:56.676250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8da980 with addr=10.0.0.2, port=4420 00:26:33.100 [2024-07-15 22:42:56.676257] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8da980 is same with the state(5) to be set 00:26:33.100 [2024-07-15 22:42:56.676429] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8da980 (9): Bad file descriptor 00:26:33.100 [2024-07-15 22:42:56.676601] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.100 [2024-07-15 22:42:56.676609] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.100 [2024-07-15 22:42:56.676614] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.100 [2024-07-15 22:42:56.679331] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:33.100 [2024-07-15 22:42:56.688549] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.100 [2024-07-15 22:42:56.689016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.100 [2024-07-15 22:42:56.689056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8da980 with addr=10.0.0.2, port=4420 00:26:33.100 [2024-07-15 22:42:56.689077] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8da980 is same with the state(5) to be set 00:26:33.100 [2024-07-15 22:42:56.689600] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8da980 (9): Bad file descriptor 00:26:33.100 [2024-07-15 22:42:56.689773] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.100 [2024-07-15 22:42:56.689780] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.100 [2024-07-15 22:42:56.689787] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.100 [2024-07-15 22:42:56.692516] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:33.100 [2024-07-15 22:42:56.701380] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.100 [2024-07-15 22:42:56.701861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.100 [2024-07-15 22:42:56.701903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8da980 with addr=10.0.0.2, port=4420 00:26:33.100 [2024-07-15 22:42:56.701923] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8da980 is same with the state(5) to be set 00:26:33.100 [2024-07-15 22:42:56.702461] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8da980 (9): Bad file descriptor 00:26:33.100 [2024-07-15 22:42:56.702633] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.100 [2024-07-15 22:42:56.702641] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.100 [2024-07-15 22:42:56.702648] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.100 [2024-07-15 22:42:56.705330] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:33.100 [2024-07-15 22:42:56.714204] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.100 [2024-07-15 22:42:56.714686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.100 [2024-07-15 22:42:56.714728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8da980 with addr=10.0.0.2, port=4420 00:26:33.100 [2024-07-15 22:42:56.714749] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8da980 is same with the state(5) to be set 00:26:33.100 [2024-07-15 22:42:56.715327] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8da980 (9): Bad file descriptor 00:26:33.100 [2024-07-15 22:42:56.715500] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.100 [2024-07-15 22:42:56.715508] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.100 [2024-07-15 22:42:56.715514] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.100 [2024-07-15 22:42:56.718190] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:33.100 [2024-07-15 22:42:56.727073] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.100 [2024-07-15 22:42:56.727561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.100 [2024-07-15 22:42:56.727603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8da980 with addr=10.0.0.2, port=4420 00:26:33.100 [2024-07-15 22:42:56.727624] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8da980 is same with the state(5) to be set 00:26:33.100 [2024-07-15 22:42:56.728210] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8da980 (9): Bad file descriptor 00:26:33.100 [2024-07-15 22:42:56.728510] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.100 [2024-07-15 22:42:56.728518] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.100 [2024-07-15 22:42:56.728524] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.100 [2024-07-15 22:42:56.731267] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:33.100 [2024-07-15 22:42:56.739871] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.100 [2024-07-15 22:42:56.740360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.100 [2024-07-15 22:42:56.740375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8da980 with addr=10.0.0.2, port=4420 00:26:33.100 [2024-07-15 22:42:56.740382] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8da980 is same with the state(5) to be set 00:26:33.100 [2024-07-15 22:42:56.740554] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8da980 (9): Bad file descriptor 00:26:33.100 [2024-07-15 22:42:56.740726] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.100 [2024-07-15 22:42:56.740733] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.100 [2024-07-15 22:42:56.740739] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.100 [2024-07-15 22:42:56.743446] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:33.101 [2024-07-15 22:42:56.752787] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.101 [2024-07-15 22:42:56.753264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.101 [2024-07-15 22:42:56.753306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8da980 with addr=10.0.0.2, port=4420 00:26:33.101 [2024-07-15 22:42:56.753327] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8da980 is same with the state(5) to be set 00:26:33.101 [2024-07-15 22:42:56.753729] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8da980 (9): Bad file descriptor 00:26:33.101 [2024-07-15 22:42:56.753892] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.101 [2024-07-15 22:42:56.753899] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.101 [2024-07-15 22:42:56.753905] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.101 [2024-07-15 22:42:56.756601] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:33.101 [2024-07-15 22:42:56.765631] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.101 [2024-07-15 22:42:56.766100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.101 [2024-07-15 22:42:56.766115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8da980 with addr=10.0.0.2, port=4420 00:26:33.101 [2024-07-15 22:42:56.766121] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8da980 is same with the state(5) to be set 00:26:33.101 [2024-07-15 22:42:56.766307] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8da980 (9): Bad file descriptor 00:26:33.101 [2024-07-15 22:42:56.766479] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.101 [2024-07-15 22:42:56.766487] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.101 [2024-07-15 22:42:56.766496] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.101 [2024-07-15 22:42:56.769178] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:33.101 [2024-07-15 22:42:56.778424] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.101 [2024-07-15 22:42:56.778888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.101 [2024-07-15 22:42:56.778903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8da980 with addr=10.0.0.2, port=4420 00:26:33.101 [2024-07-15 22:42:56.778909] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8da980 is same with the state(5) to be set 00:26:33.101 [2024-07-15 22:42:56.779071] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8da980 (9): Bad file descriptor 00:26:33.101 [2024-07-15 22:42:56.779240] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.101 [2024-07-15 22:42:56.779248] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.101 [2024-07-15 22:42:56.779271] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.101 [2024-07-15 22:42:56.781950] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:33.101 [2024-07-15 22:42:56.791358] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.101 [2024-07-15 22:42:56.791809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.101 [2024-07-15 22:42:56.791850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8da980 with addr=10.0.0.2, port=4420 00:26:33.101 [2024-07-15 22:42:56.791871] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8da980 is same with the state(5) to be set 00:26:33.101 [2024-07-15 22:42:56.792419] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8da980 (9): Bad file descriptor 00:26:33.101 [2024-07-15 22:42:56.792591] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.101 [2024-07-15 22:42:56.792599] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.101 [2024-07-15 22:42:56.792605] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.101 [2024-07-15 22:42:56.795283] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:33.101 [2024-07-15 22:42:56.804289] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.101 [2024-07-15 22:42:56.804773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.101 [2024-07-15 22:42:56.804815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8da980 with addr=10.0.0.2, port=4420 00:26:33.101 [2024-07-15 22:42:56.804836] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8da980 is same with the state(5) to be set 00:26:33.101 [2024-07-15 22:42:56.805431] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8da980 (9): Bad file descriptor 00:26:33.101 [2024-07-15 22:42:56.805936] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.101 [2024-07-15 22:42:56.805943] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.101 [2024-07-15 22:42:56.805949] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.101 [2024-07-15 22:42:56.809784] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:33.101 [2024-07-15 22:42:56.817834] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.101 [2024-07-15 22:42:56.818318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.101 [2024-07-15 22:42:56.818368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8da980 with addr=10.0.0.2, port=4420 00:26:33.101 [2024-07-15 22:42:56.818389] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8da980 is same with the state(5) to be set 00:26:33.101 [2024-07-15 22:42:56.818872] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8da980 (9): Bad file descriptor 00:26:33.101 [2024-07-15 22:42:56.819039] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.101 [2024-07-15 22:42:56.819047] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.101 [2024-07-15 22:42:56.819053] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.101 [2024-07-15 22:42:56.821786] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:33.101 [2024-07-15 22:42:56.830643] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.101 [2024-07-15 22:42:56.831108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.101 [2024-07-15 22:42:56.831144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8da980 with addr=10.0.0.2, port=4420 00:26:33.101 [2024-07-15 22:42:56.831166] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8da980 is same with the state(5) to be set 00:26:33.101 [2024-07-15 22:42:56.831724] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8da980 (9): Bad file descriptor 00:26:33.101 [2024-07-15 22:42:56.831897] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.101 [2024-07-15 22:42:56.831905] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.101 [2024-07-15 22:42:56.831910] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.101 [2024-07-15 22:42:56.834623] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:33.101 [2024-07-15 22:42:56.843505] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.101 [2024-07-15 22:42:56.844005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.101 [2024-07-15 22:42:56.844046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8da980 with addr=10.0.0.2, port=4420 00:26:33.101 [2024-07-15 22:42:56.844067] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8da980 is same with the state(5) to be set 00:26:33.101 [2024-07-15 22:42:56.844513] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8da980 (9): Bad file descriptor 00:26:33.101 [2024-07-15 22:42:56.844686] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.101 [2024-07-15 22:42:56.844693] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.101 [2024-07-15 22:42:56.844700] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.101 [2024-07-15 22:42:56.847380] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:33.101 [2024-07-15 22:42:56.856368] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.101 [2024-07-15 22:42:56.856910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.101 [2024-07-15 22:42:56.856952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8da980 with addr=10.0.0.2, port=4420 00:26:33.101 [2024-07-15 22:42:56.856973] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8da980 is same with the state(5) to be set 00:26:33.101 [2024-07-15 22:42:56.857534] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8da980 (9): Bad file descriptor 00:26:33.101 [2024-07-15 22:42:56.857718] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.101 [2024-07-15 22:42:56.857726] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.101 [2024-07-15 22:42:56.857732] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.101 [2024-07-15 22:42:56.860567] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:33.101 [2024-07-15 22:42:56.869363] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.101 [2024-07-15 22:42:56.869843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.101 [2024-07-15 22:42:56.869860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8da980 with addr=10.0.0.2, port=4420 00:26:33.101 [2024-07-15 22:42:56.869867] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8da980 is same with the state(5) to be set 00:26:33.101 [2024-07-15 22:42:56.870044] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8da980 (9): Bad file descriptor 00:26:33.101 [2024-07-15 22:42:56.870221] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.101 [2024-07-15 22:42:56.870235] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.101 [2024-07-15 22:42:56.870241] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.101 [2024-07-15 22:42:56.873003] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:33.101 [2024-07-15 22:42:56.882186] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.101 [2024-07-15 22:42:56.882692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.101 [2024-07-15 22:42:56.882736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8da980 with addr=10.0.0.2, port=4420 00:26:33.101 [2024-07-15 22:42:56.882757] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8da980 is same with the state(5) to be set 00:26:33.101 [2024-07-15 22:42:56.883350] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8da980 (9): Bad file descriptor 00:26:33.101 [2024-07-15 22:42:56.883909] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.101 [2024-07-15 22:42:56.883918] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.101 [2024-07-15 22:42:56.883924] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.101 [2024-07-15 22:42:56.886605] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:33.102 [2024-07-15 22:42:56.894996] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.102 [2024-07-15 22:42:56.895504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.102 [2024-07-15 22:42:56.895545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8da980 with addr=10.0.0.2, port=4420 00:26:33.102 [2024-07-15 22:42:56.895565] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8da980 is same with the state(5) to be set 00:26:33.102 [2024-07-15 22:42:56.896060] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8da980 (9): Bad file descriptor 00:26:33.102 [2024-07-15 22:42:56.896223] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.102 [2024-07-15 22:42:56.896237] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.102 [2024-07-15 22:42:56.896242] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.102 [2024-07-15 22:42:56.898940] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:33.102 [2024-07-15 22:42:56.907830] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.102 [2024-07-15 22:42:56.908310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.102 [2024-07-15 22:42:56.908352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8da980 with addr=10.0.0.2, port=4420 00:26:33.102 [2024-07-15 22:42:56.908374] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8da980 is same with the state(5) to be set 00:26:33.102 [2024-07-15 22:42:56.908881] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8da980 (9): Bad file descriptor 00:26:33.102 [2024-07-15 22:42:56.909053] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.102 [2024-07-15 22:42:56.909061] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.102 [2024-07-15 22:42:56.909067] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.102 [2024-07-15 22:42:56.911750] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:33.102 [2024-07-15 22:42:56.920638] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.102 [2024-07-15 22:42:56.921049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.102 [2024-07-15 22:42:56.921064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8da980 with addr=10.0.0.2, port=4420 00:26:33.102 [2024-07-15 22:42:56.921070] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8da980 is same with the state(5) to be set 00:26:33.102 [2024-07-15 22:42:56.921238] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8da980 (9): Bad file descriptor 00:26:33.102 [2024-07-15 22:42:56.921427] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.102 [2024-07-15 22:42:56.921435] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.102 [2024-07-15 22:42:56.921441] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.102 [2024-07-15 22:42:56.924119] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:33.102 [2024-07-15 22:42:56.933486] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.102 [2024-07-15 22:42:56.933926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.102 [2024-07-15 22:42:56.933942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8da980 with addr=10.0.0.2, port=4420 00:26:33.102 [2024-07-15 22:42:56.933948] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8da980 is same with the state(5) to be set 00:26:33.102 [2024-07-15 22:42:56.934111] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8da980 (9): Bad file descriptor 00:26:33.102 [2024-07-15 22:42:56.934297] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.102 [2024-07-15 22:42:56.934305] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.102 [2024-07-15 22:42:56.934312] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.102 [2024-07-15 22:42:56.936992] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:33.102 [2024-07-15 22:42:56.946411] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.102 [2024-07-15 22:42:56.946877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.102 [2024-07-15 22:42:56.946892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8da980 with addr=10.0.0.2, port=4420 00:26:33.102 [2024-07-15 22:42:56.946901] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8da980 is same with the state(5) to be set 00:26:33.102 [2024-07-15 22:42:56.947063] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8da980 (9): Bad file descriptor 00:26:33.102 [2024-07-15 22:42:56.947231] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.102 [2024-07-15 22:42:56.947239] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.102 [2024-07-15 22:42:56.947245] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.102 [2024-07-15 22:42:56.949969] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:33.102 [2024-07-15 22:42:56.959198] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.102 [2024-07-15 22:42:56.959615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.102 [2024-07-15 22:42:56.959632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8da980 with addr=10.0.0.2, port=4420 00:26:33.102 [2024-07-15 22:42:56.959638] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8da980 is same with the state(5) to be set 00:26:33.102 [2024-07-15 22:42:56.959801] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8da980 (9): Bad file descriptor 00:26:33.102 [2024-07-15 22:42:56.959965] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.102 [2024-07-15 22:42:56.959972] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.102 [2024-07-15 22:42:56.959978] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.102 [2024-07-15 22:42:56.962672] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:33.102 [2024-07-15 22:42:56.972091] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.102 [2024-07-15 22:42:56.972597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.102 [2024-07-15 22:42:56.972640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8da980 with addr=10.0.0.2, port=4420 00:26:33.102 [2024-07-15 22:42:56.972662] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8da980 is same with the state(5) to be set 00:26:33.102 [2024-07-15 22:42:56.973089] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8da980 (9): Bad file descriptor 00:26:33.102 [2024-07-15 22:42:56.973265] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.102 [2024-07-15 22:42:56.973273] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.102 [2024-07-15 22:42:56.973279] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.102 [2024-07-15 22:42:56.975963] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:33.102 [2024-07-15 22:42:56.985016] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.102 [2024-07-15 22:42:56.985457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.102 [2024-07-15 22:42:56.985473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8da980 with addr=10.0.0.2, port=4420 00:26:33.102 [2024-07-15 22:42:56.985479] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8da980 is same with the state(5) to be set 00:26:33.102 [2024-07-15 22:42:56.985641] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8da980 (9): Bad file descriptor 00:26:33.102 [2024-07-15 22:42:56.985803] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.102 [2024-07-15 22:42:56.985813] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.102 [2024-07-15 22:42:56.985819] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.102 [2024-07-15 22:42:56.988514] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:33.102 [2024-07-15 22:42:56.997937] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.102 [2024-07-15 22:42:56.998399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.102 [2024-07-15 22:42:56.998416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8da980 with addr=10.0.0.2, port=4420 00:26:33.102 [2024-07-15 22:42:56.998422] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8da980 is same with the state(5) to be set 00:26:33.102 [2024-07-15 22:42:56.998594] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8da980 (9): Bad file descriptor 00:26:33.102 [2024-07-15 22:42:56.998765] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.102 [2024-07-15 22:42:56.998773] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.102 [2024-07-15 22:42:56.998779] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.102 [2024-07-15 22:42:57.001479] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:33.102 [2024-07-15 22:42:57.010814] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.102 [2024-07-15 22:42:57.011219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.102 [2024-07-15 22:42:57.011240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8da980 with addr=10.0.0.2, port=4420 00:26:33.102 [2024-07-15 22:42:57.011247] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8da980 is same with the state(5) to be set 00:26:33.102 [2024-07-15 22:42:57.011435] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8da980 (9): Bad file descriptor 00:26:33.102 [2024-07-15 22:42:57.011608] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.102 [2024-07-15 22:42:57.011617] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.102 [2024-07-15 22:42:57.011623] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.102 [2024-07-15 22:42:57.014339] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:33.102 [2024-07-15 22:42:57.023734] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.102 [2024-07-15 22:42:57.024213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.102 [2024-07-15 22:42:57.024269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8da980 with addr=10.0.0.2, port=4420 00:26:33.102 [2024-07-15 22:42:57.024291] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8da980 is same with the state(5) to be set 00:26:33.102 [2024-07-15 22:42:57.024870] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8da980 (9): Bad file descriptor 00:26:33.102 [2024-07-15 22:42:57.025327] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.102 [2024-07-15 22:42:57.025336] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.102 [2024-07-15 22:42:57.025342] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.102 [2024-07-15 22:42:57.028018] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:33.103 [2024-07-15 22:42:57.036528] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.103 [2024-07-15 22:42:57.036980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.103 [2024-07-15 22:42:57.037023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8da980 with addr=10.0.0.2, port=4420 00:26:33.103 [2024-07-15 22:42:57.037044] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8da980 is same with the state(5) to be set 00:26:33.103 [2024-07-15 22:42:57.037635] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8da980 (9): Bad file descriptor 00:26:33.103 [2024-07-15 22:42:57.038217] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.103 [2024-07-15 22:42:57.038258] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.103 [2024-07-15 22:42:57.038267] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.103 [2024-07-15 22:42:57.042324] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:33.103 [2024-07-15 22:42:57.050037] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.103 [2024-07-15 22:42:57.050434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.103 [2024-07-15 22:42:57.050450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8da980 with addr=10.0.0.2, port=4420 00:26:33.103 [2024-07-15 22:42:57.050456] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8da980 is same with the state(5) to be set 00:26:33.103 [2024-07-15 22:42:57.050628] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8da980 (9): Bad file descriptor 00:26:33.103 [2024-07-15 22:42:57.050800] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.103 [2024-07-15 22:42:57.050808] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.103 [2024-07-15 22:42:57.050814] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.103 [2024-07-15 22:42:57.053625] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:33.364 [2024-07-15 22:42:57.063000] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.364 [2024-07-15 22:42:57.063452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.364 [2024-07-15 22:42:57.063468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8da980 with addr=10.0.0.2, port=4420 00:26:33.364 [2024-07-15 22:42:57.063474] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8da980 is same with the state(5) to be set 00:26:33.364 [2024-07-15 22:42:57.063636] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8da980 (9): Bad file descriptor 00:26:33.364 [2024-07-15 22:42:57.063800] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.364 [2024-07-15 22:42:57.063807] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.364 [2024-07-15 22:42:57.063813] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.364 [2024-07-15 22:42:57.066572] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:33.364 [2024-07-15 22:42:57.075953] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.364 [2024-07-15 22:42:57.076419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.364 [2024-07-15 22:42:57.076462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8da980 with addr=10.0.0.2, port=4420 00:26:33.364 [2024-07-15 22:42:57.076483] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8da980 is same with the state(5) to be set 00:26:33.364 [2024-07-15 22:42:57.077068] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8da980 (9): Bad file descriptor 00:26:33.364 [2024-07-15 22:42:57.077504] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.364 [2024-07-15 22:42:57.077512] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.364 [2024-07-15 22:42:57.077518] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.364 [2024-07-15 22:42:57.080195] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:33.364 [2024-07-15 22:42:57.088772] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.364 [2024-07-15 22:42:57.089248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.364 [2024-07-15 22:42:57.089290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8da980 with addr=10.0.0.2, port=4420 00:26:33.364 [2024-07-15 22:42:57.089312] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8da980 is same with the state(5) to be set 00:26:33.364 [2024-07-15 22:42:57.089891] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8da980 (9): Bad file descriptor 00:26:33.364 [2024-07-15 22:42:57.090367] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.364 [2024-07-15 22:42:57.090375] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.364 [2024-07-15 22:42:57.090381] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.364 [2024-07-15 22:42:57.093056] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:33.364 [2024-07-15 22:42:57.101631] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.364 [2024-07-15 22:42:57.102115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.364 [2024-07-15 22:42:57.102157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8da980 with addr=10.0.0.2, port=4420 00:26:33.364 [2024-07-15 22:42:57.102178] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8da980 is same with the state(5) to be set 00:26:33.364 [2024-07-15 22:42:57.102772] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8da980 (9): Bad file descriptor 00:26:33.364 [2024-07-15 22:42:57.103368] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.364 [2024-07-15 22:42:57.103393] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.364 [2024-07-15 22:42:57.103421] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.364 [2024-07-15 22:42:57.106097] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:33.364 [2024-07-15 22:42:57.114462] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.364 [2024-07-15 22:42:57.114972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.364 [2024-07-15 22:42:57.115016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8da980 with addr=10.0.0.2, port=4420 00:26:33.364 [2024-07-15 22:42:57.115037] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8da980 is same with the state(5) to be set 00:26:33.364 [2024-07-15 22:42:57.115555] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8da980 (9): Bad file descriptor 00:26:33.364 [2024-07-15 22:42:57.115728] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.364 [2024-07-15 22:42:57.115737] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.364 [2024-07-15 22:42:57.115747] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.364 [2024-07-15 22:42:57.118601] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:33.364 [2024-07-15 22:42:57.127605] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.364 [2024-07-15 22:42:57.128102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.364 [2024-07-15 22:42:57.128118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8da980 with addr=10.0.0.2, port=4420 00:26:33.364 [2024-07-15 22:42:57.128124] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8da980 is same with the state(5) to be set 00:26:33.364 [2024-07-15 22:42:57.128306] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8da980 (9): Bad file descriptor 00:26:33.364 [2024-07-15 22:42:57.128493] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.364 [2024-07-15 22:42:57.128500] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.364 [2024-07-15 22:42:57.128506] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.364 [2024-07-15 22:42:57.131261] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:33.364 [2024-07-15 22:42:57.140714] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.364 [2024-07-15 22:42:57.141200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.364 [2024-07-15 22:42:57.141254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8da980 with addr=10.0.0.2, port=4420 00:26:33.364 [2024-07-15 22:42:57.141277] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8da980 is same with the state(5) to be set 00:26:33.364 [2024-07-15 22:42:57.141624] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8da980 (9): Bad file descriptor 00:26:33.364 [2024-07-15 22:42:57.141799] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.364 [2024-07-15 22:42:57.141808] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.364 [2024-07-15 22:42:57.141814] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.364 [2024-07-15 22:42:57.144599] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:33.364 [2024-07-15 22:42:57.153666] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.364 [2024-07-15 22:42:57.154132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.364 [2024-07-15 22:42:57.154148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8da980 with addr=10.0.0.2, port=4420 00:26:33.364 [2024-07-15 22:42:57.154155] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8da980 is same with the state(5) to be set 00:26:33.364 [2024-07-15 22:42:57.154331] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8da980 (9): Bad file descriptor 00:26:33.364 [2024-07-15 22:42:57.154504] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.364 [2024-07-15 22:42:57.154512] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.364 [2024-07-15 22:42:57.154518] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.365 [2024-07-15 22:42:57.157236] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:33.365 [2024-07-15 22:42:57.166768] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.365 [2024-07-15 22:42:57.167236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.365 [2024-07-15 22:42:57.167254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8da980 with addr=10.0.0.2, port=4420 00:26:33.365 [2024-07-15 22:42:57.167261] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8da980 is same with the state(5) to be set 00:26:33.365 [2024-07-15 22:42:57.167438] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8da980 (9): Bad file descriptor 00:26:33.365 [2024-07-15 22:42:57.167614] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.365 [2024-07-15 22:42:57.167622] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.365 [2024-07-15 22:42:57.167628] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.365 [2024-07-15 22:42:57.170474] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:33.365 [2024-07-15 22:42:57.179800] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.365 [2024-07-15 22:42:57.180279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.365 [2024-07-15 22:42:57.180295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8da980 with addr=10.0.0.2, port=4420 00:26:33.365 [2024-07-15 22:42:57.180302] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8da980 is same with the state(5) to be set 00:26:33.365 [2024-07-15 22:42:57.180473] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8da980 (9): Bad file descriptor 00:26:33.365 [2024-07-15 22:42:57.180646] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.365 [2024-07-15 22:42:57.180653] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.365 [2024-07-15 22:42:57.180659] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.365 [2024-07-15 22:42:57.183346] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:33.365 [2024-07-15 22:42:57.192728] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.365 [2024-07-15 22:42:57.193244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.365 [2024-07-15 22:42:57.193286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8da980 with addr=10.0.0.2, port=4420 00:26:33.365 [2024-07-15 22:42:57.193308] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8da980 is same with the state(5) to be set 00:26:33.365 [2024-07-15 22:42:57.193804] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8da980 (9): Bad file descriptor 00:26:33.365 [2024-07-15 22:42:57.193975] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.365 [2024-07-15 22:42:57.193983] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.365 [2024-07-15 22:42:57.193989] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.365 [2024-07-15 22:42:57.196670] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:33.365 [2024-07-15 22:42:57.205708] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.365 [2024-07-15 22:42:57.206141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.365 [2024-07-15 22:42:57.206182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8da980 with addr=10.0.0.2, port=4420 00:26:33.365 [2024-07-15 22:42:57.206203] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8da980 is same with the state(5) to be set 00:26:33.365 [2024-07-15 22:42:57.206719] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8da980 (9): Bad file descriptor 00:26:33.365 [2024-07-15 22:42:57.206896] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.365 [2024-07-15 22:42:57.206904] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.365 [2024-07-15 22:42:57.206910] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.365 [2024-07-15 22:42:57.209636] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:33.365 [2024-07-15 22:42:57.218652] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.365 [2024-07-15 22:42:57.219136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.365 [2024-07-15 22:42:57.219153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8da980 with addr=10.0.0.2, port=4420 00:26:33.365 [2024-07-15 22:42:57.219159] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8da980 is same with the state(5) to be set 00:26:33.365 [2024-07-15 22:42:57.219336] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8da980 (9): Bad file descriptor 00:26:33.365 [2024-07-15 22:42:57.219510] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.365 [2024-07-15 22:42:57.219518] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.365 [2024-07-15 22:42:57.219523] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.365 [2024-07-15 22:42:57.222207] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:33.365 [2024-07-15 22:42:57.231575] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.365 [2024-07-15 22:42:57.232090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.365 [2024-07-15 22:42:57.232106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8da980 with addr=10.0.0.2, port=4420 00:26:33.365 [2024-07-15 22:42:57.232112] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8da980 is same with the state(5) to be set 00:26:33.365 [2024-07-15 22:42:57.232290] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8da980 (9): Bad file descriptor 00:26:33.365 [2024-07-15 22:42:57.232462] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.365 [2024-07-15 22:42:57.232470] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.365 [2024-07-15 22:42:57.232476] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.365 [2024-07-15 22:42:57.235204] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:33.365 [2024-07-15 22:42:57.244527] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.365 [2024-07-15 22:42:57.244959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.365 [2024-07-15 22:42:57.244975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8da980 with addr=10.0.0.2, port=4420 00:26:33.365 [2024-07-15 22:42:57.244981] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8da980 is same with the state(5) to be set 00:26:33.365 [2024-07-15 22:42:57.245153] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8da980 (9): Bad file descriptor 00:26:33.365 [2024-07-15 22:42:57.245330] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.365 [2024-07-15 22:42:57.245339] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.365 [2024-07-15 22:42:57.245345] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.365 [2024-07-15 22:42:57.248032] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:33.365 [2024-07-15 22:42:57.257513] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.365 [2024-07-15 22:42:57.257912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.365 [2024-07-15 22:42:57.257928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8da980 with addr=10.0.0.2, port=4420 00:26:33.365 [2024-07-15 22:42:57.257935] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8da980 is same with the state(5) to be set 00:26:33.365 [2024-07-15 22:42:57.258106] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8da980 (9): Bad file descriptor 00:26:33.365 [2024-07-15 22:42:57.258285] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.365 [2024-07-15 22:42:57.258294] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.365 [2024-07-15 22:42:57.258300] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.365 [2024-07-15 22:42:57.261039] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:33.365 [2024-07-15 22:42:57.270440] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.365 [2024-07-15 22:42:57.270795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.365 [2024-07-15 22:42:57.270811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8da980 with addr=10.0.0.2, port=4420 00:26:33.365 [2024-07-15 22:42:57.270818] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8da980 is same with the state(5) to be set 00:26:33.365 [2024-07-15 22:42:57.270989] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8da980 (9): Bad file descriptor 00:26:33.365 [2024-07-15 22:42:57.271161] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.365 [2024-07-15 22:42:57.271169] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.365 [2024-07-15 22:42:57.271175] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.365 [2024-07-15 22:42:57.273918] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:33.365 [2024-07-15 22:42:57.283434] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.365 [2024-07-15 22:42:57.283838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.365 [2024-07-15 22:42:57.283880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8da980 with addr=10.0.0.2, port=4420 00:26:33.365 [2024-07-15 22:42:57.283900] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8da980 is same with the state(5) to be set 00:26:33.365 [2024-07-15 22:42:57.284491] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8da980 (9): Bad file descriptor 00:26:33.365 [2024-07-15 22:42:57.284986] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.365 [2024-07-15 22:42:57.284994] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.365 [2024-07-15 22:42:57.285000] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.365 [2024-07-15 22:42:57.287689] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:33.365 [2024-07-15 22:42:57.296270] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.365 [2024-07-15 22:42:57.296604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.365 [2024-07-15 22:42:57.296620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8da980 with addr=10.0.0.2, port=4420 00:26:33.365 [2024-07-15 22:42:57.296630] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8da980 is same with the state(5) to be set 00:26:33.365 [2024-07-15 22:42:57.296801] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8da980 (9): Bad file descriptor 00:26:33.365 [2024-07-15 22:42:57.296973] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.365 [2024-07-15 22:42:57.296982] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.365 [2024-07-15 22:42:57.296987] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.366 [2024-07-15 22:42:57.299686] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:33.366 [2024-07-15 22:42:57.309299] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.366 [2024-07-15 22:42:57.309762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.366 [2024-07-15 22:42:57.309778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8da980 with addr=10.0.0.2, port=4420 00:26:33.366 [2024-07-15 22:42:57.309784] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8da980 is same with the state(5) to be set 00:26:33.366 [2024-07-15 22:42:57.309955] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8da980 (9): Bad file descriptor 00:26:33.366 [2024-07-15 22:42:57.310128] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.366 [2024-07-15 22:42:57.310135] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.366 [2024-07-15 22:42:57.310141] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.366 [2024-07-15 22:42:57.312928] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:33.366 [2024-07-15 22:42:57.322360] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.366 [2024-07-15 22:42:57.322767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.366 [2024-07-15 22:42:57.322783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8da980 with addr=10.0.0.2, port=4420 00:26:33.366 [2024-07-15 22:42:57.322790] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8da980 is same with the state(5) to be set 00:26:33.366 [2024-07-15 22:42:57.322961] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8da980 (9): Bad file descriptor 00:26:33.366 [2024-07-15 22:42:57.323133] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.366 [2024-07-15 22:42:57.323141] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.366 [2024-07-15 22:42:57.323147] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.366 [2024-07-15 22:42:57.325890] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:33.626 [2024-07-15 22:42:57.335437] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.626 [2024-07-15 22:42:57.335865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.626 [2024-07-15 22:42:57.335882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8da980 with addr=10.0.0.2, port=4420 00:26:33.626 [2024-07-15 22:42:57.335889] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8da980 is same with the state(5) to be set 00:26:33.626 [2024-07-15 22:42:57.336061] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8da980 (9): Bad file descriptor 00:26:33.626 [2024-07-15 22:42:57.336239] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.626 [2024-07-15 22:42:57.336250] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.626 [2024-07-15 22:42:57.336257] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.626 [2024-07-15 22:42:57.339076] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:33.626 [2024-07-15 22:42:57.348329] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.626 [2024-07-15 22:42:57.348731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.626 [2024-07-15 22:42:57.348746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8da980 with addr=10.0.0.2, port=4420 00:26:33.626 [2024-07-15 22:42:57.348752] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8da980 is same with the state(5) to be set 00:26:33.626 [2024-07-15 22:42:57.348924] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8da980 (9): Bad file descriptor 00:26:33.626 [2024-07-15 22:42:57.349095] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.626 [2024-07-15 22:42:57.349103] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.626 [2024-07-15 22:42:57.349109] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.626 [2024-07-15 22:42:57.351858] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:33.626 [2024-07-15 22:42:57.361327] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.626 [2024-07-15 22:42:57.361803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.626 [2024-07-15 22:42:57.361818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8da980 with addr=10.0.0.2, port=4420 00:26:33.626 [2024-07-15 22:42:57.361825] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8da980 is same with the state(5) to be set 00:26:33.626 [2024-07-15 22:42:57.361996] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8da980 (9): Bad file descriptor 00:26:33.626 [2024-07-15 22:42:57.362168] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.626 [2024-07-15 22:42:57.362176] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.626 [2024-07-15 22:42:57.362182] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.626 [2024-07-15 22:42:57.364921] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:33.626 [2024-07-15 22:42:57.374491] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.626 [2024-07-15 22:42:57.374935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.626 [2024-07-15 22:42:57.374951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8da980 with addr=10.0.0.2, port=4420 00:26:33.626 [2024-07-15 22:42:57.374958] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8da980 is same with the state(5) to be set 00:26:33.626 [2024-07-15 22:42:57.375134] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8da980 (9): Bad file descriptor 00:26:33.626 [2024-07-15 22:42:57.375318] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.626 [2024-07-15 22:42:57.375326] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.626 [2024-07-15 22:42:57.375333] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.626 [2024-07-15 22:42:57.378159] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:33.626 [2024-07-15 22:42:57.387527] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.626 [2024-07-15 22:42:57.387861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.626 [2024-07-15 22:42:57.387877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8da980 with addr=10.0.0.2, port=4420 00:26:33.626 [2024-07-15 22:42:57.387883] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8da980 is same with the state(5) to be set 00:26:33.626 [2024-07-15 22:42:57.388054] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8da980 (9): Bad file descriptor 00:26:33.626 [2024-07-15 22:42:57.388233] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.626 [2024-07-15 22:42:57.388241] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.627 [2024-07-15 22:42:57.388247] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.627 [2024-07-15 22:42:57.391057] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:33.627 [2024-07-15 22:42:57.400513] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.627 [2024-07-15 22:42:57.401002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.627 [2024-07-15 22:42:57.401017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8da980 with addr=10.0.0.2, port=4420 00:26:33.627 [2024-07-15 22:42:57.401024] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8da980 is same with the state(5) to be set 00:26:33.627 [2024-07-15 22:42:57.401200] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8da980 (9): Bad file descriptor 00:26:33.627 [2024-07-15 22:42:57.401386] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.627 [2024-07-15 22:42:57.401394] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.627 [2024-07-15 22:42:57.401400] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.627 [2024-07-15 22:42:57.404117] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:33.627 [2024-07-15 22:42:57.413475] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.627 [2024-07-15 22:42:57.413894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.627 [2024-07-15 22:42:57.413910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8da980 with addr=10.0.0.2, port=4420 00:26:33.627 [2024-07-15 22:42:57.413917] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8da980 is same with the state(5) to be set 00:26:33.627 [2024-07-15 22:42:57.414089] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8da980 (9): Bad file descriptor 00:26:33.627 [2024-07-15 22:42:57.414265] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.627 [2024-07-15 22:42:57.414274] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.627 [2024-07-15 22:42:57.414282] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.627 [2024-07-15 22:42:57.416963] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:33.627 [2024-07-15 22:42:57.426303] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.627 [2024-07-15 22:42:57.426711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.627 [2024-07-15 22:42:57.426726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8da980 with addr=10.0.0.2, port=4420 00:26:33.627 [2024-07-15 22:42:57.426733] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8da980 is same with the state(5) to be set 00:26:33.627 [2024-07-15 22:42:57.426908] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8da980 (9): Bad file descriptor 00:26:33.627 [2024-07-15 22:42:57.427080] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.627 [2024-07-15 22:42:57.427088] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.627 [2024-07-15 22:42:57.427094] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.627 [2024-07-15 22:42:57.429789] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:33.627 [2024-07-15 22:42:57.439302] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.627 [2024-07-15 22:42:57.439700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.627 [2024-07-15 22:42:57.439717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8da980 with addr=10.0.0.2, port=4420 00:26:33.627 [2024-07-15 22:42:57.439724] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8da980 is same with the state(5) to be set 00:26:33.627 [2024-07-15 22:42:57.439897] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8da980 (9): Bad file descriptor 00:26:33.627 [2024-07-15 22:42:57.440070] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.627 [2024-07-15 22:42:57.440079] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.627 [2024-07-15 22:42:57.440085] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.627 [2024-07-15 22:42:57.442779] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:33.627 [2024-07-15 22:42:57.452191] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.627 [2024-07-15 22:42:57.452594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.627 [2024-07-15 22:42:57.452610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8da980 with addr=10.0.0.2, port=4420 00:26:33.627 [2024-07-15 22:42:57.452617] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8da980 is same with the state(5) to be set 00:26:33.627 [2024-07-15 22:42:57.452789] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8da980 (9): Bad file descriptor 00:26:33.627 [2024-07-15 22:42:57.452962] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.627 [2024-07-15 22:42:57.452970] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.627 [2024-07-15 22:42:57.452976] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.627 [2024-07-15 22:42:57.455714] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:33.627 [2024-07-15 22:42:57.465210] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:33.627 [2024-07-15 22:42:57.465565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:33.627 [2024-07-15 22:42:57.465581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8da980 with addr=10.0.0.2, port=4420
00:26:33.627 [2024-07-15 22:42:57.465588] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8da980 is same with the state(5) to be set
00:26:33.627 [2024-07-15 22:42:57.465760] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8da980 (9): Bad file descriptor
00:26:33.627 [2024-07-15 22:42:57.465934] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:33.627 [2024-07-15 22:42:57.465942] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:33.627 [2024-07-15 22:42:57.465953] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:33.627 [2024-07-15 22:42:57.468714] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:33.627 [2024-07-15 22:42:57.478310] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:33.627 [2024-07-15 22:42:57.478649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:33.627 [2024-07-15 22:42:57.478665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8da980 with addr=10.0.0.2, port=4420
00:26:33.627 [2024-07-15 22:42:57.478672] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8da980 is same with the state(5) to be set
00:26:33.627 [2024-07-15 22:42:57.478848] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8da980 (9): Bad file descriptor
00:26:33.627 [2024-07-15 22:42:57.479026] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:33.627 [2024-07-15 22:42:57.479034] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:33.627 [2024-07-15 22:42:57.479041] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:33.627 [2024-07-15 22:42:57.481850] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:33.627 [2024-07-15 22:42:57.491308] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:33.627 [2024-07-15 22:42:57.491694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:33.627 [2024-07-15 22:42:57.491710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8da980 with addr=10.0.0.2, port=4420
00:26:33.627 [2024-07-15 22:42:57.491717] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8da980 is same with the state(5) to be set
00:26:33.627 [2024-07-15 22:42:57.491889] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8da980 (9): Bad file descriptor
00:26:33.627 [2024-07-15 22:42:57.492061] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:33.627 [2024-07-15 22:42:57.492069] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:33.627 [2024-07-15 22:42:57.492075] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:33.627 [2024-07-15 22:42:57.494766] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:33.627 [2024-07-15 22:42:57.504353] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:33.627 [2024-07-15 22:42:57.504703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:33.627 [2024-07-15 22:42:57.504719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8da980 with addr=10.0.0.2, port=4420
00:26:33.627 [2024-07-15 22:42:57.504727] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8da980 is same with the state(5) to be set
00:26:33.627 [2024-07-15 22:42:57.504899] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8da980 (9): Bad file descriptor
00:26:33.627 [2024-07-15 22:42:57.505071] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:33.627 [2024-07-15 22:42:57.505079] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:33.627 [2024-07-15 22:42:57.505086] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:33.627 [2024-07-15 22:42:57.507839] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:33.627 [2024-07-15 22:42:57.517261] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:33.627 [2024-07-15 22:42:57.517609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:33.627 [2024-07-15 22:42:57.517664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8da980 with addr=10.0.0.2, port=4420
00:26:33.627 [2024-07-15 22:42:57.517685] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8da980 is same with the state(5) to be set
00:26:33.627 [2024-07-15 22:42:57.518213] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8da980 (9): Bad file descriptor
00:26:33.627 [2024-07-15 22:42:57.518394] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:33.627 [2024-07-15 22:42:57.518403] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:33.627 [2024-07-15 22:42:57.518409] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:33.627 [2024-07-15 22:42:57.521095] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:33.627 [2024-07-15 22:42:57.530189] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:33.627 [2024-07-15 22:42:57.530689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:33.627 [2024-07-15 22:42:57.530732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8da980 with addr=10.0.0.2, port=4420
00:26:33.627 [2024-07-15 22:42:57.530753] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8da980 is same with the state(5) to be set
00:26:33.627 [2024-07-15 22:42:57.531241] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8da980 (9): Bad file descriptor
00:26:33.627 [2024-07-15 22:42:57.531415] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:33.628 [2024-07-15 22:42:57.531423] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:33.628 [2024-07-15 22:42:57.531429] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:33.628 [2024-07-15 22:42:57.534151] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:33.628 [2024-07-15 22:42:57.543185] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:33.628 [2024-07-15 22:42:57.543621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:33.628 [2024-07-15 22:42:57.543663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8da980 with addr=10.0.0.2, port=4420
00:26:33.628 [2024-07-15 22:42:57.543684] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8da980 is same with the state(5) to be set
00:26:33.628 [2024-07-15 22:42:57.544275] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8da980 (9): Bad file descriptor
00:26:33.628 [2024-07-15 22:42:57.544737] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:33.628 [2024-07-15 22:42:57.544745] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:33.628 [2024-07-15 22:42:57.544752] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:33.628 [2024-07-15 22:42:57.547433] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:33.628 [2024-07-15 22:42:57.556032] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:33.628 [2024-07-15 22:42:57.556505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:33.628 [2024-07-15 22:42:57.556520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8da980 with addr=10.0.0.2, port=4420
00:26:33.628 [2024-07-15 22:42:57.556527] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8da980 is same with the state(5) to be set
00:26:33.628 [2024-07-15 22:42:57.556699] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8da980 (9): Bad file descriptor
00:26:33.628 [2024-07-15 22:42:57.556873] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:33.628 [2024-07-15 22:42:57.556881] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:33.628 [2024-07-15 22:42:57.556887] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:33.628 [2024-07-15 22:42:57.559575] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:33.628 [2024-07-15 22:42:57.568995] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:33.628 [2024-07-15 22:42:57.569451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:33.628 [2024-07-15 22:42:57.569467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8da980 with addr=10.0.0.2, port=4420
00:26:33.628 [2024-07-15 22:42:57.569474] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8da980 is same with the state(5) to be set
00:26:33.628 [2024-07-15 22:42:57.569645] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8da980 (9): Bad file descriptor
00:26:33.628 [2024-07-15 22:42:57.569817] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:33.628 [2024-07-15 22:42:57.569824] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:33.628 [2024-07-15 22:42:57.569830] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:33.628 [2024-07-15 22:42:57.572529] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:33.628 [2024-07-15 22:42:57.581836] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:33.628 [2024-07-15 22:42:57.582302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:33.628 [2024-07-15 22:42:57.582318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8da980 with addr=10.0.0.2, port=4420
00:26:33.628 [2024-07-15 22:42:57.582325] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8da980 is same with the state(5) to be set
00:26:33.628 [2024-07-15 22:42:57.582496] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8da980 (9): Bad file descriptor
00:26:33.628 [2024-07-15 22:42:57.582668] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:33.628 [2024-07-15 22:42:57.582675] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:33.628 [2024-07-15 22:42:57.582681] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:33.628 [2024-07-15 22:42:57.585389] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:33.628 [2024-07-15 22:42:57.594941] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.889 [2024-07-15 22:42:57.595351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.889 [2024-07-15 22:42:57.595368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8da980 with addr=10.0.0.2, port=4420 00:26:33.889 [2024-07-15 22:42:57.595386] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8da980 is same with the state(5) to be set 00:26:33.889 [2024-07-15 22:42:57.595558] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8da980 (9): Bad file descriptor 00:26:33.889 [2024-07-15 22:42:57.595731] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.889 [2024-07-15 22:42:57.595739] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.889 [2024-07-15 22:42:57.595745] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.889 [2024-07-15 22:42:57.598502] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:33.889 [2024-07-15 22:42:57.607872] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.889 [2024-07-15 22:42:57.608285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.889 [2024-07-15 22:42:57.608301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8da980 with addr=10.0.0.2, port=4420 00:26:33.889 [2024-07-15 22:42:57.608307] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8da980 is same with the state(5) to be set 00:26:33.889 [2024-07-15 22:42:57.608469] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8da980 (9): Bad file descriptor 00:26:33.889 [2024-07-15 22:42:57.608631] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.889 [2024-07-15 22:42:57.608639] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.889 [2024-07-15 22:42:57.608644] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.889 [2024-07-15 22:42:57.611335] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:33.889 [2024-07-15 22:42:57.620722] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:33.889 [2024-07-15 22:42:57.621121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:33.889 [2024-07-15 22:42:57.621137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8da980 with addr=10.0.0.2, port=4420
00:26:33.889 [2024-07-15 22:42:57.621144] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8da980 is same with the state(5) to be set
00:26:33.889 [2024-07-15 22:42:57.621326] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8da980 (9): Bad file descriptor
00:26:33.889 [2024-07-15 22:42:57.621504] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:33.889 [2024-07-15 22:42:57.621512] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:33.889 [2024-07-15 22:42:57.621518] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:33.889 [2024-07-15 22:42:57.624351] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:33.889 [2024-07-15 22:42:57.633821] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:33.889 [2024-07-15 22:42:57.634308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:33.889 [2024-07-15 22:42:57.634350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8da980 with addr=10.0.0.2, port=4420
00:26:33.889 [2024-07-15 22:42:57.634371] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8da980 is same with the state(5) to be set
00:26:33.889 [2024-07-15 22:42:57.634808] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8da980 (9): Bad file descriptor
00:26:33.889 [2024-07-15 22:42:57.634980] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:33.889 [2024-07-15 22:42:57.634988] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:33.889 [2024-07-15 22:42:57.634994] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:33.889 [2024-07-15 22:42:57.637792] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:33.889 [2024-07-15 22:42:57.646904] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:33.889 [2024-07-15 22:42:57.647363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:33.889 [2024-07-15 22:42:57.647405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8da980 with addr=10.0.0.2, port=4420
00:26:33.889 [2024-07-15 22:42:57.647433] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8da980 is same with the state(5) to be set
00:26:33.889 [2024-07-15 22:42:57.647783] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8da980 (9): Bad file descriptor
00:26:33.889 [2024-07-15 22:42:57.647955] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:33.889 [2024-07-15 22:42:57.647963] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:33.889 [2024-07-15 22:42:57.647969] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:33.889 [2024-07-15 22:42:57.650772] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:33.889 [2024-07-15 22:42:57.659838] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:33.889 [2024-07-15 22:42:57.660316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:33.889 [2024-07-15 22:42:57.660358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8da980 with addr=10.0.0.2, port=4420
00:26:33.889 [2024-07-15 22:42:57.660379] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8da980 is same with the state(5) to be set
00:26:33.889 [2024-07-15 22:42:57.660817] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8da980 (9): Bad file descriptor
00:26:33.889 [2024-07-15 22:42:57.660989] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:33.889 [2024-07-15 22:42:57.660997] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:33.889 [2024-07-15 22:42:57.661003] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:33.889 [2024-07-15 22:42:57.663690] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:33.889 [2024-07-15 22:42:57.672623] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:33.889 [2024-07-15 22:42:57.673084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:33.889 [2024-07-15 22:42:57.673126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8da980 with addr=10.0.0.2, port=4420
00:26:33.889 [2024-07-15 22:42:57.673147] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8da980 is same with the state(5) to be set
00:26:33.889 [2024-07-15 22:42:57.673692] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8da980 (9): Bad file descriptor
00:26:33.889 [2024-07-15 22:42:57.673865] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:33.889 [2024-07-15 22:42:57.673873] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:33.889 [2024-07-15 22:42:57.673879] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:33.889 [2024-07-15 22:42:57.676562] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:33.889 [2024-07-15 22:42:57.685416] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:33.889 [2024-07-15 22:42:57.685880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:33.889 [2024-07-15 22:42:57.685896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8da980 with addr=10.0.0.2, port=4420
00:26:33.889 [2024-07-15 22:42:57.685903] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8da980 is same with the state(5) to be set
00:26:33.889 [2024-07-15 22:42:57.686074] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8da980 (9): Bad file descriptor
00:26:33.889 [2024-07-15 22:42:57.686253] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:33.889 [2024-07-15 22:42:57.686264] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:33.889 [2024-07-15 22:42:57.686270] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:33.889 [2024-07-15 22:42:57.688947] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:33.889 [2024-07-15 22:42:57.698390] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:33.889 [2024-07-15 22:42:57.698852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:33.889 [2024-07-15 22:42:57.698867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8da980 with addr=10.0.0.2, port=4420
00:26:33.889 [2024-07-15 22:42:57.698874] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8da980 is same with the state(5) to be set
00:26:33.889 [2024-07-15 22:42:57.699045] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8da980 (9): Bad file descriptor
00:26:33.889 [2024-07-15 22:42:57.699216] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:33.889 [2024-07-15 22:42:57.699230] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:33.889 [2024-07-15 22:42:57.699238] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:33.889 [2024-07-15 22:42:57.701913] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:33.889 [2024-07-15 22:42:57.711282] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:33.889 [2024-07-15 22:42:57.711675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:33.889 [2024-07-15 22:42:57.711691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8da980 with addr=10.0.0.2, port=4420
00:26:33.889 [2024-07-15 22:42:57.711698] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8da980 is same with the state(5) to be set
00:26:33.889 [2024-07-15 22:42:57.711870] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8da980 (9): Bad file descriptor
00:26:33.889 [2024-07-15 22:42:57.712042] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:33.889 [2024-07-15 22:42:57.712050] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:33.889 [2024-07-15 22:42:57.712056] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:33.889 [2024-07-15 22:42:57.714746] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:33.889 [2024-07-15 22:42:57.724126] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:33.889 [2024-07-15 22:42:57.724588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:33.889 [2024-07-15 22:42:57.724604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8da980 with addr=10.0.0.2, port=4420
00:26:33.889 [2024-07-15 22:42:57.724611] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8da980 is same with the state(5) to be set
00:26:33.889 [2024-07-15 22:42:57.724782] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8da980 (9): Bad file descriptor
00:26:33.889 [2024-07-15 22:42:57.724954] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:33.889 [2024-07-15 22:42:57.724962] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:33.890 [2024-07-15 22:42:57.724968] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:33.890 [2024-07-15 22:42:57.727655] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:33.890 [2024-07-15 22:42:57.737030] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:33.890 [2024-07-15 22:42:57.737499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:33.890 [2024-07-15 22:42:57.737515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8da980 with addr=10.0.0.2, port=4420
00:26:33.890 [2024-07-15 22:42:57.737521] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8da980 is same with the state(5) to be set
00:26:33.890 [2024-07-15 22:42:57.737693] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8da980 (9): Bad file descriptor
00:26:33.890 [2024-07-15 22:42:57.737865] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:33.890 [2024-07-15 22:42:57.737872] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:33.890 [2024-07-15 22:42:57.737878] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:33.890 [2024-07-15 22:42:57.740577] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:33.890 [2024-07-15 22:42:57.749958] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:33.890 [2024-07-15 22:42:57.750431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:33.890 [2024-07-15 22:42:57.750473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8da980 with addr=10.0.0.2, port=4420
00:26:33.890 [2024-07-15 22:42:57.750493] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8da980 is same with the state(5) to be set
00:26:33.890 [2024-07-15 22:42:57.751004] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8da980 (9): Bad file descriptor
00:26:33.890 [2024-07-15 22:42:57.751176] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:33.890 [2024-07-15 22:42:57.751184] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:33.890 [2024-07-15 22:42:57.751190] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:33.890 [2024-07-15 22:42:57.753872] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:33.890 [2024-07-15 22:42:57.762807] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:33.890 [2024-07-15 22:42:57.763289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:33.890 [2024-07-15 22:42:57.763332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8da980 with addr=10.0.0.2, port=4420
00:26:33.890 [2024-07-15 22:42:57.763353] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8da980 is same with the state(5) to be set
00:26:33.890 [2024-07-15 22:42:57.763882] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8da980 (9): Bad file descriptor
00:26:33.890 [2024-07-15 22:42:57.764054] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:33.890 [2024-07-15 22:42:57.764062] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:33.890 [2024-07-15 22:42:57.764068] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:33.890 [2024-07-15 22:42:57.766809] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:33.890 [2024-07-15 22:42:57.775738] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:33.890 [2024-07-15 22:42:57.776181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:33.890 [2024-07-15 22:42:57.776196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8da980 with addr=10.0.0.2, port=4420
00:26:33.890 [2024-07-15 22:42:57.776202] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8da980 is same with the state(5) to be set
00:26:33.890 [2024-07-15 22:42:57.776397] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8da980 (9): Bad file descriptor
00:26:33.890 [2024-07-15 22:42:57.776570] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:33.890 [2024-07-15 22:42:57.776577] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:33.890 [2024-07-15 22:42:57.776583] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:33.890 [2024-07-15 22:42:57.779264] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:33.890 [2024-07-15 22:42:57.788598] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:33.890 [2024-07-15 22:42:57.789047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:33.890 [2024-07-15 22:42:57.789091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8da980 with addr=10.0.0.2, port=4420
00:26:33.890 [2024-07-15 22:42:57.789112] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8da980 is same with the state(5) to be set
00:26:33.890 [2024-07-15 22:42:57.789705] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8da980 (9): Bad file descriptor
00:26:33.890 [2024-07-15 22:42:57.790199] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:33.890 [2024-07-15 22:42:57.790207] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:33.890 [2024-07-15 22:42:57.790213] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:33.890 [2024-07-15 22:42:57.792942] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:33.890 [2024-07-15 22:42:57.801584] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:33.890 [2024-07-15 22:42:57.802051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:33.890 [2024-07-15 22:42:57.802067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8da980 with addr=10.0.0.2, port=4420
00:26:33.890 [2024-07-15 22:42:57.802073] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8da980 is same with the state(5) to be set
00:26:33.890 [2024-07-15 22:42:57.802252] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8da980 (9): Bad file descriptor
00:26:33.890 [2024-07-15 22:42:57.802425] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:33.890 [2024-07-15 22:42:57.802433] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:33.890 [2024-07-15 22:42:57.802438] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:33.890 [2024-07-15 22:42:57.805113] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:33.890 [2024-07-15 22:42:57.814465] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:33.890 [2024-07-15 22:42:57.814952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:33.890 [2024-07-15 22:42:57.814994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8da980 with addr=10.0.0.2, port=4420
00:26:33.890 [2024-07-15 22:42:57.815016] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8da980 is same with the state(5) to be set
00:26:33.890 [2024-07-15 22:42:57.815608] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8da980 (9): Bad file descriptor
00:26:33.890 [2024-07-15 22:42:57.816050] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:33.890 [2024-07-15 22:42:57.816057] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:33.890 [2024-07-15 22:42:57.816067] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:33.890 [2024-07-15 22:42:57.820097] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:33.890 [2024-07-15 22:42:57.828094] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:33.890 [2024-07-15 22:42:57.828486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:33.890 [2024-07-15 22:42:57.828502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8da980 with addr=10.0.0.2, port=4420
00:26:33.890 [2024-07-15 22:42:57.828508] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8da980 is same with the state(5) to be set
00:26:33.890 [2024-07-15 22:42:57.828680] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8da980 (9): Bad file descriptor
00:26:33.890 [2024-07-15 22:42:57.828852] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:33.890 [2024-07-15 22:42:57.828859] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:33.890 [2024-07-15 22:42:57.828865] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:33.890 [2024-07-15 22:42:57.831597] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:33.890 [2024-07-15 22:42:57.840946] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:33.891 [2024-07-15 22:42:57.841418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:33.891 [2024-07-15 22:42:57.841461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8da980 with addr=10.0.0.2, port=4420
00:26:33.891 [2024-07-15 22:42:57.841483] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8da980 is same with the state(5) to be set
00:26:33.891 [2024-07-15 22:42:57.842062] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8da980 (9): Bad file descriptor
00:26:33.891 [2024-07-15 22:42:57.842435] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:33.891 [2024-07-15 22:42:57.842443] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:33.891 [2024-07-15 22:42:57.842449] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:33.891 [2024-07-15 22:42:57.845125] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:33.891 [2024-07-15 22:42:57.853934] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:33.891 [2024-07-15 22:42:57.854359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:33.891 [2024-07-15 22:42:57.854392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8da980 with addr=10.0.0.2, port=4420
00:26:33.891 [2024-07-15 22:42:57.854414] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8da980 is same with the state(5) to be set
00:26:33.891 [2024-07-15 22:42:57.855003] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8da980 (9): Bad file descriptor
00:26:33.891 [2024-07-15 22:42:57.855180] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:33.891 [2024-07-15 22:42:57.855188] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:33.891 [2024-07-15 22:42:57.855195] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:34.152 [2024-07-15 22:42:57.857992] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:34.152 [2024-07-15 22:42:57.866941] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:34.152 [2024-07-15 22:42:57.867405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.152 [2024-07-15 22:42:57.867446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8da980 with addr=10.0.0.2, port=4420
00:26:34.152 [2024-07-15 22:42:57.867467] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8da980 is same with the state(5) to be set
00:26:34.152 [2024-07-15 22:42:57.868046] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8da980 (9): Bad file descriptor
00:26:34.152 [2024-07-15 22:42:57.868274] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:34.152 [2024-07-15 22:42:57.868282] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:34.152 [2024-07-15 22:42:57.868288] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:34.152 [2024-07-15 22:42:57.870970] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:34.152 [2024-07-15 22:42:57.880003] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:34.152 [2024-07-15 22:42:57.880480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.152 [2024-07-15 22:42:57.880496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8da980 with addr=10.0.0.2, port=4420
00:26:34.152 [2024-07-15 22:42:57.880503] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8da980 is same with the state(5) to be set
00:26:34.152 [2024-07-15 22:42:57.880680] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8da980 (9): Bad file descriptor
00:26:34.152 [2024-07-15 22:42:57.880857] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:34.152 [2024-07-15 22:42:57.880865] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:34.152 [2024-07-15 22:42:57.880871] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:34.152 [2024-07-15 22:42:57.883668] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:34.152 [2024-07-15 22:42:57.892933] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:34.152 [2024-07-15 22:42:57.893385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.152 [2024-07-15 22:42:57.893440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8da980 with addr=10.0.0.2, port=4420
00:26:34.152 [2024-07-15 22:42:57.893462] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8da980 is same with the state(5) to be set
00:26:34.152 [2024-07-15 22:42:57.894042] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8da980 (9): Bad file descriptor
00:26:34.152 [2024-07-15 22:42:57.894282] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:34.152 [2024-07-15 22:42:57.894290] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:34.152 [2024-07-15 22:42:57.894296] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:34.152 [2024-07-15 22:42:57.897038] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:34.152 [2024-07-15 22:42:57.906042] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:34.152 [2024-07-15 22:42:57.906499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.152 [2024-07-15 22:42:57.906514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8da980 with addr=10.0.0.2, port=4420
00:26:34.152 [2024-07-15 22:42:57.906521] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8da980 is same with the state(5) to be set
00:26:34.152 [2024-07-15 22:42:57.906692] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8da980 (9): Bad file descriptor
00:26:34.152 [2024-07-15 22:42:57.906869] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:34.152 [2024-07-15 22:42:57.906878] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:34.152 [2024-07-15 22:42:57.906884] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:34.152 [2024-07-15 22:42:57.909635] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:34.152 [2024-07-15 22:42:57.918887] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:34.152 [2024-07-15 22:42:57.919330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.152 [2024-07-15 22:42:57.919346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8da980 with addr=10.0.0.2, port=4420
00:26:34.152 [2024-07-15 22:42:57.919352] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8da980 is same with the state(5) to be set
00:26:34.152 [2024-07-15 22:42:57.919515] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8da980 (9): Bad file descriptor
00:26:34.152 [2024-07-15 22:42:57.919677] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:34.152 [2024-07-15 22:42:57.919685] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:34.152 [2024-07-15 22:42:57.919691] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:34.152 [2024-07-15 22:42:57.922383] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:34.152 [2024-07-15 22:42:57.931759] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:34.152 [2024-07-15 22:42:57.932243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.152 [2024-07-15 22:42:57.932285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8da980 with addr=10.0.0.2, port=4420
00:26:34.152 [2024-07-15 22:42:57.932306] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8da980 is same with the state(5) to be set
00:26:34.152 [2024-07-15 22:42:57.932701] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8da980 (9): Bad file descriptor
00:26:34.152 [2024-07-15 22:42:57.932872] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:34.152 [2024-07-15 22:42:57.932880] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:34.152 [2024-07-15 22:42:57.932886] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:34.152 [2024-07-15 22:42:57.935579] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:34.152 [2024-07-15 22:42:57.944614] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:34.152 [2024-07-15 22:42:57.945059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.152 [2024-07-15 22:42:57.945100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8da980 with addr=10.0.0.2, port=4420
00:26:34.153 [2024-07-15 22:42:57.945122] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8da980 is same with the state(5) to be set
00:26:34.153 [2024-07-15 22:42:57.945570] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8da980 (9): Bad file descriptor
00:26:34.153 [2024-07-15 22:42:57.945744] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:34.153 [2024-07-15 22:42:57.945751] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:34.153 [2024-07-15 22:42:57.945757] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:34.153 [2024-07-15 22:42:57.948437] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:34.153 [2024-07-15 22:42:57.957618] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:34.153 [2024-07-15 22:42:57.958066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.153 [2024-07-15 22:42:57.958082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8da980 with addr=10.0.0.2, port=4420
00:26:34.153 [2024-07-15 22:42:57.958089] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8da980 is same with the state(5) to be set
00:26:34.153 [2024-07-15 22:42:57.958274] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8da980 (9): Bad file descriptor
00:26:34.153 [2024-07-15 22:42:57.958447] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:34.153 [2024-07-15 22:42:57.958455] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:34.153 [2024-07-15 22:42:57.958461] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:34.153 [2024-07-15 22:42:57.961141] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:34.153 [2024-07-15 22:42:57.970435] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:34.153 [2024-07-15 22:42:57.970879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.153 [2024-07-15 22:42:57.970895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8da980 with addr=10.0.0.2, port=4420
00:26:34.153 [2024-07-15 22:42:57.970902] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8da980 is same with the state(5) to be set
00:26:34.153 [2024-07-15 22:42:57.971074] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8da980 (9): Bad file descriptor
00:26:34.153 [2024-07-15 22:42:57.971252] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:34.153 [2024-07-15 22:42:57.971260] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:34.153 [2024-07-15 22:42:57.971267] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:34.153 [2024-07-15 22:42:57.973944] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:34.153 [2024-07-15 22:42:57.983299] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:34.153 [2024-07-15 22:42:57.983763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.153 [2024-07-15 22:42:57.983779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8da980 with addr=10.0.0.2, port=4420
00:26:34.153 [2024-07-15 22:42:57.983785] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8da980 is same with the state(5) to be set
00:26:34.153 [2024-07-15 22:42:57.983958] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8da980 (9): Bad file descriptor
00:26:34.153 [2024-07-15 22:42:57.984129] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:34.153 [2024-07-15 22:42:57.984137] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:34.153 [2024-07-15 22:42:57.984143] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:34.153 [2024-07-15 22:42:57.986826] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:34.153 [2024-07-15 22:42:57.996223] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:34.153 [2024-07-15 22:42:57.996644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.153 [2024-07-15 22:42:57.996660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8da980 with addr=10.0.0.2, port=4420
00:26:34.153 [2024-07-15 22:42:57.996670] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8da980 is same with the state(5) to be set
00:26:34.153 [2024-07-15 22:42:57.996842] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8da980 (9): Bad file descriptor
00:26:34.153 [2024-07-15 22:42:57.997014] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:34.153 [2024-07-15 22:42:57.997022] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:34.153 [2024-07-15 22:42:57.997029] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:34.153 [2024-07-15 22:42:57.999776] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:34.153 [2024-07-15 22:42:58.009177] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:34.153 [2024-07-15 22:42:58.009669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.153 [2024-07-15 22:42:58.009711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8da980 with addr=10.0.0.2, port=4420
00:26:34.153 [2024-07-15 22:42:58.009732] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8da980 is same with the state(5) to be set
00:26:34.153 [2024-07-15 22:42:58.010323] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8da980 (9): Bad file descriptor
00:26:34.153 [2024-07-15 22:42:58.010801] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:34.153 [2024-07-15 22:42:58.010809] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:34.153 [2024-07-15 22:42:58.010815] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:34.153 [2024-07-15 22:42:58.013499] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:34.153 [2024-07-15 22:42:58.022092] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:34.153 [2024-07-15 22:42:58.022499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.153 [2024-07-15 22:42:58.022516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8da980 with addr=10.0.0.2, port=4420
00:26:34.153 [2024-07-15 22:42:58.022523] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8da980 is same with the state(5) to be set
00:26:34.153 [2024-07-15 22:42:58.022695] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8da980 (9): Bad file descriptor
00:26:34.153 [2024-07-15 22:42:58.022867] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:34.153 [2024-07-15 22:42:58.022875] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:34.153 [2024-07-15 22:42:58.022881] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:34.153 [2024-07-15 22:42:58.025663] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:34.153 [2024-07-15 22:42:58.035009] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:34.153 [2024-07-15 22:42:58.035480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.153 [2024-07-15 22:42:58.035523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8da980 with addr=10.0.0.2, port=4420 00:26:34.153 [2024-07-15 22:42:58.035543] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8da980 is same with the state(5) to be set 00:26:34.153 [2024-07-15 22:42:58.036123] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8da980 (9): Bad file descriptor 00:26:34.153 [2024-07-15 22:42:58.036714] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:34.153 [2024-07-15 22:42:58.036747] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:34.153 [2024-07-15 22:42:58.036767] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:34.153 [2024-07-15 22:42:58.039447] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:34.153 [2024-07-15 22:42:58.047962] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:34.153 [2024-07-15 22:42:58.048450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.153 [2024-07-15 22:42:58.048466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8da980 with addr=10.0.0.2, port=4420 00:26:34.153 [2024-07-15 22:42:58.048473] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8da980 is same with the state(5) to be set 00:26:34.153 [2024-07-15 22:42:58.048645] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8da980 (9): Bad file descriptor 00:26:34.153 [2024-07-15 22:42:58.048817] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:34.153 [2024-07-15 22:42:58.048825] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:34.153 [2024-07-15 22:42:58.048831] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:34.153 [2024-07-15 22:42:58.051532] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:34.153 [2024-07-15 22:42:58.060767] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:34.153 [2024-07-15 22:42:58.061236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.153 [2024-07-15 22:42:58.061279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8da980 with addr=10.0.0.2, port=4420 00:26:34.153 [2024-07-15 22:42:58.061300] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8da980 is same with the state(5) to be set 00:26:34.153 [2024-07-15 22:42:58.061779] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8da980 (9): Bad file descriptor 00:26:34.153 [2024-07-15 22:42:58.061952] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:34.153 [2024-07-15 22:42:58.061959] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:34.153 [2024-07-15 22:42:58.061965] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:34.153 [2024-07-15 22:42:58.064711] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:34.153 [2024-07-15 22:42:58.073669] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:34.153 [2024-07-15 22:42:58.074159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.153 [2024-07-15 22:42:58.074201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8da980 with addr=10.0.0.2, port=4420 00:26:34.153 [2024-07-15 22:42:58.074222] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8da980 is same with the state(5) to be set 00:26:34.153 [2024-07-15 22:42:58.074742] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8da980 (9): Bad file descriptor 00:26:34.153 [2024-07-15 22:42:58.074926] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:34.153 [2024-07-15 22:42:58.074934] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:34.153 [2024-07-15 22:42:58.074941] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:34.153 [2024-07-15 22:42:58.077657] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:34.154 [2024-07-15 22:42:58.086485] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:34.154 [2024-07-15 22:42:58.086936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.154 [2024-07-15 22:42:58.086989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8da980 with addr=10.0.0.2, port=4420 00:26:34.154 [2024-07-15 22:42:58.087011] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8da980 is same with the state(5) to be set 00:26:34.154 [2024-07-15 22:42:58.087608] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8da980 (9): Bad file descriptor 00:26:34.154 [2024-07-15 22:42:58.087781] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:34.154 [2024-07-15 22:42:58.087789] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:34.154 [2024-07-15 22:42:58.087795] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:34.154 [2024-07-15 22:42:58.090477] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:34.154 [2024-07-15 22:42:58.099340] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:34.154 [2024-07-15 22:42:58.099836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.154 [2024-07-15 22:42:58.099850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8da980 with addr=10.0.0.2, port=4420 00:26:34.154 [2024-07-15 22:42:58.099857] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8da980 is same with the state(5) to be set 00:26:34.154 [2024-07-15 22:42:58.100019] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8da980 (9): Bad file descriptor 00:26:34.154 [2024-07-15 22:42:58.100182] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:34.154 [2024-07-15 22:42:58.100189] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:34.154 [2024-07-15 22:42:58.100195] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:34.154 [2024-07-15 22:42:58.102891] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:34.154 [2024-07-15 22:42:58.112262] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:34.154 [2024-07-15 22:42:58.112765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.154 [2024-07-15 22:42:58.112808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8da980 with addr=10.0.0.2, port=4420 00:26:34.154 [2024-07-15 22:42:58.112829] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8da980 is same with the state(5) to be set 00:26:34.154 [2024-07-15 22:42:58.113422] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8da980 (9): Bad file descriptor 00:26:34.154 [2024-07-15 22:42:58.113927] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:34.154 [2024-07-15 22:42:58.113935] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:34.154 [2024-07-15 22:42:58.113941] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:34.154 [2024-07-15 22:42:58.116715] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:34.415 [2024-07-15 22:42:58.125259] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:34.415 [2024-07-15 22:42:58.125762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.415 [2024-07-15 22:42:58.125779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8da980 with addr=10.0.0.2, port=4420 00:26:34.415 [2024-07-15 22:42:58.125786] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8da980 is same with the state(5) to be set 00:26:34.415 [2024-07-15 22:42:58.125965] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8da980 (9): Bad file descriptor 00:26:34.415 [2024-07-15 22:42:58.126143] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:34.415 [2024-07-15 22:42:58.126151] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:34.415 [2024-07-15 22:42:58.126157] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:34.415 [2024-07-15 22:42:58.128992] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:34.415 [2024-07-15 22:42:58.138296] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:34.415 [2024-07-15 22:42:58.138655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.415 [2024-07-15 22:42:58.138672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8da980 with addr=10.0.0.2, port=4420 00:26:34.415 [2024-07-15 22:42:58.138678] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8da980 is same with the state(5) to be set 00:26:34.415 [2024-07-15 22:42:58.138855] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8da980 (9): Bad file descriptor 00:26:34.415 [2024-07-15 22:42:58.139032] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:34.415 [2024-07-15 22:42:58.139040] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:34.415 [2024-07-15 22:42:58.139047] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:34.415 [2024-07-15 22:42:58.141750] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:34.415 [2024-07-15 22:42:58.151157] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:34.415 [2024-07-15 22:42:58.151652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.415 [2024-07-15 22:42:58.151668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8da980 with addr=10.0.0.2, port=4420 00:26:34.415 [2024-07-15 22:42:58.151674] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8da980 is same with the state(5) to be set 00:26:34.415 [2024-07-15 22:42:58.151845] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8da980 (9): Bad file descriptor 00:26:34.415 [2024-07-15 22:42:58.152017] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:34.415 [2024-07-15 22:42:58.152025] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:34.415 [2024-07-15 22:42:58.152031] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:34.415 [2024-07-15 22:42:58.154750] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:34.415 [2024-07-15 22:42:58.163981] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:34.415 [2024-07-15 22:42:58.164380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.415 [2024-07-15 22:42:58.164395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8da980 with addr=10.0.0.2, port=4420 00:26:34.415 [2024-07-15 22:42:58.164401] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8da980 is same with the state(5) to be set 00:26:34.415 [2024-07-15 22:42:58.164563] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8da980 (9): Bad file descriptor 00:26:34.415 [2024-07-15 22:42:58.164725] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:34.415 [2024-07-15 22:42:58.164733] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:34.415 [2024-07-15 22:42:58.164745] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:34.415 [2024-07-15 22:42:58.167452] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:34.415 [2024-07-15 22:42:58.176838] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:34.415 [2024-07-15 22:42:58.177173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.415 [2024-07-15 22:42:58.177188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8da980 with addr=10.0.0.2, port=4420 00:26:34.415 [2024-07-15 22:42:58.177194] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8da980 is same with the state(5) to be set 00:26:34.415 [2024-07-15 22:42:58.177384] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8da980 (9): Bad file descriptor 00:26:34.415 [2024-07-15 22:42:58.177557] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:34.415 [2024-07-15 22:42:58.177565] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:34.415 [2024-07-15 22:42:58.177571] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:34.415 [2024-07-15 22:42:58.180251] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:34.415 [2024-07-15 22:42:58.189636] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:34.415 [2024-07-15 22:42:58.190098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.415 [2024-07-15 22:42:58.190139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8da980 with addr=10.0.0.2, port=4420 00:26:34.415 [2024-07-15 22:42:58.190161] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8da980 is same with the state(5) to be set 00:26:34.415 [2024-07-15 22:42:58.190754] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8da980 (9): Bad file descriptor 00:26:34.415 [2024-07-15 22:42:58.191216] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:34.415 [2024-07-15 22:42:58.191228] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:34.415 [2024-07-15 22:42:58.191235] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:34.415 [2024-07-15 22:42:58.193912] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:34.415 [2024-07-15 22:42:58.202442] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:34.415 [2024-07-15 22:42:58.202916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.415 [2024-07-15 22:42:58.202931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8da980 with addr=10.0.0.2, port=4420 00:26:34.415 [2024-07-15 22:42:58.202938] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8da980 is same with the state(5) to be set 00:26:34.415 [2024-07-15 22:42:58.203100] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8da980 (9): Bad file descriptor 00:26:34.415 [2024-07-15 22:42:58.203284] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:34.415 [2024-07-15 22:42:58.203292] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:34.415 [2024-07-15 22:42:58.203298] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:34.415 [2024-07-15 22:42:58.205978] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:34.415 [2024-07-15 22:42:58.215328] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:34.415 [2024-07-15 22:42:58.215811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.415 [2024-07-15 22:42:58.215853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8da980 with addr=10.0.0.2, port=4420 00:26:34.415 [2024-07-15 22:42:58.215874] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8da980 is same with the state(5) to be set 00:26:34.415 [2024-07-15 22:42:58.216467] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8da980 (9): Bad file descriptor 00:26:34.415 [2024-07-15 22:42:58.217055] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:34.415 [2024-07-15 22:42:58.217078] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:34.415 [2024-07-15 22:42:58.217099] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:34.415 [2024-07-15 22:42:58.219829] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:34.415 [2024-07-15 22:42:58.228226] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:34.415 [2024-07-15 22:42:58.228691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.415 [2024-07-15 22:42:58.228707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8da980 with addr=10.0.0.2, port=4420 00:26:34.415 [2024-07-15 22:42:58.228713] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8da980 is same with the state(5) to be set 00:26:34.415 [2024-07-15 22:42:58.228875] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8da980 (9): Bad file descriptor 00:26:34.415 [2024-07-15 22:42:58.229037] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:34.415 [2024-07-15 22:42:58.229044] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:34.415 [2024-07-15 22:42:58.229050] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:34.416 [2024-07-15 22:42:58.231734] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:34.416 [2024-07-15 22:42:58.241073] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:34.416 [2024-07-15 22:42:58.241558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.416 [2024-07-15 22:42:58.241573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8da980 with addr=10.0.0.2, port=4420 00:26:34.416 [2024-07-15 22:42:58.241579] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8da980 is same with the state(5) to be set 00:26:34.416 [2024-07-15 22:42:58.241750] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8da980 (9): Bad file descriptor 00:26:34.416 [2024-07-15 22:42:58.241923] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:34.416 [2024-07-15 22:42:58.241930] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:34.416 [2024-07-15 22:42:58.241937] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:34.416 [2024-07-15 22:42:58.244638] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:34.416 [2024-07-15 22:42:58.253866] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:34.416 [2024-07-15 22:42:58.254342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.416 [2024-07-15 22:42:58.254385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8da980 with addr=10.0.0.2, port=4420 00:26:34.416 [2024-07-15 22:42:58.254406] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8da980 is same with the state(5) to be set 00:26:34.416 [2024-07-15 22:42:58.254960] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8da980 (9): Bad file descriptor 00:26:34.416 [2024-07-15 22:42:58.255136] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:34.416 [2024-07-15 22:42:58.255144] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:34.416 [2024-07-15 22:42:58.255150] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:34.416 [2024-07-15 22:42:58.257834] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:34.416 [2024-07-15 22:42:58.266733] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:34.416 [2024-07-15 22:42:58.267204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.416 [2024-07-15 22:42:58.267218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8da980 with addr=10.0.0.2, port=4420 00:26:34.416 [2024-07-15 22:42:58.267231] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8da980 is same with the state(5) to be set 00:26:34.416 [2024-07-15 22:42:58.267423] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8da980 (9): Bad file descriptor 00:26:34.416 [2024-07-15 22:42:58.267600] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:34.416 [2024-07-15 22:42:58.267608] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:34.416 [2024-07-15 22:42:58.267614] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:34.416 [2024-07-15 22:42:58.270310] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:34.416 [2024-07-15 22:42:58.279655] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:34.416 [2024-07-15 22:42:58.280042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.416 [2024-07-15 22:42:58.280058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8da980 with addr=10.0.0.2, port=4420 00:26:34.416 [2024-07-15 22:42:58.280064] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8da980 is same with the state(5) to be set 00:26:34.416 [2024-07-15 22:42:58.280241] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8da980 (9): Bad file descriptor 00:26:34.416 [2024-07-15 22:42:58.280413] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:34.416 [2024-07-15 22:42:58.280421] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:34.416 [2024-07-15 22:42:58.280427] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:34.416 [2024-07-15 22:42:58.283103] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:34.416 [2024-07-15 22:42:58.292445] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:34.416 [2024-07-15 22:42:58.292939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.416 [2024-07-15 22:42:58.292955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8da980 with addr=10.0.0.2, port=4420 00:26:34.416 [2024-07-15 22:42:58.292961] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8da980 is same with the state(5) to be set 00:26:34.416 [2024-07-15 22:42:58.293134] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8da980 (9): Bad file descriptor 00:26:34.416 [2024-07-15 22:42:58.293310] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:34.416 [2024-07-15 22:42:58.293318] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:34.416 [2024-07-15 22:42:58.293324] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:34.416 [2024-07-15 22:42:58.295994] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:34.416 [2024-07-15 22:42:58.305399] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:34.416 [2024-07-15 22:42:58.305900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.416 [2024-07-15 22:42:58.305942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8da980 with addr=10.0.0.2, port=4420 00:26:34.416 [2024-07-15 22:42:58.305964] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8da980 is same with the state(5) to be set 00:26:34.416 [2024-07-15 22:42:58.306534] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8da980 (9): Bad file descriptor 00:26:34.416 [2024-07-15 22:42:58.306712] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:34.416 [2024-07-15 22:42:58.306720] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:34.416 [2024-07-15 22:42:58.306726] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:34.416 [2024-07-15 22:42:58.309518] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:34.416 [2024-07-15 22:42:58.318229] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:34.416 [2024-07-15 22:42:58.318720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.416 [2024-07-15 22:42:58.318736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8da980 with addr=10.0.0.2, port=4420 00:26:34.416 [2024-07-15 22:42:58.318742] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8da980 is same with the state(5) to be set 00:26:34.416 [2024-07-15 22:42:58.318914] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8da980 (9): Bad file descriptor 00:26:34.416 [2024-07-15 22:42:58.319086] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:34.416 [2024-07-15 22:42:58.319094] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:34.416 [2024-07-15 22:42:58.319100] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:34.416 [2024-07-15 22:42:58.321870] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:34.416 [2024-07-15 22:42:58.331171] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:34.416 [2024-07-15 22:42:58.331671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.416 [2024-07-15 22:42:58.331714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8da980 with addr=10.0.0.2, port=4420 00:26:34.416 [2024-07-15 22:42:58.331738] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8da980 is same with the state(5) to be set 00:26:34.416 [2024-07-15 22:42:58.332246] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8da980 (9): Bad file descriptor 00:26:34.416 [2024-07-15 22:42:58.332421] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:34.416 [2024-07-15 22:42:58.332428] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:34.416 [2024-07-15 22:42:58.332435] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:34.416 [2024-07-15 22:42:58.335180] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:34.416 [2024-07-15 22:42:58.344196] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:34.416 [2024-07-15 22:42:58.344696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.416 [2024-07-15 22:42:58.344738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8da980 with addr=10.0.0.2, port=4420 00:26:34.416 [2024-07-15 22:42:58.344766] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8da980 is same with the state(5) to be set 00:26:34.416 [2024-07-15 22:42:58.345288] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8da980 (9): Bad file descriptor 00:26:34.416 [2024-07-15 22:42:58.345461] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:34.416 [2024-07-15 22:42:58.345468] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:34.416 [2024-07-15 22:42:58.345474] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:34.416 [2024-07-15 22:42:58.348189] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:34.416 [2024-07-15 22:42:58.357214] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:34.416 [2024-07-15 22:42:58.357677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.416 [2024-07-15 22:42:58.357697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8da980 with addr=10.0.0.2, port=4420 00:26:34.416 [2024-07-15 22:42:58.357703] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8da980 is same with the state(5) to be set 00:26:34.416 [2024-07-15 22:42:58.357866] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8da980 (9): Bad file descriptor 00:26:34.416 [2024-07-15 22:42:58.358029] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:34.416 [2024-07-15 22:42:58.358036] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:34.416 [2024-07-15 22:42:58.358042] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:34.416 [2024-07-15 22:42:58.360796] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:34.416 [2024-07-15 22:42:58.370162] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:34.416 [2024-07-15 22:42:58.370638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.416 [2024-07-15 22:42:58.370680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8da980 with addr=10.0.0.2, port=4420 00:26:34.416 [2024-07-15 22:42:58.370701] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8da980 is same with the state(5) to be set 00:26:34.416 [2024-07-15 22:42:58.371236] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8da980 (9): Bad file descriptor 00:26:34.416 [2024-07-15 22:42:58.371490] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:34.417 [2024-07-15 22:42:58.371501] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:34.417 [2024-07-15 22:42:58.371509] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:34.417 [2024-07-15 22:42:58.375575] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:34.676 [2024-07-15 22:42:58.383769] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:34.676 [2024-07-15 22:42:58.384235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.676 [2024-07-15 22:42:58.384251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8da980 with addr=10.0.0.2, port=4420 00:26:34.676 [2024-07-15 22:42:58.384259] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8da980 is same with the state(5) to be set 00:26:34.676 [2024-07-15 22:42:58.384436] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8da980 (9): Bad file descriptor 00:26:34.676 [2024-07-15 22:42:58.384613] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:34.676 [2024-07-15 22:42:58.384624] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:34.676 [2024-07-15 22:42:58.384631] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:34.676 [2024-07-15 22:42:58.387478] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:34.676 [2024-07-15 22:42:58.396702] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:34.676 [2024-07-15 22:42:58.397188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.677 [2024-07-15 22:42:58.397245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8da980 with addr=10.0.0.2, port=4420 00:26:34.677 [2024-07-15 22:42:58.397268] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8da980 is same with the state(5) to be set 00:26:34.677 [2024-07-15 22:42:58.397666] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8da980 (9): Bad file descriptor 00:26:34.677 [2024-07-15 22:42:58.397838] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:34.677 [2024-07-15 22:42:58.397846] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:34.677 [2024-07-15 22:42:58.397852] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:34.677 [2024-07-15 22:42:58.400660] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:34.677 [2024-07-15 22:42:58.409665] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:34.677 [2024-07-15 22:42:58.410174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.677 [2024-07-15 22:42:58.410215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8da980 with addr=10.0.0.2, port=4420 00:26:34.677 [2024-07-15 22:42:58.410251] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8da980 is same with the state(5) to be set 00:26:34.677 [2024-07-15 22:42:58.410492] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8da980 (9): Bad file descriptor 00:26:34.677 [2024-07-15 22:42:58.410664] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:34.677 [2024-07-15 22:42:58.410671] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:34.677 [2024-07-15 22:42:58.410678] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:34.677 [2024-07-15 22:42:58.413358] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:34.677 [2024-07-15 22:42:58.422544] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:34.677 [2024-07-15 22:42:58.423022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.677 [2024-07-15 22:42:58.423064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8da980 with addr=10.0.0.2, port=4420 00:26:34.677 [2024-07-15 22:42:58.423085] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8da980 is same with the state(5) to be set 00:26:34.677 [2024-07-15 22:42:58.423527] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8da980 (9): Bad file descriptor 00:26:34.677 [2024-07-15 22:42:58.423700] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:34.677 [2024-07-15 22:42:58.423707] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:34.677 [2024-07-15 22:42:58.423714] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:34.677 [2024-07-15 22:42:58.426450] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:34.677 [2024-07-15 22:42:58.435474] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:34.677 [2024-07-15 22:42:58.435964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.677 [2024-07-15 22:42:58.435979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8da980 with addr=10.0.0.2, port=4420 00:26:34.677 [2024-07-15 22:42:58.435985] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8da980 is same with the state(5) to be set 00:26:34.677 [2024-07-15 22:42:58.436157] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8da980 (9): Bad file descriptor 00:26:34.677 [2024-07-15 22:42:58.436353] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:34.677 [2024-07-15 22:42:58.436362] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:34.677 [2024-07-15 22:42:58.436368] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:34.677 [2024-07-15 22:42:58.439069] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:34.677 [2024-07-15 22:42:58.448263] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:34.677 [2024-07-15 22:42:58.448658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.677 [2024-07-15 22:42:58.448697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8da980 with addr=10.0.0.2, port=4420 00:26:34.677 [2024-07-15 22:42:58.448718] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8da980 is same with the state(5) to be set 00:26:34.677 [2024-07-15 22:42:58.449312] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8da980 (9): Bad file descriptor 00:26:34.677 [2024-07-15 22:42:58.449793] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:34.677 [2024-07-15 22:42:58.449801] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:34.677 [2024-07-15 22:42:58.449807] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:34.677 [2024-07-15 22:42:58.452509] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:34.677 [2024-07-15 22:42:58.461145] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:34.677 [2024-07-15 22:42:58.461640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.677 [2024-07-15 22:42:58.461655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8da980 with addr=10.0.0.2, port=4420 00:26:34.677 [2024-07-15 22:42:58.461661] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8da980 is same with the state(5) to be set 00:26:34.677 [2024-07-15 22:42:58.461834] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8da980 (9): Bad file descriptor 00:26:34.677 [2024-07-15 22:42:58.462006] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:34.677 [2024-07-15 22:42:58.462013] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:34.677 [2024-07-15 22:42:58.462019] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:34.677 [2024-07-15 22:42:58.464710] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:34.677 [2024-07-15 22:42:58.473968] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:34.677 [2024-07-15 22:42:58.474476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.677 [2024-07-15 22:42:58.474517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8da980 with addr=10.0.0.2, port=4420 00:26:34.677 [2024-07-15 22:42:58.474539] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8da980 is same with the state(5) to be set 00:26:34.677 [2024-07-15 22:42:58.475125] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8da980 (9): Bad file descriptor 00:26:34.677 [2024-07-15 22:42:58.475731] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:34.677 [2024-07-15 22:42:58.475756] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:34.677 [2024-07-15 22:42:58.475776] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:34.677 [2024-07-15 22:42:58.478548] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:34.677 [2024-07-15 22:42:58.487045] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:34.677 [2024-07-15 22:42:58.487480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.677 [2024-07-15 22:42:58.487517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8da980 with addr=10.0.0.2, port=4420 00:26:34.677 [2024-07-15 22:42:58.487540] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8da980 is same with the state(5) to be set 00:26:34.677 [2024-07-15 22:42:58.488118] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8da980 (9): Bad file descriptor 00:26:34.677 [2024-07-15 22:42:58.488714] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:34.677 [2024-07-15 22:42:58.488740] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:34.677 [2024-07-15 22:42:58.488761] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:34.677 [2024-07-15 22:42:58.491556] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:34.677 [2024-07-15 22:42:58.499967] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:34.677 [2024-07-15 22:42:58.500443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.677 [2024-07-15 22:42:58.500459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8da980 with addr=10.0.0.2, port=4420 00:26:34.677 [2024-07-15 22:42:58.500466] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8da980 is same with the state(5) to be set 00:26:34.677 [2024-07-15 22:42:58.500627] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8da980 (9): Bad file descriptor 00:26:34.677 [2024-07-15 22:42:58.500790] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:34.677 [2024-07-15 22:42:58.500797] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:34.677 [2024-07-15 22:42:58.500803] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:34.677 [2024-07-15 22:42:58.503506] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:34.677 [2024-07-15 22:42:58.512818] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:34.677 [2024-07-15 22:42:58.513271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.677 [2024-07-15 22:42:58.513327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8da980 with addr=10.0.0.2, port=4420 00:26:34.677 [2024-07-15 22:42:58.513349] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8da980 is same with the state(5) to be set 00:26:34.677 [2024-07-15 22:42:58.513868] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8da980 (9): Bad file descriptor 00:26:34.677 [2024-07-15 22:42:58.514041] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:34.677 [2024-07-15 22:42:58.514049] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:34.677 [2024-07-15 22:42:58.514059] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:34.677 [2024-07-15 22:42:58.516791] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:34.677 [2024-07-15 22:42:58.525818] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:34.677 [2024-07-15 22:42:58.526301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.677 [2024-07-15 22:42:58.526343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8da980 with addr=10.0.0.2, port=4420 00:26:34.677 [2024-07-15 22:42:58.526364] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8da980 is same with the state(5) to be set 00:26:34.677 [2024-07-15 22:42:58.526903] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8da980 (9): Bad file descriptor 00:26:34.677 [2024-07-15 22:42:58.527076] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:34.677 [2024-07-15 22:42:58.527084] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:34.677 [2024-07-15 22:42:58.527090] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:34.677 [2024-07-15 22:42:58.529859] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:34.677 [2024-07-15 22:42:58.538770] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:34.677 [2024-07-15 22:42:58.539154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.677 [2024-07-15 22:42:58.539169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8da980 with addr=10.0.0.2, port=4420 00:26:34.677 [2024-07-15 22:42:58.539176] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8da980 is same with the state(5) to be set 00:26:34.677 [2024-07-15 22:42:58.539352] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8da980 (9): Bad file descriptor 00:26:34.677 [2024-07-15 22:42:58.539525] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:34.677 [2024-07-15 22:42:58.539533] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:34.677 [2024-07-15 22:42:58.539539] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:34.677 [2024-07-15 22:42:58.542293] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:34.677 [2024-07-15 22:42:58.551770] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:34.677 [2024-07-15 22:42:58.552211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.677 [2024-07-15 22:42:58.552230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8da980 with addr=10.0.0.2, port=4420 00:26:34.677 [2024-07-15 22:42:58.552237] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8da980 is same with the state(5) to be set 00:26:34.677 [2024-07-15 22:42:58.552408] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8da980 (9): Bad file descriptor 00:26:34.677 [2024-07-15 22:42:58.552580] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:34.677 [2024-07-15 22:42:58.552588] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:34.677 [2024-07-15 22:42:58.552594] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:34.677 [2024-07-15 22:42:58.555310] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:34.677 [2024-07-15 22:42:58.564661] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:34.677 [2024-07-15 22:42:58.565020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.677 [2024-07-15 22:42:58.565061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8da980 with addr=10.0.0.2, port=4420 00:26:34.677 [2024-07-15 22:42:58.565083] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8da980 is same with the state(5) to be set 00:26:34.677 [2024-07-15 22:42:58.565673] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8da980 (9): Bad file descriptor 00:26:34.677 [2024-07-15 22:42:58.565937] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:34.677 [2024-07-15 22:42:58.565945] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:34.677 [2024-07-15 22:42:58.565951] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:34.677 [2024-07-15 22:42:58.568706] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:34.677 [2024-07-15 22:42:58.577582] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:34.677 [2024-07-15 22:42:58.577926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.677 [2024-07-15 22:42:58.577968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8da980 with addr=10.0.0.2, port=4420 00:26:34.677 [2024-07-15 22:42:58.577989] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8da980 is same with the state(5) to be set 00:26:34.677 [2024-07-15 22:42:58.578577] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8da980 (9): Bad file descriptor 00:26:34.677 [2024-07-15 22:42:58.579128] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:34.677 [2024-07-15 22:42:58.579137] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:34.677 [2024-07-15 22:42:58.579142] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:34.678 [2024-07-15 22:42:58.581897] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:34.678 [2024-07-15 22:42:58.590749] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:34.678 [2024-07-15 22:42:58.591155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.678 [2024-07-15 22:42:58.591172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8da980 with addr=10.0.0.2, port=4420 00:26:34.678 [2024-07-15 22:42:58.591178] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8da980 is same with the state(5) to be set 00:26:34.678 [2024-07-15 22:42:58.591358] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8da980 (9): Bad file descriptor 00:26:34.678 [2024-07-15 22:42:58.591536] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:34.678 [2024-07-15 22:42:58.591543] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:34.678 [2024-07-15 22:42:58.591549] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:34.678 [2024-07-15 22:42:58.594383] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:34.678 [2024-07-15 22:42:58.603952] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:34.678 [2024-07-15 22:42:58.604378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.678 [2024-07-15 22:42:58.604394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8da980 with addr=10.0.0.2, port=4420
00:26:34.678 [2024-07-15 22:42:58.604401] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8da980 is same with the state(5) to be set
00:26:34.678 [2024-07-15 22:42:58.604577] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8da980 (9): Bad file descriptor
00:26:34.678 [2024-07-15 22:42:58.604758] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:34.678 [2024-07-15 22:42:58.604766] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:34.678 [2024-07-15 22:42:58.604772] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:34.678 [2024-07-15 22:42:58.607609] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:34.678 [2024-07-15 22:42:58.617150] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:34.678 [2024-07-15 22:42:58.617634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.678 [2024-07-15 22:42:58.617650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8da980 with addr=10.0.0.2, port=4420
00:26:34.678 [2024-07-15 22:42:58.617657] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8da980 is same with the state(5) to be set
00:26:34.678 [2024-07-15 22:42:58.617834] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8da980 (9): Bad file descriptor
00:26:34.678 [2024-07-15 22:42:58.618011] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:34.678 [2024-07-15 22:42:58.618019] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:34.678 [2024-07-15 22:42:58.618025] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:34.678 [2024-07-15 22:42:58.620864] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:34.678 [2024-07-15 22:42:58.630242] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:34.678 [2024-07-15 22:42:58.630724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.678 [2024-07-15 22:42:58.630740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8da980 with addr=10.0.0.2, port=4420
00:26:34.678 [2024-07-15 22:42:58.630747] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8da980 is same with the state(5) to be set
00:26:34.678 [2024-07-15 22:42:58.630923] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8da980 (9): Bad file descriptor
00:26:34.678 [2024-07-15 22:42:58.631101] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:34.678 [2024-07-15 22:42:58.631108] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:34.678 [2024-07-15 22:42:58.631115] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:34.678 [2024-07-15 22:42:58.633998] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:34.678 [2024-07-15 22:42:58.643329] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:34.678 [2024-07-15 22:42:58.643824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.678 [2024-07-15 22:42:58.643840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8da980 with addr=10.0.0.2, port=4420
00:26:34.678 [2024-07-15 22:42:58.643847] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8da980 is same with the state(5) to be set
00:26:34.678 [2024-07-15 22:42:58.644023] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8da980 (9): Bad file descriptor
00:26:34.678 [2024-07-15 22:42:58.644200] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:34.678 [2024-07-15 22:42:58.644208] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:34.678 [2024-07-15 22:42:58.644214] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:34.938 [2024-07-15 22:42:58.647055] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:34.938 [2024-07-15 22:42:58.656420] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:34.938 [2024-07-15 22:42:58.656906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.938 [2024-07-15 22:42:58.656921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8da980 with addr=10.0.0.2, port=4420
00:26:34.938 [2024-07-15 22:42:58.656928] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8da980 is same with the state(5) to be set
00:26:34.938 [2024-07-15 22:42:58.657104] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8da980 (9): Bad file descriptor
00:26:34.938 [2024-07-15 22:42:58.657286] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:34.938 [2024-07-15 22:42:58.657294] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:34.938 [2024-07-15 22:42:58.657300] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:34.938 [2024-07-15 22:42:58.660131] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:34.938 [2024-07-15 22:42:58.669566] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:34.938 [2024-07-15 22:42:58.669963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.938 [2024-07-15 22:42:58.669981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8da980 with addr=10.0.0.2, port=4420
00:26:34.938 [2024-07-15 22:42:58.669989] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8da980 is same with the state(5) to be set
00:26:34.938 [2024-07-15 22:42:58.670165] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8da980 (9): Bad file descriptor
00:26:34.938 [2024-07-15 22:42:58.670346] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:34.938 [2024-07-15 22:42:58.670355] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:34.938 [2024-07-15 22:42:58.670361] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:34.938 [2024-07-15 22:42:58.673191] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:34.938 [2024-07-15 22:42:58.682736] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:34.938 [2024-07-15 22:42:58.683217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.938 [2024-07-15 22:42:58.683237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8da980 with addr=10.0.0.2, port=4420
00:26:34.938 [2024-07-15 22:42:58.683245] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8da980 is same with the state(5) to be set
00:26:34.938 [2024-07-15 22:42:58.683421] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8da980 (9): Bad file descriptor
00:26:34.938 [2024-07-15 22:42:58.683597] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:34.938 [2024-07-15 22:42:58.683605] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:34.938 [2024-07-15 22:42:58.683611] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:34.938 [2024-07-15 22:42:58.686445] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:34.938 [2024-07-15 22:42:58.695806] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:34.938 [2024-07-15 22:42:58.696285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.938 [2024-07-15 22:42:58.696301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8da980 with addr=10.0.0.2, port=4420
00:26:34.938 [2024-07-15 22:42:58.696311] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8da980 is same with the state(5) to be set
00:26:34.938 [2024-07-15 22:42:58.696488] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8da980 (9): Bad file descriptor
00:26:34.938 [2024-07-15 22:42:58.696664] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:34.938 [2024-07-15 22:42:58.696672] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:34.938 [2024-07-15 22:42:58.696678] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:34.938 [2024-07-15 22:42:58.699555] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:34.938 [2024-07-15 22:42:58.708849] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:34.938 [2024-07-15 22:42:58.709353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.938 [2024-07-15 22:42:58.709370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8da980 with addr=10.0.0.2, port=4420
00:26:34.938 [2024-07-15 22:42:58.709377] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8da980 is same with the state(5) to be set
00:26:34.938 [2024-07-15 22:42:58.709565] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8da980 (9): Bad file descriptor
00:26:34.938 [2024-07-15 22:42:58.709742] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:34.938 [2024-07-15 22:42:58.709749] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:34.938 [2024-07-15 22:42:58.709756] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:34.938 [2024-07-15 22:42:58.712587] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:34.938 [2024-07-15 22:42:58.721953] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:34.938 [2024-07-15 22:42:58.722444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.938 [2024-07-15 22:42:58.722461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8da980 with addr=10.0.0.2, port=4420
00:26:34.938 [2024-07-15 22:42:58.722468] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8da980 is same with the state(5) to be set
00:26:34.938 [2024-07-15 22:42:58.722645] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8da980 (9): Bad file descriptor
00:26:34.938 [2024-07-15 22:42:58.722822] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:34.938 [2024-07-15 22:42:58.722829] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:34.938 [2024-07-15 22:42:58.722835] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:34.938 [2024-07-15 22:42:58.725673] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:34.938 [2024-07-15 22:42:58.735076] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:34.938 [2024-07-15 22:42:58.735542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.938 [2024-07-15 22:42:58.735558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8da980 with addr=10.0.0.2, port=4420
00:26:34.938 [2024-07-15 22:42:58.735565] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8da980 is same with the state(5) to be set
00:26:34.938 [2024-07-15 22:42:58.735741] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8da980 (9): Bad file descriptor
00:26:34.938 [2024-07-15 22:42:58.735918] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:34.938 [2024-07-15 22:42:58.735928] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:34.938 [2024-07-15 22:42:58.735935] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:34.938 [2024-07-15 22:42:58.738770] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:34.938 [2024-07-15 22:42:58.748138] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:34.938 [2024-07-15 22:42:58.748618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.938 [2024-07-15 22:42:58.748634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8da980 with addr=10.0.0.2, port=4420
00:26:34.938 [2024-07-15 22:42:58.748641] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8da980 is same with the state(5) to be set
00:26:34.938 [2024-07-15 22:42:58.748816] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8da980 (9): Bad file descriptor
00:26:34.938 [2024-07-15 22:42:58.748993] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:34.938 [2024-07-15 22:42:58.749000] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:34.938 [2024-07-15 22:42:58.749006] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:34.938 [2024-07-15 22:42:58.751844] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:34.938 [2024-07-15 22:42:58.761209] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:34.938 [2024-07-15 22:42:58.761670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.938 [2024-07-15 22:42:58.761686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8da980 with addr=10.0.0.2, port=4420
00:26:34.938 [2024-07-15 22:42:58.761693] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8da980 is same with the state(5) to be set
00:26:34.938 [2024-07-15 22:42:58.761869] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8da980 (9): Bad file descriptor
00:26:34.938 [2024-07-15 22:42:58.762046] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:34.938 [2024-07-15 22:42:58.762053] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:34.938 [2024-07-15 22:42:58.762059] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:34.938 [2024-07-15 22:42:58.764895] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:34.938 [2024-07-15 22:42:58.774272] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:34.938 [2024-07-15 22:42:58.774763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.938 [2024-07-15 22:42:58.774805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8da980 with addr=10.0.0.2, port=4420
00:26:34.938 [2024-07-15 22:42:58.774827] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8da980 is same with the state(5) to be set
00:26:34.938 [2024-07-15 22:42:58.775256] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8da980 (9): Bad file descriptor
00:26:34.938 [2024-07-15 22:42:58.775434] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:34.938 [2024-07-15 22:42:58.775442] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:34.938 [2024-07-15 22:42:58.775448] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:34.938 [2024-07-15 22:42:58.778281] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:34.939 [2024-07-15 22:42:58.787552] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:34.939 [2024-07-15 22:42:58.787907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.939 [2024-07-15 22:42:58.787923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8da980 with addr=10.0.0.2, port=4420
00:26:34.939 [2024-07-15 22:42:58.787930] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8da980 is same with the state(5) to be set
00:26:34.939 [2024-07-15 22:42:58.788106] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8da980 (9): Bad file descriptor
00:26:34.939 [2024-07-15 22:42:58.788288] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:34.939 [2024-07-15 22:42:58.788296] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:34.939 [2024-07-15 22:42:58.788302] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:34.939 [2024-07-15 22:42:58.791129] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:34.939 [2024-07-15 22:42:58.800665] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:34.939 [2024-07-15 22:42:58.801148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.939 [2024-07-15 22:42:58.801164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8da980 with addr=10.0.0.2, port=4420
00:26:34.939 [2024-07-15 22:42:58.801171] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8da980 is same with the state(5) to be set
00:26:34.939 [2024-07-15 22:42:58.801352] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8da980 (9): Bad file descriptor
00:26:34.939 [2024-07-15 22:42:58.801529] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:34.939 [2024-07-15 22:42:58.801537] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:34.939 [2024-07-15 22:42:58.801543] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:34.939 [2024-07-15 22:42:58.804376] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:34.939 [2024-07-15 22:42:58.813745] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:34.939 [2024-07-15 22:42:58.814254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.939 [2024-07-15 22:42:58.814271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8da980 with addr=10.0.0.2, port=4420
00:26:34.939 [2024-07-15 22:42:58.814278] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8da980 is same with the state(5) to be set
00:26:34.939 [2024-07-15 22:42:58.814455] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8da980 (9): Bad file descriptor
00:26:34.939 [2024-07-15 22:42:58.814632] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:34.939 [2024-07-15 22:42:58.814639] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:34.939 [2024-07-15 22:42:58.814646] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:34.939 [2024-07-15 22:42:58.817480] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:34.939 [2024-07-15 22:42:58.826851] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:34.939 [2024-07-15 22:42:58.827335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.939 [2024-07-15 22:42:58.827351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8da980 with addr=10.0.0.2, port=4420
00:26:34.939 [2024-07-15 22:42:58.827358] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8da980 is same with the state(5) to be set
00:26:34.939 [2024-07-15 22:42:58.827542] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8da980 (9): Bad file descriptor
00:26:34.939 [2024-07-15 22:42:58.827719] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:34.939 [2024-07-15 22:42:58.827727] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:34.939 [2024-07-15 22:42:58.827733] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:34.939 [2024-07-15 22:42:58.830567] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:34.939 [2024-07-15 22:42:58.839929] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:34.939 [2024-07-15 22:42:58.840351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.939 [2024-07-15 22:42:58.840370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8da980 with addr=10.0.0.2, port=4420
00:26:34.939 [2024-07-15 22:42:58.840377] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8da980 is same with the state(5) to be set
00:26:34.939 [2024-07-15 22:42:58.840554] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8da980 (9): Bad file descriptor
00:26:34.939 [2024-07-15 22:42:58.840731] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:34.939 [2024-07-15 22:42:58.840739] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:34.939 [2024-07-15 22:42:58.840745] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:34.939 [2024-07-15 22:42:58.843579] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:34.939 [2024-07-15 22:42:58.853110] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:34.939 [2024-07-15 22:42:58.853595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.939 [2024-07-15 22:42:58.853611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8da980 with addr=10.0.0.2, port=4420
00:26:34.939 [2024-07-15 22:42:58.853618] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8da980 is same with the state(5) to be set
00:26:34.939 [2024-07-15 22:42:58.853799] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8da980 (9): Bad file descriptor
00:26:34.939 [2024-07-15 22:42:58.853991] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:34.939 [2024-07-15 22:42:58.853998] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:34.939 [2024-07-15 22:42:58.854004] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:34.939 [2024-07-15 22:42:58.856837] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:34.939 [2024-07-15 22:42:58.866192] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:34.939 [2024-07-15 22:42:58.866680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.939 [2024-07-15 22:42:58.866696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8da980 with addr=10.0.0.2, port=4420
00:26:34.939 [2024-07-15 22:42:58.866703] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8da980 is same with the state(5) to be set
00:26:34.939 [2024-07-15 22:42:58.866879] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8da980 (9): Bad file descriptor
00:26:34.939 [2024-07-15 22:42:58.867056] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:34.939 [2024-07-15 22:42:58.867063] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:34.939 [2024-07-15 22:42:58.867072] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:34.939 [2024-07-15 22:42:58.869959] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:34.939 [2024-07-15 22:42:58.879336] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:34.939 [2024-07-15 22:42:58.879822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.939 [2024-07-15 22:42:58.879838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8da980 with addr=10.0.0.2, port=4420
00:26:34.939 [2024-07-15 22:42:58.879845] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8da980 is same with the state(5) to be set
00:26:34.939 [2024-07-15 22:42:58.880022] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8da980 (9): Bad file descriptor
00:26:34.939 [2024-07-15 22:42:58.880199] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:34.939 [2024-07-15 22:42:58.880206] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:34.939 [2024-07-15 22:42:58.880213] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:34.939 [2024-07-15 22:42:58.883049] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:34.939 [2024-07-15 22:42:58.892644] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:34.939 [2024-07-15 22:42:58.893153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.939 [2024-07-15 22:42:58.893169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8da980 with addr=10.0.0.2, port=4420
00:26:34.939 [2024-07-15 22:42:58.893176] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8da980 is same with the state(5) to be set
00:26:34.939 [2024-07-15 22:42:58.893363] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8da980 (9): Bad file descriptor
00:26:34.939 [2024-07-15 22:42:58.893546] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:34.939 [2024-07-15 22:42:58.893554] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:34.939 [2024-07-15 22:42:58.893560] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:34.939 [2024-07-15 22:42:58.896480] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:34.939 [2024-07-15 22:42:58.905846] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:34.939 [2024-07-15 22:42:58.906331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:34.939 [2024-07-15 22:42:58.906347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8da980 with addr=10.0.0.2, port=4420
00:26:34.939 [2024-07-15 22:42:58.906355] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8da980 is same with the state(5) to be set
00:26:34.939 [2024-07-15 22:42:58.906536] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8da980 (9): Bad file descriptor
00:26:34.939 [2024-07-15 22:42:58.906718] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:34.939 [2024-07-15 22:42:58.906726] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:34.939 [2024-07-15 22:42:58.906733] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:35.200 [2024-07-15 22:42:58.909631] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
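Two numbers in each cycle are plain errno values, not internal codes: the 111 in the connect failure and the parenthesised 9 in "Failed to flush tqpair=... (9): Bad file descriptor" (EBADF, reported because the qpair's socket has already been torn down by the failed connect when the flush is attempted). A tiny sketch, not part of the test, prints the standard spellings:

    /* sketch: map the errno values seen in this trace to their strings */
    #include <errno.h>
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        printf("errno %d: %s\n", ECONNREFUSED, strerror(ECONNREFUSED)); /* 111: Connection refused */
        printf("errno %d: %s\n", EBADF, strerror(EBADF));               /*   9: Bad file descriptor */
        return 0;
    }

The "state(5)" in the recv-state message is likewise just the numeric value of the receive state the qpair is already in, logged when the same state is set again during teardown.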
00:26:35.200 [2024-07-15 22:42:58.918903] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:35.200 [2024-07-15 22:42:58.919319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:35.200 [2024-07-15 22:42:58.919335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8da980 with addr=10.0.0.2, port=4420
00:26:35.200 [2024-07-15 22:42:58.919342] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8da980 is same with the state(5) to be set
00:26:35.200 [2024-07-15 22:42:58.919518] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8da980 (9): Bad file descriptor
00:26:35.200 [2024-07-15 22:42:58.919695] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:35.200 [2024-07-15 22:42:58.919702] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:35.200 [2024-07-15 22:42:58.919709] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:35.200 [2024-07-15 22:42:58.922551] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:35.200 [2024-07-15 22:42:58.932087] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:35.200 [2024-07-15 22:42:58.932492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:35.200 [2024-07-15 22:42:58.932508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8da980 with addr=10.0.0.2, port=4420
00:26:35.200 [2024-07-15 22:42:58.932515] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8da980 is same with the state(5) to be set
00:26:35.200 [2024-07-15 22:42:58.932691] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8da980 (9): Bad file descriptor
00:26:35.200 [2024-07-15 22:42:58.932868] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:35.200 [2024-07-15 22:42:58.932876] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:35.200 [2024-07-15 22:42:58.932882] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:35.200 [2024-07-15 22:42:58.935691] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:35.200 [2024-07-15 22:42:58.945137] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:35.200 [2024-07-15 22:42:58.945547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:35.200 [2024-07-15 22:42:58.945589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8da980 with addr=10.0.0.2, port=4420
00:26:35.200 [2024-07-15 22:42:58.945611] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8da980 is same with the state(5) to be set
00:26:35.200 [2024-07-15 22:42:58.946112] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8da980 (9): Bad file descriptor
00:26:35.200 [2024-07-15 22:42:58.946289] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:35.200 [2024-07-15 22:42:58.946297] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:35.200 [2024-07-15 22:42:58.946303] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:35.200 [2024-07-15 22:42:58.949049] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:35.200 [2024-07-15 22:42:58.958113] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:35.200 [2024-07-15 22:42:58.958596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:35.200 [2024-07-15 22:42:58.958642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8da980 with addr=10.0.0.2, port=4420
00:26:35.200 [2024-07-15 22:42:58.958664] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8da980 is same with the state(5) to be set
00:26:35.200 [2024-07-15 22:42:58.959120] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8da980 (9): Bad file descriptor
00:26:35.200 [2024-07-15 22:42:58.959300] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:35.200 [2024-07-15 22:42:58.959308] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:35.200 [2024-07-15 22:42:58.959314] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:35.200 [2024-07-15 22:42:58.962033] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:35.200 [2024-07-15 22:42:58.971094] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:35.200 [2024-07-15 22:42:58.971598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:35.200 [2024-07-15 22:42:58.971642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8da980 with addr=10.0.0.2, port=4420
00:26:35.200 [2024-07-15 22:42:58.971664] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8da980 is same with the state(5) to be set
00:26:35.200 [2024-07-15 22:42:58.972067] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8da980 (9): Bad file descriptor
00:26:35.201 [2024-07-15 22:42:58.972244] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:35.201 [2024-07-15 22:42:58.972252] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:35.201 [2024-07-15 22:42:58.972258] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:35.201 [2024-07-15 22:42:58.974930] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:35.201 [2024-07-15 22:42:58.984013] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:35.201 [2024-07-15 22:42:58.984483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:35.201 [2024-07-15 22:42:58.984499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8da980 with addr=10.0.0.2, port=4420
00:26:35.201 [2024-07-15 22:42:58.984506] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8da980 is same with the state(5) to be set
00:26:35.201 [2024-07-15 22:42:58.984679] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8da980 (9): Bad file descriptor
00:26:35.201 [2024-07-15 22:42:58.984852] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:35.201 [2024-07-15 22:42:58.984859] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:35.201 [2024-07-15 22:42:58.984866] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:35.201 [2024-07-15 22:42:58.987561] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:35.201 [2024-07-15 22:42:58.996863] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:35.201 [2024-07-15 22:42:58.997340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:35.201 [2024-07-15 22:42:58.997383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8da980 with addr=10.0.0.2, port=4420
00:26:35.201 [2024-07-15 22:42:58.997405] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8da980 is same with the state(5) to be set
00:26:35.201 [2024-07-15 22:42:58.997901] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8da980 (9): Bad file descriptor
00:26:35.201 [2024-07-15 22:42:58.998064] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:35.201 [2024-07-15 22:42:58.998071] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:35.201 [2024-07-15 22:42:58.998077] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:35.201 [2024-07-15 22:42:59.000777] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:35.201 [2024-07-15 22:42:59.009700] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:35.201 [2024-07-15 22:42:59.010191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:35.201 [2024-07-15 22:42:59.010244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8da980 with addr=10.0.0.2, port=4420
00:26:35.201 [2024-07-15 22:42:59.010266] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8da980 is same with the state(5) to be set
00:26:35.201 [2024-07-15 22:42:59.010824] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8da980 (9): Bad file descriptor
00:26:35.201 [2024-07-15 22:42:59.010996] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:35.201 [2024-07-15 22:42:59.011003] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:35.201 [2024-07-15 22:42:59.011009] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:35.201 [2024-07-15 22:42:59.013701] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:35.201 [2024-07-15 22:42:59.022626] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:35.201 [2024-07-15 22:42:59.023131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:35.201 [2024-07-15 22:42:59.023172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8da980 with addr=10.0.0.2, port=4420
00:26:35.201 [2024-07-15 22:42:59.023193] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8da980 is same with the state(5) to be set
00:26:35.201 [2024-07-15 22:42:59.023786] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8da980 (9): Bad file descriptor
00:26:35.201 [2024-07-15 22:42:59.024229] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:35.201 [2024-07-15 22:42:59.024237] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:35.201 [2024-07-15 22:42:59.024243] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:35.201 [2024-07-15 22:42:59.026914] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:35.201 [2024-07-15 22:42:59.035438] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:35.201 [2024-07-15 22:42:59.035936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:35.201 [2024-07-15 22:42:59.035951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8da980 with addr=10.0.0.2, port=4420
00:26:35.201 [2024-07-15 22:42:59.035958] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8da980 is same with the state(5) to be set
00:26:35.201 [2024-07-15 22:42:59.036129] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8da980 (9): Bad file descriptor
00:26:35.201 [2024-07-15 22:42:59.036324] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:35.201 [2024-07-15 22:42:59.036332] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:35.201 [2024-07-15 22:42:59.036339] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:35.201 [2024-07-15 22:42:59.039049] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:35.201 [2024-07-15 22:42:59.048249] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:35.201 [2024-07-15 22:42:59.048676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:35.201 [2024-07-15 22:42:59.048717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8da980 with addr=10.0.0.2, port=4420
00:26:35.201 [2024-07-15 22:42:59.048744] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8da980 is same with the state(5) to be set
00:26:35.201 [2024-07-15 22:42:59.049249] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8da980 (9): Bad file descriptor
00:26:35.201 [2024-07-15 22:42:59.049503] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:35.201 [2024-07-15 22:42:59.049513] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:35.201 [2024-07-15 22:42:59.049522] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:35.201 [2024-07-15 22:42:59.053581] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:35.201 [2024-07-15 22:42:59.061665] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:35.201 [2024-07-15 22:42:59.062166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:35.201 [2024-07-15 22:42:59.062207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8da980 with addr=10.0.0.2, port=4420
00:26:35.201 [2024-07-15 22:42:59.062241] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8da980 is same with the state(5) to be set
00:26:35.201 [2024-07-15 22:42:59.062820] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8da980 (9): Bad file descriptor
00:26:35.201 [2024-07-15 22:42:59.063374] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:35.201 [2024-07-15 22:42:59.063382] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:35.201 [2024-07-15 22:42:59.063388] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:35.201 [2024-07-15 22:42:59.066114] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:35.201 [2024-07-15 22:42:59.074483] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:35.201 [2024-07-15 22:42:59.074990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:35.201 [2024-07-15 22:42:59.075031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8da980 with addr=10.0.0.2, port=4420
00:26:35.201 [2024-07-15 22:42:59.075052] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8da980 is same with the state(5) to be set
00:26:35.201 [2024-07-15 22:42:59.075655] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8da980 (9): Bad file descriptor
00:26:35.201 [2024-07-15 22:42:59.076248] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:35.201 [2024-07-15 22:42:59.076256] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:35.201 [2024-07-15 22:42:59.076262] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:35.201 [2024-07-15 22:42:59.078982] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:35.201 [2024-07-15 22:42:59.087360] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:35.201 [2024-07-15 22:42:59.087832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:35.201 [2024-07-15 22:42:59.087847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8da980 with addr=10.0.0.2, port=4420
00:26:35.201 [2024-07-15 22:42:59.087853] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8da980 is same with the state(5) to be set
00:26:35.201 [2024-07-15 22:42:59.088015] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8da980 (9): Bad file descriptor
00:26:35.202 [2024-07-15 22:42:59.088177] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:35.202 [2024-07-15 22:42:59.088187] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:35.202 [2024-07-15 22:42:59.088192] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:35.202 [2024-07-15 22:42:59.090884] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:35.202 [2024-07-15 22:42:59.100205] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:35.202 [2024-07-15 22:42:59.100698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:35.202 [2024-07-15 22:42:59.100714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8da980 with addr=10.0.0.2, port=4420
00:26:35.202 [2024-07-15 22:42:59.100721] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8da980 is same with the state(5) to be set
00:26:35.202 [2024-07-15 22:42:59.100891] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8da980 (9): Bad file descriptor
00:26:35.202 [2024-07-15 22:42:59.101063] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:35.202 [2024-07-15 22:42:59.101070] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:35.202 [2024-07-15 22:42:59.101076] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:35.202 [2024-07-15 22:42:59.103763] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:35.202 [2024-07-15 22:42:59.113092] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:35.202 [2024-07-15 22:42:59.113577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:35.202 [2024-07-15 22:42:59.113620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8da980 with addr=10.0.0.2, port=4420
00:26:35.202 [2024-07-15 22:42:59.113641] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8da980 is same with the state(5) to be set
00:26:35.202 [2024-07-15 22:42:59.114131] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8da980 (9): Bad file descriptor
00:26:35.202 [2024-07-15 22:42:59.114309] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:35.202 [2024-07-15 22:42:59.114317] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:35.202 [2024-07-15 22:42:59.114323] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:35.202 [2024-07-15 22:42:59.116936] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:35.202 [2024-07-15 22:42:59.126000] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:35.202 [2024-07-15 22:42:59.126464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:35.202 [2024-07-15 22:42:59.126492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8da980 with addr=10.0.0.2, port=4420
00:26:35.202 [2024-07-15 22:42:59.126498] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8da980 is same with the state(5) to be set
00:26:35.202 [2024-07-15 22:42:59.126660] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8da980 (9): Bad file descriptor
00:26:35.202 [2024-07-15 22:42:59.126823] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:35.202 [2024-07-15 22:42:59.126830] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:35.202 [2024-07-15 22:42:59.126835] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:35.202 [2024-07-15 22:42:59.129428] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:35.202 [2024-07-15 22:42:59.138849] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:35.202 [2024-07-15 22:42:59.139322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:35.202 [2024-07-15 22:42:59.139338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8da980 with addr=10.0.0.2, port=4420
00:26:35.202 [2024-07-15 22:42:59.139345] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8da980 is same with the state(5) to be set
00:26:35.202 [2024-07-15 22:42:59.139516] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8da980 (9): Bad file descriptor
00:26:35.202 [2024-07-15 22:42:59.139688] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:35.202 [2024-07-15 22:42:59.139695] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:35.202 [2024-07-15 22:42:59.139701] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:35.202 [2024-07-15 22:42:59.142532] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:35.202 [2024-07-15 22:42:59.151853] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:35.202 [2024-07-15 22:42:59.152324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:35.202 [2024-07-15 22:42:59.152372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8da980 with addr=10.0.0.2, port=4420
00:26:35.202 [2024-07-15 22:42:59.152394] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8da980 is same with the state(5) to be set
00:26:35.202 [2024-07-15 22:42:59.152972] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8da980 (9): Bad file descriptor
00:26:35.202 [2024-07-15 22:42:59.153460] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:35.202 [2024-07-15 22:42:59.153468] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:35.202 [2024-07-15 22:42:59.153475] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:35.202 [2024-07-15 22:42:59.156232] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:35.202 [2024-07-15 22:42:59.164922] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:35.202 [2024-07-15 22:42:59.165379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:35.202 [2024-07-15 22:42:59.165394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8da980 with addr=10.0.0.2, port=4420
00:26:35.202 [2024-07-15 22:42:59.165401] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8da980 is same with the state(5) to be set
00:26:35.202 [2024-07-15 22:42:59.165579] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8da980 (9): Bad file descriptor
00:26:35.202 [2024-07-15 22:42:59.165755] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:35.202 [2024-07-15 22:42:59.165763] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:35.202 [2024-07-15 22:42:59.165769] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:35.463 [2024-07-15 22:42:59.168632] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:35.463 [2024-07-15 22:42:59.177854] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:35.463 [2024-07-15 22:42:59.178328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:35.463 [2024-07-15 22:42:59.178344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8da980 with addr=10.0.0.2, port=4420
00:26:35.463 [2024-07-15 22:42:59.178351] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8da980 is same with the state(5) to be set
00:26:35.463 [2024-07-15 22:42:59.178530] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8da980 (9): Bad file descriptor
00:26:35.463 [2024-07-15 22:42:59.178693] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:35.463 [2024-07-15 22:42:59.178700] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:35.463 [2024-07-15 22:42:59.178706] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:35.463 [2024-07-15 22:42:59.181399] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:35.463 [2024-07-15 22:42:59.190773] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:35.463 [2024-07-15 22:42:59.191218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:35.463 [2024-07-15 22:42:59.191238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8da980 with addr=10.0.0.2, port=4420
00:26:35.463 [2024-07-15 22:42:59.191245] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8da980 is same with the state(5) to be set
00:26:35.463 [2024-07-15 22:42:59.191431] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8da980 (9): Bad file descriptor
00:26:35.463 [2024-07-15 22:42:59.191603] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:35.463 [2024-07-15 22:42:59.191610] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:35.463 [2024-07-15 22:42:59.191616] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:35.463 [2024-07-15 22:42:59.194327] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:35.463 [2024-07-15 22:42:59.203701] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:35.463 [2024-07-15 22:42:59.204165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:35.463 [2024-07-15 22:42:59.204180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8da980 with addr=10.0.0.2, port=4420
00:26:35.463 [2024-07-15 22:42:59.204187] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8da980 is same with the state(5) to be set
00:26:35.463 [2024-07-15 22:42:59.204365] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8da980 (9): Bad file descriptor
00:26:35.463 [2024-07-15 22:42:59.204536] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:35.463 [2024-07-15 22:42:59.204544] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:35.463 [2024-07-15 22:42:59.204550] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:35.463 [2024-07-15 22:42:59.207221] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:35.463 [2024-07-15 22:42:59.216558] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:35.463 [2024-07-15 22:42:59.217045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:35.463 [2024-07-15 22:42:59.217087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8da980 with addr=10.0.0.2, port=4420
00:26:35.463 [2024-07-15 22:42:59.217107] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8da980 is same with the state(5) to be set
00:26:35.463 [2024-07-15 22:42:59.217654] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8da980 (9): Bad file descriptor
00:26:35.463 [2024-07-15 22:42:59.217826] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:35.463 [2024-07-15 22:42:59.217833] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:35.463 [2024-07-15 22:42:59.217842] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:35.463 [2024-07-15 22:42:59.220511] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:35.463 [2024-07-15 22:42:59.229476] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:35.463 [2024-07-15 22:42:59.229919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:35.463 [2024-07-15 22:42:59.229934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8da980 with addr=10.0.0.2, port=4420
00:26:35.463 [2024-07-15 22:42:59.229940] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8da980 is same with the state(5) to be set
00:26:35.463 [2024-07-15 22:42:59.230102] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8da980 (9): Bad file descriptor
00:26:35.463 [2024-07-15 22:42:59.230287] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:35.463 [2024-07-15 22:42:59.230295] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:35.463 [2024-07-15 22:42:59.230302] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:35.463 [2024-07-15 22:42:59.232973] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:35.463 [2024-07-15 22:42:59.242287] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:35.463 [2024-07-15 22:42:59.242768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:35.463 [2024-07-15 22:42:59.242809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8da980 with addr=10.0.0.2, port=4420
00:26:35.463 [2024-07-15 22:42:59.242830] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8da980 is same with the state(5) to be set
00:26:35.463 [2024-07-15 22:42:59.243425] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8da980 (9): Bad file descriptor
00:26:35.463 [2024-07-15 22:42:59.243598] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:35.463 [2024-07-15 22:42:59.243605] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:35.463 [2024-07-15 22:42:59.243611] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:35.463 [2024-07-15 22:42:59.246284] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:35.463 [2024-07-15 22:42:59.255205] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:35.463 [2024-07-15 22:42:59.255650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:35.463 [2024-07-15 22:42:59.255665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8da980 with addr=10.0.0.2, port=4420
00:26:35.463 [2024-07-15 22:42:59.255671] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8da980 is same with the state(5) to be set
00:26:35.463 [2024-07-15 22:42:59.255833] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8da980 (9): Bad file descriptor
00:26:35.463 [2024-07-15 22:42:59.255996] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:35.463 [2024-07-15 22:42:59.256003] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:35.463 [2024-07-15 22:42:59.256009] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:35.463 [2024-07-15 22:42:59.258754] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:35.463 [2024-07-15 22:42:59.268172] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:35.463 [2024-07-15 22:42:59.268645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:35.463 [2024-07-15 22:42:59.268686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8da980 with addr=10.0.0.2, port=4420
00:26:35.463 [2024-07-15 22:42:59.268707] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8da980 is same with the state(5) to be set
00:26:35.463 [2024-07-15 22:42:59.269298] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8da980 (9): Bad file descriptor
00:26:35.463 [2024-07-15 22:42:59.269499] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:35.463 [2024-07-15 22:42:59.269506] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:35.463 [2024-07-15 22:42:59.269512] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:35.463 [2024-07-15 22:42:59.272187] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:35.463 [2024-07-15 22:42:59.280963] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:35.463 [2024-07-15 22:42:59.281405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:35.464 [2024-07-15 22:42:59.281420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8da980 with addr=10.0.0.2, port=4420
00:26:35.464 [2024-07-15 22:42:59.281426] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8da980 is same with the state(5) to be set
00:26:35.464 [2024-07-15 22:42:59.281588] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8da980 (9): Bad file descriptor
00:26:35.464 [2024-07-15 22:42:59.281751] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:35.464 [2024-07-15 22:42:59.281757] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:35.464 [2024-07-15 22:42:59.281763] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:35.464 [2024-07-15 22:42:59.284457] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:35.464 [2024-07-15 22:42:59.293830] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:35.464 [2024-07-15 22:42:59.294304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:35.464 [2024-07-15 22:42:59.294347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8da980 with addr=10.0.0.2, port=4420
00:26:35.464 [2024-07-15 22:42:59.294368] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8da980 is same with the state(5) to be set
00:26:35.464 [2024-07-15 22:42:59.294638] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8da980 (9): Bad file descriptor
00:26:35.464 [2024-07-15 22:42:59.294810] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:35.464 [2024-07-15 22:42:59.294818] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:35.464 [2024-07-15 22:42:59.294824] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:35.464 [2024-07-15 22:42:59.297613] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:35.464 [2024-07-15 22:42:59.306712] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:35.464 [2024-07-15 22:42:59.307154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:35.464 [2024-07-15 22:42:59.307168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8da980 with addr=10.0.0.2, port=4420
00:26:35.464 [2024-07-15 22:42:59.307174] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8da980 is same with the state(5) to be set
00:26:35.464 [2024-07-15 22:42:59.307363] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8da980 (9): Bad file descriptor
00:26:35.464 [2024-07-15 22:42:59.307539] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:35.464 [2024-07-15 22:42:59.307546] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:35.464 [2024-07-15 22:42:59.307552] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:35.464 [2024-07-15 22:42:59.310223] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:35.464 [2024-07-15 22:42:59.319562] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:35.464 [2024-07-15 22:42:59.319953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:35.464 [2024-07-15 22:42:59.319968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8da980 with addr=10.0.0.2, port=4420
00:26:35.464 [2024-07-15 22:42:59.319975] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8da980 is same with the state(5) to be set
00:26:35.464 [2024-07-15 22:42:59.320146] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8da980 (9): Bad file descriptor
00:26:35.464 [2024-07-15 22:42:59.320324] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:35.464 [2024-07-15 22:42:59.320332] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:35.464 [2024-07-15 22:42:59.320338] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:35.464 [2024-07-15 22:42:59.323013] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:35.464 [2024-07-15 22:42:59.332355] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:35.464 [2024-07-15 22:42:59.332836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:35.464 [2024-07-15 22:42:59.332878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8da980 with addr=10.0.0.2, port=4420
00:26:35.464 [2024-07-15 22:42:59.332898] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8da980 is same with the state(5) to be set
00:26:35.464 [2024-07-15 22:42:59.333491] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8da980 (9): Bad file descriptor
00:26:35.464 [2024-07-15 22:42:59.333941] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:35.464 [2024-07-15 22:42:59.333949] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:35.464 [2024-07-15 22:42:59.333955] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:35.464 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 153906 Killed "${NVMF_APP[@]}" "$@"
00:26:35.464 22:42:59 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init
00:26:35.464 22:42:59 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE
00:26:35.464 22:42:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:26:35.464 22:42:59 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@722 -- # xtrace_disable
00:26:35.464 22:42:59 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:26:35.464 [2024-07-15 22:42:59.336780] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:35.464 22:42:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=155313
00:26:35.464 22:42:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 155313
00:26:35.464 22:42:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:26:35.464 22:42:59 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@829 -- # '[' -z 155313 ']'
00:26:35.464 22:42:59 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:26:35.464 22:42:59 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@834 -- # local max_retries=100
00:26:35.464 22:42:59 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:26:35.464 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
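The traced tgt_init/nvmfappstart/waitforlisten sequence above amounts to a start-and-poll pattern: launch nvmf_tgt inside the test netns, record its PID, then poll the RPC socket until the target answers. A condensed bash sketch of that flow, using the command line and socket path shown in this run (the loop body is a paraphrase of the waitforlisten helper, not its exact code):

    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
    nvmfpid=$!
    # Poll until the target answers on its RPC socket (max_retries=100 as traced above).
    max_retries=100
    until /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
          -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; do
        kill -0 "$nvmfpid" 2> /dev/null || exit 1   # target died before it started listening
        (( max_retries-- > 0 )) || exit 1           # give up after too many attempts
        sleep 0.5
    done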
00:26:35.464 22:42:59 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@838 -- # xtrace_disable
00:26:35.464 22:42:59 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:26:35.464 [2024-07-15 22:42:59.345488] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:35.464 [2024-07-15 22:42:59.345967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:35.464 [2024-07-15 22:42:59.345983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8da980 with addr=10.0.0.2, port=4420
00:26:35.464 [2024-07-15 22:42:59.345990] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8da980 is same with the state(5) to be set
00:26:35.464 [2024-07-15 22:42:59.346166] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8da980 (9): Bad file descriptor
00:26:35.464 [2024-07-15 22:42:59.346351] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:35.464 [2024-07-15 22:42:59.346359] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:35.464 [2024-07-15 22:42:59.346366] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:35.464 [2024-07-15 22:42:59.349197] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:35.464 [2024-07-15 22:42:59.358576] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:35.464 [2024-07-15 22:42:59.359029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:35.464 [2024-07-15 22:42:59.359044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8da980 with addr=10.0.0.2, port=4420
00:26:35.464 [2024-07-15 22:42:59.359051] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8da980 is same with the state(5) to be set
00:26:35.464 [2024-07-15 22:42:59.359233] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8da980 (9): Bad file descriptor
00:26:35.464 [2024-07-15 22:42:59.359411] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:35.464 [2024-07-15 22:42:59.359418] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:35.464 [2024-07-15 22:42:59.359425] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:35.464 [2024-07-15 22:42:59.362258] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:35.464 [2024-07-15 22:42:59.371553] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:35.464 [2024-07-15 22:42:59.371951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:35.464 [2024-07-15 22:42:59.371966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8da980 with addr=10.0.0.2, port=4420
00:26:35.464 [2024-07-15 22:42:59.371973] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8da980 is same with the state(5) to be set
00:26:35.464 [2024-07-15 22:42:59.372144] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8da980 (9): Bad file descriptor
00:26:35.464 [2024-07-15 22:42:59.372321] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:35.464 [2024-07-15 22:42:59.372330] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:35.464 [2024-07-15 22:42:59.372336] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:35.464 [2024-07-15 22:42:59.375121] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:35.464 [2024-07-15 22:42:59.384552] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:35.464 [2024-07-15 22:42:59.385003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:35.464 [2024-07-15 22:42:59.385018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8da980 with addr=10.0.0.2, port=4420
00:26:35.464 [2024-07-15 22:42:59.385025] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8da980 is same with the state(5) to be set
00:26:35.464 [2024-07-15 22:42:59.385196] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8da980 (9): Bad file descriptor
00:26:35.464 [2024-07-15 22:42:59.385393] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:35.464 [2024-07-15 22:42:59.385401] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:35.464 [2024-07-15 22:42:59.385408] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:35.464 [2024-07-15 22:42:59.387101] Starting SPDK v24.09-pre git sha1 f8598a71f / DPDK 24.03.0 initialization...
00:26:35.464 [2024-07-15 22:42:59.387139] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:26:35.464 [2024-07-15 22:42:59.388194] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:35.465 [2024-07-15 22:42:59.397615] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:35.465 [2024-07-15 22:42:59.398075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:35.465 [2024-07-15 22:42:59.398090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8da980 with addr=10.0.0.2, port=4420
00:26:35.465 [2024-07-15 22:42:59.398097] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8da980 is same with the state(5) to be set
00:26:35.465 [2024-07-15 22:42:59.398279] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8da980 (9): Bad file descriptor
00:26:35.465 [2024-07-15 22:42:59.398456] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:35.465 [2024-07-15 22:42:59.398464] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:35.465 [2024-07-15 22:42:59.398470] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:35.465 [2024-07-15 22:42:59.401302] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:35.465 [2024-07-15 22:42:59.410667] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:35.465 [2024-07-15 22:42:59.411159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:35.465 [2024-07-15 22:42:59.411174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8da980 with addr=10.0.0.2, port=4420
00:26:35.465 [2024-07-15 22:42:59.411182] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8da980 is same with the state(5) to be set
00:26:35.465 [2024-07-15 22:42:59.411364] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8da980 (9): Bad file descriptor
00:26:35.465 [2024-07-15 22:42:59.411541] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:35.465 [2024-07-15 22:42:59.411549] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:35.465 [2024-07-15 22:42:59.411555] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:35.465 EAL: No free 2048 kB hugepages reported on node 1
00:26:35.465 [2024-07-15 22:42:59.414497] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:35.465 [2024-07-15 22:42:59.423843] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:35.465 [2024-07-15 22:42:59.424355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:35.465 [2024-07-15 22:42:59.424371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8da980 with addr=10.0.0.2, port=4420
00:26:35.465 [2024-07-15 22:42:59.424379] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8da980 is same with the state(5) to be set
00:26:35.465 [2024-07-15 22:42:59.424555] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8da980 (9): Bad file descriptor
00:26:35.465 [2024-07-15 22:42:59.424733] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:35.465 [2024-07-15 22:42:59.424741] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:35.465 [2024-07-15 22:42:59.424747] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:35.465 [2024-07-15 22:42:59.427599] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:35.724 [2024-07-15 22:42:59.436931] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:35.724 [2024-07-15 22:42:59.437270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:35.725 [2024-07-15 22:42:59.437286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8da980 with addr=10.0.0.2, port=4420
00:26:35.725 [2024-07-15 22:42:59.437293] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8da980 is same with the state(5) to be set
00:26:35.725 [2024-07-15 22:42:59.437470] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8da980 (9): Bad file descriptor
00:26:35.725 [2024-07-15 22:42:59.437647] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:35.725 [2024-07-15 22:42:59.437655] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:35.725 [2024-07-15 22:42:59.437661] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:35.725 [2024-07-15 22:42:59.440507] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:35.725 [2024-07-15 22:42:59.446007] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3
00:26:35.725 [2024-07-15 22:42:59.449977] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:35.725 [2024-07-15 22:42:59.450458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:35.725 [2024-07-15 22:42:59.450474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8da980 with addr=10.0.0.2, port=4420
00:26:35.725 [2024-07-15 22:42:59.450482] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8da980 is same with the state(5) to be set
00:26:35.725 [2024-07-15 22:42:59.450654] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8da980 (9): Bad file descriptor
00:26:35.725 [2024-07-15 22:42:59.450826] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:35.725 [2024-07-15 22:42:59.450835] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:35.725 [2024-07-15 22:42:59.450842] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:35.725 [2024-07-15 22:42:59.453596] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:35.725 [2024-07-15 22:42:59.463004] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:35.725 [2024-07-15 22:42:59.463462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:35.725 [2024-07-15 22:42:59.463478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8da980 with addr=10.0.0.2, port=4420
00:26:35.725 [2024-07-15 22:42:59.463490] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8da980 is same with the state(5) to be set
00:26:35.725 [2024-07-15 22:42:59.463661] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8da980 (9): Bad file descriptor
00:26:35.725 [2024-07-15 22:42:59.463834] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:35.725 [2024-07-15 22:42:59.463842] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:35.725 [2024-07-15 22:42:59.463848] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:35.725 [2024-07-15 22:42:59.466652] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:35.725 [2024-07-15 22:42:59.476092] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:35.725 [2024-07-15 22:42:59.476582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:35.725 [2024-07-15 22:42:59.476598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8da980 with addr=10.0.0.2, port=4420
00:26:35.725 [2024-07-15 22:42:59.476605] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8da980 is same with the state(5) to be set
00:26:35.725 [2024-07-15 22:42:59.476777] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8da980 (9): Bad file descriptor
00:26:35.725 [2024-07-15 22:42:59.476949] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:35.725 [2024-07-15 22:42:59.476957] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:35.725 [2024-07-15 22:42:59.476964] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:35.725 [2024-07-15 22:42:59.479771] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:35.725 [2024-07-15 22:42:59.489178] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:35.725 [2024-07-15 22:42:59.489696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:35.725 [2024-07-15 22:42:59.489717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8da980 with addr=10.0.0.2, port=4420
00:26:35.725 [2024-07-15 22:42:59.489726] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8da980 is same with the state(5) to be set
00:26:35.725 [2024-07-15 22:42:59.489900] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8da980 (9): Bad file descriptor
00:26:35.725 [2024-07-15 22:42:59.490073] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:35.725 [2024-07-15 22:42:59.490081] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:35.725 [2024-07-15 22:42:59.490089] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:35.725 [2024-07-15 22:42:59.492898] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:35.725 [2024-07-15 22:42:59.502155] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:35.725 [2024-07-15 22:42:59.502658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:35.725 [2024-07-15 22:42:59.502675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8da980 with addr=10.0.0.2, port=4420
00:26:35.725 [2024-07-15 22:42:59.502683] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8da980 is same with the state(5) to be set
00:26:35.725 [2024-07-15 22:42:59.502860] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8da980 (9): Bad file descriptor
00:26:35.725 [2024-07-15 22:42:59.503038] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:35.725 [2024-07-15 22:42:59.503055] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:35.725 [2024-07-15 22:42:59.503062] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:35.725 [2024-07-15 22:42:59.505836] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:35.725 [2024-07-15 22:42:59.515207] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:35.725 [2024-07-15 22:42:59.515607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:35.725 [2024-07-15 22:42:59.515623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8da980 with addr=10.0.0.2, port=4420
00:26:35.725 [2024-07-15 22:42:59.515631] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8da980 is same with the state(5) to be set
00:26:35.725 [2024-07-15 22:42:59.515807] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8da980 (9): Bad file descriptor
00:26:35.725 [2024-07-15 22:42:59.515983] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:35.725 [2024-07-15 22:42:59.515991] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:35.725 [2024-07-15 22:42:59.515998] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:35.725 [2024-07-15 22:42:59.518830] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:35.725 [2024-07-15 22:42:59.521208] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:26:35.725 [2024-07-15 22:42:59.521238] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:26:35.725 [2024-07-15 22:42:59.521245] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:26:35.725 [2024-07-15 22:42:59.521268] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:26:35.725 [2024-07-15 22:42:59.521273] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
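The app_setup_trace notices above name the two supported ways to pull the tracepoint data out of this target instance. A minimal sketch of both, taken directly from those messages (the copy destination path is just an example):

    # Snapshot the nvmf trace group of app instance 0 while it is running.
    spdk_trace -s nvmf -i 0
    # Or preserve the shared-memory trace file for offline analysis/debug.
    cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0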
00:26:35.725 [2024-07-15 22:42:59.521315] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:26:35.725 [2024-07-15 22:42:59.521519] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3
00:26:35.725 [2024-07-15 22:42:59.521521] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:26:35.725 [2024-07-15 22:42:59.528386] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:35.725 [2024-07-15 22:42:59.528916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:35.725 [2024-07-15 22:42:59.528936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8da980 with addr=10.0.0.2, port=4420
00:26:35.725 [2024-07-15 22:42:59.528945] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8da980 is same with the state(5) to be set
00:26:35.725 [2024-07-15 22:42:59.529124] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8da980 (9): Bad file descriptor
00:26:35.725 [2024-07-15 22:42:59.529309] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:35.725 [2024-07-15 22:42:59.529318] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:35.725 [2024-07-15 22:42:59.529325] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:35.725 [2024-07-15 22:42:59.532153] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:35.725 [2024-07-15 22:42:59.541529] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:35.725 [2024-07-15 22:42:59.542013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:35.725 [2024-07-15 22:42:59.542033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8da980 with addr=10.0.0.2, port=4420
00:26:35.725 [2024-07-15 22:42:59.542046] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8da980 is same with the state(5) to be set
00:26:35.725 [2024-07-15 22:42:59.542223] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8da980 (9): Bad file descriptor
00:26:35.725 [2024-07-15 22:42:59.542409] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:35.725 [2024-07-15 22:42:59.542417] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:35.725 [2024-07-15 22:42:59.542425] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:35.725 [2024-07-15 22:42:59.545259] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:35.725 [2024-07-15 22:42:59.554628] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:35.725 [2024-07-15 22:42:59.555110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:35.725 [2024-07-15 22:42:59.555130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8da980 with addr=10.0.0.2, port=4420
00:26:35.725 [2024-07-15 22:42:59.555137] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8da980 is same with the state(5) to be set
00:26:35.725 [2024-07-15 22:42:59.555321] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8da980 (9): Bad file descriptor
00:26:35.725 [2024-07-15 22:42:59.555502] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:35.725 [2024-07-15 22:42:59.555510] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:35.725 [2024-07-15 22:42:59.555517] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:35.726 [2024-07-15 22:42:59.558350] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:35.726 [2024-07-15 22:42:59.567710] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:35.726 [2024-07-15 22:42:59.568196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:35.726 [2024-07-15 22:42:59.568215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8da980 with addr=10.0.0.2, port=4420
00:26:35.726 [2024-07-15 22:42:59.568223] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8da980 is same with the state(5) to be set
00:26:35.726 [2024-07-15 22:42:59.568408] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8da980 (9): Bad file descriptor
00:26:35.726 [2024-07-15 22:42:59.568587] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:35.726 [2024-07-15 22:42:59.568596] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:35.726 [2024-07-15 22:42:59.568604] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:35.726 [2024-07-15 22:42:59.571444] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:35.726 [2024-07-15 22:42:59.580818] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:35.726 [2024-07-15 22:42:59.581263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:35.726 [2024-07-15 22:42:59.581282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8da980 with addr=10.0.0.2, port=4420
00:26:35.726 [2024-07-15 22:42:59.581290] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8da980 is same with the state(5) to be set
00:26:35.726 [2024-07-15 22:42:59.581469] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8da980 (9): Bad file descriptor
00:26:35.726 [2024-07-15 22:42:59.581647] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:35.726 [2024-07-15 22:42:59.581661] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:35.726 [2024-07-15 22:42:59.581668] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:35.726 [2024-07-15 22:42:59.584503] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:35.726 [2024-07-15 22:42:59.593869] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:35.726 [2024-07-15 22:42:59.594332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:35.726 [2024-07-15 22:42:59.594349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8da980 with addr=10.0.0.2, port=4420
00:26:35.726 [2024-07-15 22:42:59.594356] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8da980 is same with the state(5) to be set
00:26:35.726 [2024-07-15 22:42:59.594534] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8da980 (9): Bad file descriptor
00:26:35.726 [2024-07-15 22:42:59.594711] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:35.726 [2024-07-15 22:42:59.594720] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:35.726 [2024-07-15 22:42:59.594727] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:35.726 [2024-07-15 22:42:59.597561] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:35.726 [2024-07-15 22:42:59.606918] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:35.726 [2024-07-15 22:42:59.607396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:35.726 [2024-07-15 22:42:59.607413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8da980 with addr=10.0.0.2, port=4420
00:26:35.726 [2024-07-15 22:42:59.607420] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8da980 is same with the state(5) to be set
00:26:35.726 [2024-07-15 22:42:59.607597] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8da980 (9): Bad file descriptor
00:26:35.726 [2024-07-15 22:42:59.607775] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:35.726 [2024-07-15 22:42:59.607783] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:35.726 [2024-07-15 22:42:59.607790] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:35.726 [2024-07-15 22:42:59.610624] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:35.726 [2024-07-15 22:42:59.619987] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:35.726 [2024-07-15 22:42:59.620442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:35.726 [2024-07-15 22:42:59.620459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8da980 with addr=10.0.0.2, port=4420
00:26:35.726 [2024-07-15 22:42:59.620466] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8da980 is same with the state(5) to be set
00:26:35.726 [2024-07-15 22:42:59.620643] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8da980 (9): Bad file descriptor
00:26:35.726 [2024-07-15 22:42:59.620821] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:35.726 [2024-07-15 22:42:59.620830] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:35.726 [2024-07-15 22:42:59.620836] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:35.726 [2024-07-15 22:42:59.623675] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:35.726 [2024-07-15 22:42:59.633046] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:35.726 [2024-07-15 22:42:59.633485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:35.726 [2024-07-15 22:42:59.633502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8da980 with addr=10.0.0.2, port=4420
00:26:35.726 [2024-07-15 22:42:59.633508] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8da980 is same with the state(5) to be set
00:26:35.726 [2024-07-15 22:42:59.633685] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8da980 (9): Bad file descriptor
00:26:35.726 [2024-07-15 22:42:59.633863] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:35.726 [2024-07-15 22:42:59.633871] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:35.726 [2024-07-15 22:42:59.633878] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:35.726 [2024-07-15 22:42:59.636712] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:35.726 [2024-07-15 22:42:59.646242] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:35.726 [2024-07-15 22:42:59.646726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:35.726 [2024-07-15 22:42:59.646742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8da980 with addr=10.0.0.2, port=4420
00:26:35.726 [2024-07-15 22:42:59.646748] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8da980 is same with the state(5) to be set
00:26:35.726 [2024-07-15 22:42:59.646925] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8da980 (9): Bad file descriptor
00:26:35.726 [2024-07-15 22:42:59.647102] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:35.726 [2024-07-15 22:42:59.647111] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:35.726 [2024-07-15 22:42:59.647117] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:35.726 [2024-07-15 22:42:59.649967] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:35.726 [2024-07-15 22:42:59.659326] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:35.726 [2024-07-15 22:42:59.659787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:35.726 [2024-07-15 22:42:59.659804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8da980 with addr=10.0.0.2, port=4420
00:26:35.726 [2024-07-15 22:42:59.659811] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8da980 is same with the state(5) to be set
00:26:35.726 [2024-07-15 22:42:59.659987] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8da980 (9): Bad file descriptor
00:26:35.726 [2024-07-15 22:42:59.660164] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:35.726 [2024-07-15 22:42:59.660172] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:35.726 [2024-07-15 22:42:59.660178] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:35.726 [2024-07-15 22:42:59.663011] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:35.726 [2024-07-15 22:42:59.672374] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:35.726 [2024-07-15 22:42:59.672793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:35.726 [2024-07-15 22:42:59.672809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8da980 with addr=10.0.0.2, port=4420
00:26:35.726 [2024-07-15 22:42:59.672816] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8da980 is same with the state(5) to be set
00:26:35.726 [2024-07-15 22:42:59.672996] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8da980 (9): Bad file descriptor
00:26:35.726 [2024-07-15 22:42:59.673173] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:35.726 [2024-07-15 22:42:59.673181] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:35.726 [2024-07-15 22:42:59.673188] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:35.726 [2024-07-15 22:42:59.676031] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:35.726 [2024-07-15 22:42:59.685553] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:35.726 [2024-07-15 22:42:59.686006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:35.726 [2024-07-15 22:42:59.686022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8da980 with addr=10.0.0.2, port=4420
00:26:35.726 [2024-07-15 22:42:59.686029] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8da980 is same with the state(5) to be set
00:26:35.726 [2024-07-15 22:42:59.686205] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8da980 (9): Bad file descriptor
00:26:35.726 [2024-07-15 22:42:59.686387] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:35.726 [2024-07-15 22:42:59.686396] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:35.726 [2024-07-15 22:42:59.686402] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:35.726 [2024-07-15 22:42:59.689230] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:35.986 [2024-07-15 22:42:59.698751] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:35.986 [2024-07-15 22:42:59.699160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:35.986 [2024-07-15 22:42:59.699176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8da980 with addr=10.0.0.2, port=4420
00:26:35.986 [2024-07-15 22:42:59.699183] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8da980 is same with the state(5) to be set
00:26:35.986 [2024-07-15 22:42:59.699364] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8da980 (9): Bad file descriptor
00:26:35.986 [2024-07-15 22:42:59.699541] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:35.986 [2024-07-15 22:42:59.699549] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:35.986 [2024-07-15 22:42:59.699556] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:35.986 [2024-07-15 22:42:59.702386] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:35.986 [2024-07-15 22:42:59.711902] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:35.986 [2024-07-15 22:42:59.712340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:35.986 [2024-07-15 22:42:59.712356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8da980 with addr=10.0.0.2, port=4420 00:26:35.986 [2024-07-15 22:42:59.712362] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8da980 is same with the state(5) to be set 00:26:35.986 [2024-07-15 22:42:59.712540] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8da980 (9): Bad file descriptor 00:26:35.986 [2024-07-15 22:42:59.712718] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:35.986 [2024-07-15 22:42:59.712726] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:35.986 [2024-07-15 22:42:59.712737] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:35.986 [2024-07-15 22:42:59.715572] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:35.986 [2024-07-15 22:42:59.725093] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:35.986 [2024-07-15 22:42:59.725532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:35.986 [2024-07-15 22:42:59.725548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8da980 with addr=10.0.0.2, port=4420 00:26:35.986 [2024-07-15 22:42:59.725555] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8da980 is same with the state(5) to be set 00:26:35.986 [2024-07-15 22:42:59.725732] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8da980 (9): Bad file descriptor 00:26:35.986 [2024-07-15 22:42:59.725909] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:35.986 [2024-07-15 22:42:59.725917] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:35.987 [2024-07-15 22:42:59.725924] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:35.987 [2024-07-15 22:42:59.728753] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:35.987 [2024-07-15 22:42:59.738275] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:35.987 [2024-07-15 22:42:59.738732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:35.987 [2024-07-15 22:42:59.738748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8da980 with addr=10.0.0.2, port=4420 00:26:35.987 [2024-07-15 22:42:59.738755] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8da980 is same with the state(5) to be set 00:26:35.987 [2024-07-15 22:42:59.738931] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8da980 (9): Bad file descriptor 00:26:35.987 [2024-07-15 22:42:59.739108] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:35.987 [2024-07-15 22:42:59.739116] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:35.987 [2024-07-15 22:42:59.739123] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:35.987 [2024-07-15 22:42:59.741950] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:35.987 [2024-07-15 22:42:59.751466] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:35.987 [2024-07-15 22:42:59.751927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:35.987 [2024-07-15 22:42:59.751942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8da980 with addr=10.0.0.2, port=4420 00:26:35.987 [2024-07-15 22:42:59.751949] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8da980 is same with the state(5) to be set 00:26:35.987 [2024-07-15 22:42:59.752126] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8da980 (9): Bad file descriptor 00:26:35.987 [2024-07-15 22:42:59.752308] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:35.987 [2024-07-15 22:42:59.752316] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:35.987 [2024-07-15 22:42:59.752323] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:35.987 [2024-07-15 22:42:59.755151] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:35.987 [2024-07-15 22:42:59.764511] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:35.987 [2024-07-15 22:42:59.764970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:35.987 [2024-07-15 22:42:59.764985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8da980 with addr=10.0.0.2, port=4420 00:26:35.987 [2024-07-15 22:42:59.764992] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8da980 is same with the state(5) to be set 00:26:35.987 [2024-07-15 22:42:59.765169] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8da980 (9): Bad file descriptor 00:26:35.987 [2024-07-15 22:42:59.765351] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:35.987 [2024-07-15 22:42:59.765360] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:35.987 [2024-07-15 22:42:59.765366] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:35.987 [2024-07-15 22:42:59.768190] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:35.987 [2024-07-15 22:42:59.777553] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:35.987 [2024-07-15 22:42:59.777942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:35.987 [2024-07-15 22:42:59.777958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8da980 with addr=10.0.0.2, port=4420 00:26:35.987 [2024-07-15 22:42:59.777965] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8da980 is same with the state(5) to be set 00:26:35.987 [2024-07-15 22:42:59.778143] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8da980 (9): Bad file descriptor 00:26:35.987 [2024-07-15 22:42:59.778323] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:35.987 [2024-07-15 22:42:59.778332] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:35.987 [2024-07-15 22:42:59.778339] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:35.987 [2024-07-15 22:42:59.781165] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:35.987 [2024-07-15 22:42:59.790678] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:35.987 [2024-07-15 22:42:59.791132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:35.987 [2024-07-15 22:42:59.791148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8da980 with addr=10.0.0.2, port=4420 00:26:35.987 [2024-07-15 22:42:59.791155] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8da980 is same with the state(5) to be set 00:26:35.987 [2024-07-15 22:42:59.791336] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8da980 (9): Bad file descriptor 00:26:35.987 [2024-07-15 22:42:59.791513] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:35.987 [2024-07-15 22:42:59.791521] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:35.987 [2024-07-15 22:42:59.791527] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:35.987 [2024-07-15 22:42:59.794358] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:35.987 [2024-07-15 22:42:59.803716] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:35.987 [2024-07-15 22:42:59.804172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:35.987 [2024-07-15 22:42:59.804187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8da980 with addr=10.0.0.2, port=4420 00:26:35.987 [2024-07-15 22:42:59.804194] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8da980 is same with the state(5) to be set 00:26:35.987 [2024-07-15 22:42:59.804374] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8da980 (9): Bad file descriptor 00:26:35.987 [2024-07-15 22:42:59.804555] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:35.987 [2024-07-15 22:42:59.804563] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:35.987 [2024-07-15 22:42:59.804570] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:35.987 [2024-07-15 22:42:59.807399] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:35.987 [2024-07-15 22:42:59.816783] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:35.987 [2024-07-15 22:42:59.817242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:35.987 [2024-07-15 22:42:59.817258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8da980 with addr=10.0.0.2, port=4420 00:26:35.987 [2024-07-15 22:42:59.817265] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8da980 is same with the state(5) to be set 00:26:35.987 [2024-07-15 22:42:59.817442] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8da980 (9): Bad file descriptor 00:26:35.987 [2024-07-15 22:42:59.817619] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:35.987 [2024-07-15 22:42:59.817627] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:35.987 [2024-07-15 22:42:59.817634] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:35.987 [2024-07-15 22:42:59.820467] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:35.987 [2024-07-15 22:42:59.829818] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:35.987 [2024-07-15 22:42:59.830280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:35.987 [2024-07-15 22:42:59.830296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8da980 with addr=10.0.0.2, port=4420 00:26:35.987 [2024-07-15 22:42:59.830303] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8da980 is same with the state(5) to be set 00:26:35.987 [2024-07-15 22:42:59.830480] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8da980 (9): Bad file descriptor 00:26:35.987 [2024-07-15 22:42:59.830658] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:35.987 [2024-07-15 22:42:59.830666] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:35.987 [2024-07-15 22:42:59.830672] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:35.987 [2024-07-15 22:42:59.833520] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:35.987 [2024-07-15 22:42:59.842875] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:35.987 [2024-07-15 22:42:59.843332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:35.987 [2024-07-15 22:42:59.843349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8da980 with addr=10.0.0.2, port=4420 00:26:35.987 [2024-07-15 22:42:59.843356] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8da980 is same with the state(5) to be set 00:26:35.987 [2024-07-15 22:42:59.843534] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8da980 (9): Bad file descriptor 00:26:35.987 [2024-07-15 22:42:59.843711] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:35.987 [2024-07-15 22:42:59.843719] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:35.987 [2024-07-15 22:42:59.843725] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:35.987 [2024-07-15 22:42:59.846555] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:35.987 [2024-07-15 22:42:59.856074] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:35.987 [2024-07-15 22:42:59.856534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:35.987 [2024-07-15 22:42:59.856550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8da980 with addr=10.0.0.2, port=4420 00:26:35.987 [2024-07-15 22:42:59.856557] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8da980 is same with the state(5) to be set 00:26:35.987 [2024-07-15 22:42:59.856734] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8da980 (9): Bad file descriptor 00:26:35.987 [2024-07-15 22:42:59.856911] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:35.987 [2024-07-15 22:42:59.856919] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:35.987 [2024-07-15 22:42:59.856925] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:35.987 [2024-07-15 22:42:59.859753] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:35.987 [2024-07-15 22:42:59.869272] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:35.987 [2024-07-15 22:42:59.869706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:35.987 [2024-07-15 22:42:59.869722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8da980 with addr=10.0.0.2, port=4420 00:26:35.987 [2024-07-15 22:42:59.869729] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8da980 is same with the state(5) to be set 00:26:35.987 [2024-07-15 22:42:59.869906] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8da980 (9): Bad file descriptor 00:26:35.987 [2024-07-15 22:42:59.870083] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:35.988 [2024-07-15 22:42:59.870091] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:35.988 [2024-07-15 22:42:59.870098] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:35.988 [2024-07-15 22:42:59.872930] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:35.988 [2024-07-15 22:42:59.882466] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:35.988 [2024-07-15 22:42:59.882839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:35.988 [2024-07-15 22:42:59.882855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8da980 with addr=10.0.0.2, port=4420 00:26:35.988 [2024-07-15 22:42:59.882862] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8da980 is same with the state(5) to be set 00:26:35.988 [2024-07-15 22:42:59.883039] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8da980 (9): Bad file descriptor 00:26:35.988 [2024-07-15 22:42:59.883216] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:35.988 [2024-07-15 22:42:59.883228] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:35.988 [2024-07-15 22:42:59.883235] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:35.988 [2024-07-15 22:42:59.886059] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:35.988 [2024-07-15 22:42:59.895568] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:35.988 [2024-07-15 22:42:59.896021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:35.988 [2024-07-15 22:42:59.896036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8da980 with addr=10.0.0.2, port=4420 00:26:35.988 [2024-07-15 22:42:59.896046] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8da980 is same with the state(5) to be set 00:26:35.988 [2024-07-15 22:42:59.896222] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8da980 (9): Bad file descriptor 00:26:35.988 [2024-07-15 22:42:59.896405] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:35.988 [2024-07-15 22:42:59.896413] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:35.988 [2024-07-15 22:42:59.896419] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:35.988 [2024-07-15 22:42:59.899250] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:35.988 [2024-07-15 22:42:59.908767] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:35.988 [2024-07-15 22:42:59.909199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:35.988 [2024-07-15 22:42:59.909215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8da980 with addr=10.0.0.2, port=4420 00:26:35.988 [2024-07-15 22:42:59.909222] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8da980 is same with the state(5) to be set 00:26:35.988 [2024-07-15 22:42:59.909403] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8da980 (9): Bad file descriptor 00:26:35.988 [2024-07-15 22:42:59.909580] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:35.988 [2024-07-15 22:42:59.909587] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:35.988 [2024-07-15 22:42:59.909594] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:35.988 [2024-07-15 22:42:59.912424] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:35.988 [2024-07-15 22:42:59.921942] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:35.988 [2024-07-15 22:42:59.922424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:35.988 [2024-07-15 22:42:59.922439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8da980 with addr=10.0.0.2, port=4420 00:26:35.988 [2024-07-15 22:42:59.922446] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8da980 is same with the state(5) to be set 00:26:35.988 [2024-07-15 22:42:59.922624] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8da980 (9): Bad file descriptor 00:26:35.988 [2024-07-15 22:42:59.922801] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:35.988 [2024-07-15 22:42:59.922809] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:35.988 [2024-07-15 22:42:59.922816] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:35.988 [2024-07-15 22:42:59.925647] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:35.988 [2024-07-15 22:42:59.934996] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:35.988 [2024-07-15 22:42:59.935448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:35.988 [2024-07-15 22:42:59.935464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8da980 with addr=10.0.0.2, port=4420 00:26:35.988 [2024-07-15 22:42:59.935471] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8da980 is same with the state(5) to be set 00:26:35.988 [2024-07-15 22:42:59.935648] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8da980 (9): Bad file descriptor 00:26:35.988 [2024-07-15 22:42:59.935825] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:35.988 [2024-07-15 22:42:59.935836] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:35.988 [2024-07-15 22:42:59.935842] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:35.988 [2024-07-15 22:42:59.938673] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:35.988 [2024-07-15 22:42:59.948194] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:35.988 [2024-07-15 22:42:59.948674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:35.988 [2024-07-15 22:42:59.948690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8da980 with addr=10.0.0.2, port=4420 00:26:35.988 [2024-07-15 22:42:59.948697] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8da980 is same with the state(5) to be set 00:26:35.988 [2024-07-15 22:42:59.948874] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8da980 (9): Bad file descriptor 00:26:35.988 [2024-07-15 22:42:59.949051] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:35.988 [2024-07-15 22:42:59.949059] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:35.988 [2024-07-15 22:42:59.949066] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:35.988 [2024-07-15 22:42:59.951895] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:36.247 [2024-07-15 22:42:59.961307] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:36.247 [2024-07-15 22:42:59.961769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:36.247 [2024-07-15 22:42:59.961787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8da980 with addr=10.0.0.2, port=4420 00:26:36.247 [2024-07-15 22:42:59.961795] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8da980 is same with the state(5) to be set 00:26:36.247 [2024-07-15 22:42:59.961972] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8da980 (9): Bad file descriptor 00:26:36.247 [2024-07-15 22:42:59.962149] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:36.247 [2024-07-15 22:42:59.962157] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:36.247 [2024-07-15 22:42:59.962164] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:36.247 [2024-07-15 22:42:59.965000] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:36.247 [2024-07-15 22:42:59.974378] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:36.247 [2024-07-15 22:42:59.974859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:36.247 [2024-07-15 22:42:59.974875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8da980 with addr=10.0.0.2, port=4420 00:26:36.247 [2024-07-15 22:42:59.974882] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8da980 is same with the state(5) to be set 00:26:36.247 [2024-07-15 22:42:59.975059] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8da980 (9): Bad file descriptor 00:26:36.247 [2024-07-15 22:42:59.975251] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:36.247 [2024-07-15 22:42:59.975260] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:36.247 [2024-07-15 22:42:59.975267] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:36.247 [2024-07-15 22:42:59.978094] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:36.247 [2024-07-15 22:42:59.987474] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:36.247 [2024-07-15 22:42:59.987904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:36.247 [2024-07-15 22:42:59.987920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8da980 with addr=10.0.0.2, port=4420 00:26:36.247 [2024-07-15 22:42:59.987927] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8da980 is same with the state(5) to be set 00:26:36.247 [2024-07-15 22:42:59.988104] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8da980 (9): Bad file descriptor 00:26:36.247 [2024-07-15 22:42:59.988286] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:36.247 [2024-07-15 22:42:59.988294] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:36.247 [2024-07-15 22:42:59.988301] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:36.247 [2024-07-15 22:42:59.991128] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:36.247 [2024-07-15 22:43:00.000666] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:36.248 [2024-07-15 22:43:00.001112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:36.248 [2024-07-15 22:43:00.001129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8da980 with addr=10.0.0.2, port=4420 00:26:36.248 [2024-07-15 22:43:00.001136] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8da980 is same with the state(5) to be set 00:26:36.248 [2024-07-15 22:43:00.001318] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8da980 (9): Bad file descriptor 00:26:36.248 [2024-07-15 22:43:00.001496] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:36.248 [2024-07-15 22:43:00.001504] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:36.248 [2024-07-15 22:43:00.001510] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:36.248 [2024-07-15 22:43:00.004399] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:36.248 [2024-07-15 22:43:00.013915] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:36.248 [2024-07-15 22:43:00.014436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:36.248 [2024-07-15 22:43:00.014454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8da980 with addr=10.0.0.2, port=4420 00:26:36.248 [2024-07-15 22:43:00.014463] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8da980 is same with the state(5) to be set 00:26:36.248 [2024-07-15 22:43:00.014641] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8da980 (9): Bad file descriptor 00:26:36.248 [2024-07-15 22:43:00.014819] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:36.248 [2024-07-15 22:43:00.014826] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:36.248 [2024-07-15 22:43:00.014833] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:36.248 [2024-07-15 22:43:00.017670] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:36.248 [2024-07-15 22:43:00.027044] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:36.248 [2024-07-15 22:43:00.027452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:36.248 [2024-07-15 22:43:00.027470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8da980 with addr=10.0.0.2, port=4420 00:26:36.248 [2024-07-15 22:43:00.027477] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8da980 is same with the state(5) to be set 00:26:36.248 [2024-07-15 22:43:00.027658] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8da980 (9): Bad file descriptor 00:26:36.248 [2024-07-15 22:43:00.027835] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:36.248 [2024-07-15 22:43:00.027843] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:36.248 [2024-07-15 22:43:00.027850] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:36.248 [2024-07-15 22:43:00.030685] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:36.248 [2024-07-15 22:43:00.040232] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:36.248 [2024-07-15 22:43:00.040712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:36.248 [2024-07-15 22:43:00.040729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8da980 with addr=10.0.0.2, port=4420 00:26:36.248 [2024-07-15 22:43:00.040737] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8da980 is same with the state(5) to be set 00:26:36.248 [2024-07-15 22:43:00.040914] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8da980 (9): Bad file descriptor 00:26:36.248 [2024-07-15 22:43:00.041092] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:36.248 [2024-07-15 22:43:00.041099] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:36.248 [2024-07-15 22:43:00.041105] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:36.248 [2024-07-15 22:43:00.043942] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:36.248 [2024-07-15 22:43:00.054831] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:36.248 [2024-07-15 22:43:00.055469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:36.248 [2024-07-15 22:43:00.055495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8da980 with addr=10.0.0.2, port=4420 00:26:36.248 [2024-07-15 22:43:00.055506] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8da980 is same with the state(5) to be set 00:26:36.248 [2024-07-15 22:43:00.055739] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8da980 (9): Bad file descriptor 00:26:36.248 [2024-07-15 22:43:00.055941] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:36.248 [2024-07-15 22:43:00.055950] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:36.248 [2024-07-15 22:43:00.055957] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:36.248 [2024-07-15 22:43:00.058862] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:36.248 [2024-07-15 22:43:00.068269] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:36.248 [2024-07-15 22:43:00.071348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:36.248 [2024-07-15 22:43:00.071366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8da980 with addr=10.0.0.2, port=4420 00:26:36.248 [2024-07-15 22:43:00.071374] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8da980 is same with the state(5) to be set 00:26:36.248 [2024-07-15 22:43:00.071552] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8da980 (9): Bad file descriptor 00:26:36.248 [2024-07-15 22:43:00.071731] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:36.248 [2024-07-15 22:43:00.071740] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:36.248 [2024-07-15 22:43:00.071751] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:36.248 [2024-07-15 22:43:00.074593] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:36.248 [2024-07-15 22:43:00.081463] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:36.248 [2024-07-15 22:43:00.081808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:36.248 [2024-07-15 22:43:00.081824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8da980 with addr=10.0.0.2, port=4420 00:26:36.248 [2024-07-15 22:43:00.081831] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8da980 is same with the state(5) to be set 00:26:36.248 [2024-07-15 22:43:00.082008] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8da980 (9): Bad file descriptor 00:26:36.248 [2024-07-15 22:43:00.082186] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:36.248 [2024-07-15 22:43:00.082195] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:36.248 [2024-07-15 22:43:00.082202] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:36.248 [2024-07-15 22:43:00.085039] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:36.248 [2024-07-15 22:43:00.094575] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:36.248 [2024-07-15 22:43:00.094986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:36.248 [2024-07-15 22:43:00.095002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8da980 with addr=10.0.0.2, port=4420 00:26:36.248 [2024-07-15 22:43:00.095009] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8da980 is same with the state(5) to be set 00:26:36.248 [2024-07-15 22:43:00.095186] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8da980 (9): Bad file descriptor 00:26:36.248 [2024-07-15 22:43:00.095369] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:36.248 [2024-07-15 22:43:00.095378] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:36.248 [2024-07-15 22:43:00.095385] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:36.248 [2024-07-15 22:43:00.098212] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:36.248 [2024-07-15 22:43:00.107750] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:36.248 [2024-07-15 22:43:00.108083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:36.248 [2024-07-15 22:43:00.108100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8da980 with addr=10.0.0.2, port=4420 00:26:36.248 [2024-07-15 22:43:00.108106] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8da980 is same with the state(5) to be set 00:26:36.248 [2024-07-15 22:43:00.108289] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8da980 (9): Bad file descriptor 00:26:36.248 [2024-07-15 22:43:00.108466] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:36.248 [2024-07-15 22:43:00.108474] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:36.248 [2024-07-15 22:43:00.108480] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:36.249 [2024-07-15 22:43:00.111317] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:36.249 [2024-07-15 22:43:00.120855] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:36.249 [2024-07-15 22:43:00.121319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:36.249 [2024-07-15 22:43:00.121335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8da980 with addr=10.0.0.2, port=4420 00:26:36.249 [2024-07-15 22:43:00.121342] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8da980 is same with the state(5) to be set 00:26:36.249 [2024-07-15 22:43:00.121520] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8da980 (9): Bad file descriptor 00:26:36.249 [2024-07-15 22:43:00.121698] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:36.249 [2024-07-15 22:43:00.121706] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:36.249 [2024-07-15 22:43:00.121713] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:36.249 [2024-07-15 22:43:00.124552] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:36.249 [2024-07-15 22:43:00.133917] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:36.249 [2024-07-15 22:43:00.134262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:36.249 [2024-07-15 22:43:00.134279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8da980 with addr=10.0.0.2, port=4420 00:26:36.249 [2024-07-15 22:43:00.134287] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8da980 is same with the state(5) to be set 00:26:36.249 [2024-07-15 22:43:00.134464] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8da980 (9): Bad file descriptor 00:26:36.249 [2024-07-15 22:43:00.134642] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:36.249 [2024-07-15 22:43:00.134651] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:36.249 [2024-07-15 22:43:00.134659] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:36.249 [2024-07-15 22:43:00.137494] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:36.249 [2024-07-15 22:43:00.147028] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:36.249 [2024-07-15 22:43:00.147489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:36.249 [2024-07-15 22:43:00.147506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8da980 with addr=10.0.0.2, port=4420 00:26:36.249 [2024-07-15 22:43:00.147513] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8da980 is same with the state(5) to be set 00:26:36.249 [2024-07-15 22:43:00.147689] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8da980 (9): Bad file descriptor 00:26:36.249 [2024-07-15 22:43:00.147866] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:36.249 [2024-07-15 22:43:00.147874] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:36.249 [2024-07-15 22:43:00.147881] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:36.249 [2024-07-15 22:43:00.150716] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:36.249 [2024-07-15 22:43:00.160084] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:36.249 [2024-07-15 22:43:00.160495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:36.249 [2024-07-15 22:43:00.160511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8da980 with addr=10.0.0.2, port=4420 00:26:36.249 [2024-07-15 22:43:00.160519] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8da980 is same with the state(5) to be set 00:26:36.249 [2024-07-15 22:43:00.160696] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8da980 (9): Bad file descriptor 00:26:36.249 [2024-07-15 22:43:00.160881] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:36.249 [2024-07-15 22:43:00.160889] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:36.249 [2024-07-15 22:43:00.160895] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:36.249 [2024-07-15 22:43:00.163733] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:36.249 [2024-07-15 22:43:00.173276] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:36.249 [2024-07-15 22:43:00.173673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:36.249 [2024-07-15 22:43:00.173689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8da980 with addr=10.0.0.2, port=4420 00:26:36.249 [2024-07-15 22:43:00.173696] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8da980 is same with the state(5) to be set 00:26:36.249 [2024-07-15 22:43:00.173872] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8da980 (9): Bad file descriptor 00:26:36.249 [2024-07-15 22:43:00.174050] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:36.249 [2024-07-15 22:43:00.174058] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:36.249 [2024-07-15 22:43:00.174064] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:36.249 [2024-07-15 22:43:00.176911] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:36.249 [2024-07-15 22:43:00.186447] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:36.249 [2024-07-15 22:43:00.186836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:36.249 [2024-07-15 22:43:00.186852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8da980 with addr=10.0.0.2, port=4420 00:26:36.249 [2024-07-15 22:43:00.186859] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8da980 is same with the state(5) to be set 00:26:36.249 [2024-07-15 22:43:00.187037] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8da980 (9): Bad file descriptor 00:26:36.249 [2024-07-15 22:43:00.187214] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:36.249 [2024-07-15 22:43:00.187222] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:36.249 [2024-07-15 22:43:00.187234] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:36.249 [2024-07-15 22:43:00.190060] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:36.249 [2024-07-15 22:43:00.199597] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:36.249 [2024-07-15 22:43:00.200016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:36.249 [2024-07-15 22:43:00.200031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8da980 with addr=10.0.0.2, port=4420 00:26:36.249 [2024-07-15 22:43:00.200038] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8da980 is same with the state(5) to be set 00:26:36.249 [2024-07-15 22:43:00.200215] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8da980 (9): Bad file descriptor 00:26:36.249 [2024-07-15 22:43:00.200396] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:36.249 [2024-07-15 22:43:00.200405] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:36.249 [2024-07-15 22:43:00.200412] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:36.249 [2024-07-15 22:43:00.203251] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
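Every attempt between 22:42:59.659 and 22:43:00.199 ends in the same six-error sequence, roughly one retry every 13 ms. When triaging a run like this, it helps to tally outcomes before reading individual cycles; a quick sketch ("build.log" is a placeholder for wherever this console output was saved):

    # count failed vs. successful controller resets in the saved log
    grep -c 'Resetting controller failed'     build.log
    grep -c 'Resetting controller successful' build.log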
00:26:36.249 22:43:00 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:26:36.249 22:43:00 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@862 -- # return 0
00:26:36.249 22:43:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:26:36.249 22:43:00 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@728 -- # xtrace_disable
00:26:36.249 22:43:00 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:26:36.249 [2024-07-15 22:43:00.212781] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:36.249 [2024-07-15 22:43:00.213122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:36.249 [2024-07-15 22:43:00.213137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8da980 with addr=10.0.0.2, port=4420
00:26:36.249 [2024-07-15 22:43:00.213143] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8da980 is same with the state(5) to be set
00:26:36.249 [2024-07-15 22:43:00.213326] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8da980 (9): Bad file descriptor
00:26:36.249 [2024-07-15 22:43:00.213502] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:36.249 [2024-07-15 22:43:00.213510] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:36.249 [2024-07-15 22:43:00.213516] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:36.249 [2024-07-15 22:43:00.216348] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:36.509 [2024-07-15 22:43:00.225885] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:36.509 [2024-07-15 22:43:00.226234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:36.509 [2024-07-15 22:43:00.226250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8da980 with addr=10.0.0.2, port=4420
00:26:36.509 [2024-07-15 22:43:00.226258] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8da980 is same with the state(5) to be set
00:26:36.509 [2024-07-15 22:43:00.226434] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8da980 (9): Bad file descriptor
00:26:36.509 [2024-07-15 22:43:00.226611] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:36.509 [2024-07-15 22:43:00.226620] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:36.509 [2024-07-15 22:43:00.226626] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:36.509 [2024-07-15 22:43:00.229459] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
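The "(( i == 0 ))" / "return 0" trace above is the tail of the harness's wait loop: start_nvmf_tgt polls until the target process answers RPC, and timing_exit closes the timing region only once that succeeds. A minimal sketch of such a loop, not the actual autotest_common.sh code (socket path and retry budget are assumptions):

    # poll the SPDK RPC socket until the target answers, or give up
    wait_for_tgt() {
        local i=50
        while (( i > 0 )); do
            scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null && return 0
            sleep 0.5
            (( i-- ))
        done
        return 1   # target never became ready
    }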
00:26:36.509 22:43:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:26:36.509 [2024-07-15 22:43:00.238987] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:36.509 22:43:00 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:26:36.509 [2024-07-15 22:43:00.239442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:36.509 [2024-07-15 22:43:00.239460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8da980 with addr=10.0.0.2, port=4420
00:26:36.509 [2024-07-15 22:43:00.239467] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8da980 is same with the state(5) to be set
00:26:36.509 22:43:00 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:36.509 [2024-07-15 22:43:00.239643] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8da980 (9): Bad file descriptor
00:26:36.509 [2024-07-15 22:43:00.239824] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:36.509 [2024-07-15 22:43:00.239835] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:36.509 [2024-07-15 22:43:00.239847] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:36.509 22:43:00 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:26:36.509 [2024-07-15 22:43:00.242686] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:36.509 [2024-07-15 22:43:00.246145] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:26:36.509 22:43:00 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:36.509 22:43:00 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:26:36.509 22:43:00 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:36.509 [2024-07-15 22:43:00.252052] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:36.509 22:43:00 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:26:36.509 [2024-07-15 22:43:00.252390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:36.509 [2024-07-15 22:43:00.252406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8da980 with addr=10.0.0.2, port=4420
00:26:36.509 [2024-07-15 22:43:00.252413] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8da980 is same with the state(5) to be set
00:26:36.509 [2024-07-15 22:43:00.252589] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8da980 (9): Bad file descriptor
00:26:36.509 [2024-07-15 22:43:00.252766] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:36.509 [2024-07-15 22:43:00.252774] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:36.509 [2024-07-15 22:43:00.252781] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:36.509 [2024-07-15 22:43:00.255615] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:36.509 [2024-07-15 22:43:00.265194] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:36.509 [2024-07-15 22:43:00.265527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:36.509 [2024-07-15 22:43:00.265544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8da980 with addr=10.0.0.2, port=4420 00:26:36.509 [2024-07-15 22:43:00.265550] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8da980 is same with the state(5) to be set 00:26:36.509 [2024-07-15 22:43:00.265727] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8da980 (9): Bad file descriptor 00:26:36.509 [2024-07-15 22:43:00.265905] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:36.509 [2024-07-15 22:43:00.265914] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:36.509 [2024-07-15 22:43:00.265920] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:36.509 [2024-07-15 22:43:00.268758] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:36.509 [2024-07-15 22:43:00.278319] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:36.509 [2024-07-15 22:43:00.278736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:36.509 [2024-07-15 22:43:00.278754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8da980 with addr=10.0.0.2, port=4420 00:26:36.509 [2024-07-15 22:43:00.278761] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8da980 is same with the state(5) to be set 00:26:36.509 [2024-07-15 22:43:00.278940] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8da980 (9): Bad file descriptor 00:26:36.509 [2024-07-15 22:43:00.279123] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:36.509 [2024-07-15 22:43:00.279131] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:36.509 [2024-07-15 22:43:00.279138] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:36.509 Malloc0 00:26:36.509 22:43:00 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:36.509 22:43:00 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:36.509 22:43:00 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:36.509 [2024-07-15 22:43:00.281973] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:36.509 22:43:00 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:36.509 [2024-07-15 22:43:00.291509] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:36.509 [2024-07-15 22:43:00.291915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:36.509 [2024-07-15 22:43:00.291931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8da980 with addr=10.0.0.2, port=4420 00:26:36.509 [2024-07-15 22:43:00.291938] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8da980 is same with the state(5) to be set 00:26:36.509 [2024-07-15 22:43:00.292116] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8da980 (9): Bad file descriptor 00:26:36.509 [2024-07-15 22:43:00.292299] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:36.509 [2024-07-15 22:43:00.292307] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:36.509 [2024-07-15 22:43:00.292314] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:36.509 22:43:00 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:36.509 22:43:00 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:36.509 22:43:00 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:36.509 22:43:00 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:36.509 [2024-07-15 22:43:00.295144] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:36.510 22:43:00 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:36.510 22:43:00 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:36.510 22:43:00 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:36.510 22:43:00 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:36.510 [2024-07-15 22:43:00.304442] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:36.510 [2024-07-15 22:43:00.304681] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:36.510 22:43:00 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:36.510 22:43:00 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 154387 00:26:36.510 [2024-07-15 22:43:00.376765] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
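The rpc_cmd calls threaded through the interleaved output above are the entire target-side setup for this bdevperf run. Assuming rpc_cmd wraps the stock scripts/rpc.py client against the default /var/tmp/spdk.sock socket, as it does in these harness scripts, the sequence collects into the following sketch; every NQN, name, address, and option below is taken from the log itself:

# host/bdevperf.sh@17: TCP transport, options -o and -u 8192 exactly as passed by the test
scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
# host/bdevperf.sh@18: 64 MiB malloc bdev with 512-byte blocks to back the namespace
scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
# host/bdevperf.sh@19: subsystem allowing any host (-a) with a fixed serial number (-s)
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
# host/bdevperf.sh@20-21: attach the namespace and listen on the target-side address
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420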
00:26:46.489
00:26:46.489 Latency(us)
00:26:46.489 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:26:46.489 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:26:46.489 Verification LBA range: start 0x0 length 0x4000
00:26:46.489 Nvme1n1 : 15.00 8111.69 31.69 12639.80 0.00 6148.53 658.92 19261.89
00:26:46.489 ===================================================================================================================
00:26:46.489 Total : 8111.69 31.69 12639.80 0.00 6148.53 658.92 19261.89
00:26:46.489 22:43:08 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync
00:26:46.489 22:43:08 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:26:46.489 22:43:08 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:46.489 22:43:08 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:26:46.489 22:43:08 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:46.489 22:43:08 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT
00:26:46.489 22:43:08 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini
00:26:46.489 22:43:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@488 -- # nvmfcleanup
00:26:46.489 22:43:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@117 -- # sync
00:26:46.489 22:43:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:26:46.489 22:43:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@120 -- # set +e
00:26:46.489 22:43:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@121 -- # for i in {1..20}
00:26:46.489 22:43:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:26:46.489 rmmod nvme_tcp
00:26:46.489 rmmod nvme_fabrics
00:26:46.489 rmmod nvme_keyring
00:26:46.489 22:43:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:26:46.489 22:43:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@124 -- # set -e
00:26:46.489 22:43:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@125 -- # return 0
00:26:46.489 22:43:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@489 -- # '[' -n 155313 ']'
00:26:46.489 22:43:08 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@490 -- # killprocess 155313
00:26:46.489 22:43:08 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@948 -- # '[' -z 155313 ']'
00:26:46.489 22:43:08 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@952 -- # kill -0 155313
00:26:46.489 22:43:08 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@953 -- # uname
00:26:46.489 22:43:08 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:26:46.489 22:43:08 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 155313
00:26:46.489 22:43:08 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:26:46.489 22:43:08 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:26:46.489 22:43:08 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@966 -- # echo 'killing process with pid 155313'
00:26:46.489 killing process with pid 155313
00:26:46.489 22:43:08 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@967 -- # kill 155313
00:26:46.489 22:43:08 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@972 -- # wait 155313
00:26:46.489 22:43:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:26:46.489 22:43:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:26:46.489
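A quick unit check on the totals: 8111.69 IOPS at the job's 4096-byte I/O size is 8111.69 x 4096 = roughly 33.2 MB/s, i.e. 31.69 MiB/s, which matches the MiB/s column, and the 15.00 under runtime(s) is the run's duration in seconds. The large Fail/s figure is I/Os completing with an error per second, which is expected here: the workload ran while the harness repeatedly reset the controller, as the stream of resetting-controller messages above shows.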
22:43:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:46.489 22:43:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:46.489 22:43:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:46.490 22:43:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:46.490 22:43:09 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:46.490 22:43:09 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:47.426 22:43:11 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:47.426 00:26:47.426 real 0m25.810s 00:26:47.426 user 1m2.313s 00:26:47.426 sys 0m6.031s 00:26:47.426 22:43:11 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:47.426 22:43:11 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:47.426 ************************************ 00:26:47.426 END TEST nvmf_bdevperf 00:26:47.426 ************************************ 00:26:47.426 22:43:11 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:26:47.426 22:43:11 nvmf_tcp -- nvmf/nvmf.sh@123 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:26:47.426 22:43:11 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:26:47.426 22:43:11 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:47.426 22:43:11 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:47.426 ************************************ 00:26:47.426 START TEST nvmf_target_disconnect 00:26:47.426 ************************************ 00:26:47.426 22:43:11 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:26:47.685 * Looking for test storage... 
00:26:47.685 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:47.685 22:43:11 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:47.685 22:43:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s 00:26:47.685 22:43:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:47.685 22:43:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:47.685 22:43:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:47.685 22:43:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:47.685 22:43:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:47.685 22:43:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:47.686 22:43:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:47.686 22:43:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:47.686 22:43:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:47.686 22:43:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:47.686 22:43:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:26:47.686 22:43:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:26:47.686 22:43:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:47.686 22:43:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:47.686 22:43:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:47.686 22:43:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:47.686 22:43:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:47.686 22:43:11 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:47.686 22:43:11 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:47.686 22:43:11 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:47.686 22:43:11 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:47.686 22:43:11 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:47.686 22:43:11 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:47.686 22:43:11 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:26:47.686 22:43:11 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:47.686 22:43:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@47 -- # : 0 00:26:47.686 22:43:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:47.686 22:43:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:47.686 22:43:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:47.686 22:43:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:47.686 22:43:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:47.686 22:43:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:47.686 22:43:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:47.686 22:43:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:47.686 22:43:11 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:26:47.686 22:43:11 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:26:47.686 22:43:11 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:26:47.686 22:43:11 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:26:47.686 22:43:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:47.686 22:43:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:47.686 22:43:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@448 -- # 
prepare_net_devs 00:26:47.686 22:43:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:47.686 22:43:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:47.686 22:43:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:47.686 22:43:11 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:47.686 22:43:11 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:47.686 22:43:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:47.686 22:43:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:47.686 22:43:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:26:47.686 22:43:11 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:26:53.015 22:43:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:53.015 22:43:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:26:53.015 22:43:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:53.015 22:43:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:53.015 22:43:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:53.015 22:43:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:53.015 22:43:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:53.015 22:43:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:26:53.015 22:43:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:53.015 22:43:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@296 -- # e810=() 00:26:53.015 22:43:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:26:53.015 22:43:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@297 -- # x722=() 00:26:53.015 22:43:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:26:53.015 22:43:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:26:53.015 22:43:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:26:53.015 22:43:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:53.015 22:43:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:53.015 22:43:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:53.015 22:43:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:53.015 22:43:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:53.015 22:43:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:53.015 22:43:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:53.015 22:43:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:53.015 22:43:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 
00:26:53.015 22:43:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:53.015 22:43:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:53.015 22:43:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:53.015 22:43:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:53.015 22:43:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:53.015 22:43:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:53.015 22:43:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:53.015 22:43:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:53.015 22:43:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:53.015 22:43:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:26:53.015 Found 0000:86:00.0 (0x8086 - 0x159b) 00:26:53.015 22:43:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:53.015 22:43:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:53.015 22:43:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:53.015 22:43:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:53.015 22:43:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:53.015 22:43:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:53.015 22:43:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:26:53.015 Found 0000:86:00.1 (0x8086 - 0x159b) 00:26:53.015 22:43:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:53.015 22:43:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:53.015 22:43:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:53.015 22:43:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:53.015 22:43:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:53.015 22:43:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:53.015 22:43:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:53.015 22:43:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:53.015 22:43:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:53.015 22:43:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:53.015 22:43:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:53.015 22:43:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:53.015 22:43:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:53.015 22:43:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:53.015 22:43:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:53.015 22:43:16 
nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:26:53.015 Found net devices under 0000:86:00.0: cvl_0_0 00:26:53.015 22:43:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:53.015 22:43:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:53.015 22:43:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:53.015 22:43:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:53.015 22:43:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:53.015 22:43:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:53.015 22:43:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:53.015 22:43:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:53.015 22:43:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:26:53.015 Found net devices under 0000:86:00.1: cvl_0_1 00:26:53.015 22:43:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:53.015 22:43:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:53.015 22:43:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:26:53.015 22:43:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:53.015 22:43:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:26:53.015 22:43:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:26:53.015 22:43:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:53.015 22:43:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:53.015 22:43:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:53.015 22:43:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:53.015 22:43:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:53.015 22:43:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:53.015 22:43:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:53.015 22:43:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:53.015 22:43:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:53.015 22:43:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:53.015 22:43:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:53.015 22:43:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:53.015 22:43:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:53.015 22:43:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:53.015 22:43:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev 
cvl_0_0 00:26:53.015 22:43:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:53.015 22:43:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:53.275 22:43:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:53.275 22:43:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:53.275 22:43:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:53.275 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:53.275 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.197 ms 00:26:53.275 00:26:53.275 --- 10.0.0.2 ping statistics --- 00:26:53.275 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:53.275 rtt min/avg/max/mdev = 0.197/0.197/0.197/0.000 ms 00:26:53.275 22:43:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:53.275 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:53.275 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.258 ms 00:26:53.275 00:26:53.276 --- 10.0.0.1 ping statistics --- 00:26:53.276 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:53.276 rtt min/avg/max/mdev = 0.258/0.258/0.258/0.000 ms 00:26:53.276 22:43:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:53.276 22:43:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@422 -- # return 0 00:26:53.276 22:43:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:53.276 22:43:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:53.276 22:43:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:53.276 22:43:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:53.276 22:43:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:53.276 22:43:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:53.276 22:43:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:53.276 22:43:17 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:26:53.276 22:43:17 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:26:53.276 22:43:17 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:53.276 22:43:17 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:26:53.276 ************************************ 00:26:53.276 START TEST nvmf_target_disconnect_tc1 00:26:53.276 ************************************ 00:26:53.276 22:43:17 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1123 -- # nvmf_target_disconnect_tc1 00:26:53.276 22:43:17 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:26:53.276 22:43:17 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@648 -- # local es=0 00:26:53.276 
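Before following tc1 further, it helps to collapse the network setup that just scrolled past. nvmftestinit found the two ports of the Intel E810 NIC (0000:86:00.0/1, ice driver, netdevs cvl_0_0 and cvl_0_1, presumably cabled to each other on this rig), moved the first into a private namespace as the target side, and kept the second in the root namespace as the initiator. A minimal sketch of the same topology, with every name and address taken from the log:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port lives in the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator address, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port
ping -c 1 10.0.0.2                                   # initiator -> target, 0.197 ms above
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator, 0.258 ms above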
22:43:17 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:26:53.276 22:43:17 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:26:53.276 22:43:17 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:53.276 22:43:17 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:26:53.276 22:43:17 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:53.276 22:43:17 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:26:53.276 22:43:17 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:53.276 22:43:17 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:26:53.276 22:43:17 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:26:53.276 22:43:17 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:26:53.276 EAL: No free 2048 kB hugepages reported on node 1 00:26:53.276 [2024-07-15 22:43:17.238788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.276 [2024-07-15 22:43:17.238892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6bae60 with addr=10.0.0.2, port=4420 00:26:53.276 [2024-07-15 22:43:17.238941] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:26:53.276 [2024-07-15 22:43:17.238966] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:26:53.276 [2024-07-15 22:43:17.238985] nvme.c: 913:spdk_nvme_probe: *ERROR*: Create probe context failed 00:26:53.276 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:26:53.276 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:26:53.536 Initializing NVMe Controllers 00:26:53.536 22:43:17 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@651 -- # es=1 00:26:53.536 22:43:17 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:26:53.536 22:43:17 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:26:53.536 22:43:17 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:26:53.536 00:26:53.536 real 0m0.107s 00:26:53.536 user 0m0.049s 00:26:53.536 sys 0m0.057s 
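The errors tc1 just printed are the point of the test: connect() errno = 111 is ECONNREFUSED, and nothing is listening on 10.0.0.2:4420 yet because the target is only started (by disconnect_init) for tc2 below. The probe is wrapped in the harness's NOT helper, which inverts the exit status, so the expected shape is roughly:

# tc1 in outline (hedged paraphrase of host/target_disconnect.sh@32; NOT is the
# exit-status-inverting helper from autotest_common.sh, path shortened)
NOT build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'   # must fail for the test to pass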
00:26:53.536 22:43:17 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:53.536 22:43:17 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:26:53.536 ************************************ 00:26:53.536 END TEST nvmf_target_disconnect_tc1 00:26:53.536 ************************************ 00:26:53.536 22:43:17 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1142 -- # return 0 00:26:53.536 22:43:17 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:26:53.536 22:43:17 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:26:53.536 22:43:17 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:53.536 22:43:17 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:26:53.536 ************************************ 00:26:53.536 START TEST nvmf_target_disconnect_tc2 00:26:53.536 ************************************ 00:26:53.536 22:43:17 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1123 -- # nvmf_target_disconnect_tc2 00:26:53.536 22:43:17 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:26:53.536 22:43:17 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:26:53.536 22:43:17 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:53.536 22:43:17 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:26:53.536 22:43:17 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:53.536 22:43:17 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=160351 00:26:53.536 22:43:17 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 160351 00:26:53.536 22:43:17 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@829 -- # '[' -z 160351 ']' 00:26:53.536 22:43:17 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:53.536 22:43:17 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:26:53.536 22:43:17 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:53.536 22:43:17 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:53.536 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
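For tc2 the harness does start a target: disconnect_init launches nvmf_tgt inside the target namespace, pinned by -m 0xF0 to cores 4-7 (which is why four reactors come up on cores 4, 5, 6 and 7 just below), and blocks until the RPC socket answers. In outline, with the paths shortened but the arguments exactly as logged:

# sketch of the nvmfappstart -m 0xF0 step; waitforlisten is the harness helper that
# polls /var/tmp/spdk.sock until the app is up and serving RPCs
ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 &
nvmfpid=$!
waitforlisten "$nvmfpid"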
00:26:53.536 22:43:17 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:53.536 22:43:17 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:53.536 [2024-07-15 22:43:17.370998] Starting SPDK v24.09-pre git sha1 f8598a71f / DPDK 24.03.0 initialization... 00:26:53.536 [2024-07-15 22:43:17.371049] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:53.536 EAL: No free 2048 kB hugepages reported on node 1 00:26:53.536 [2024-07-15 22:43:17.440206] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:53.795 [2024-07-15 22:43:17.521550] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:53.795 [2024-07-15 22:43:17.521582] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:53.796 [2024-07-15 22:43:17.521589] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:53.796 [2024-07-15 22:43:17.521596] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:53.796 [2024-07-15 22:43:17.521600] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:53.796 [2024-07-15 22:43:17.521708] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:26:53.796 [2024-07-15 22:43:17.521822] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:26:53.796 [2024-07-15 22:43:17.521865] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:26:53.796 [2024-07-15 22:43:17.521866] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:26:54.364 22:43:18 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:54.364 22:43:18 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@862 -- # return 0 00:26:54.364 22:43:18 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:54.364 22:43:18 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:26:54.364 22:43:18 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:54.364 22:43:18 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:54.364 22:43:18 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:26:54.364 22:43:18 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:54.364 22:43:18 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:54.364 Malloc0 00:26:54.364 22:43:18 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:54.364 22:43:18 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:26:54.364 22:43:18 
nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:54.364 22:43:18 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:54.364 [2024-07-15 22:43:18.227071] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:54.364 22:43:18 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:54.364 22:43:18 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:54.364 22:43:18 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:54.364 22:43:18 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:54.364 22:43:18 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:54.364 22:43:18 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:54.364 22:43:18 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:54.364 22:43:18 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:54.364 22:43:18 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:54.364 22:43:18 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:54.364 22:43:18 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:54.364 22:43:18 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:54.364 [2024-07-15 22:43:18.252051] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:54.364 22:43:18 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:54.364 22:43:18 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:26:54.364 22:43:18 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:54.364 22:43:18 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:54.364 22:43:18 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:54.364 22:43:18 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=160509 00:26:54.364 22:43:18 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:26:54.364 22:43:18 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:26:54.364 EAL: No free 2048 kB 
hugepages reported on node 1 00:26:56.919 22:43:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 160351 00:26:56.919 22:43:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:26:56.919 Write completed with error (sct=0, sc=8) 00:26:56.919 starting I/O failed 00:26:56.919 Write completed with error (sct=0, sc=8) 00:26:56.919 starting I/O failed 00:26:56.919 Write completed with error (sct=0, sc=8) 00:26:56.919 starting I/O failed 00:26:56.919 Read completed with error (sct=0, sc=8) 00:26:56.919 starting I/O failed 00:26:56.919 Write completed with error (sct=0, sc=8) 00:26:56.919 starting I/O failed 00:26:56.919 Read completed with error (sct=0, sc=8) 00:26:56.919 starting I/O failed 00:26:56.919 Read completed with error (sct=0, sc=8) 00:26:56.919 starting I/O failed 00:26:56.919 Read completed with error (sct=0, sc=8) 00:26:56.919 starting I/O failed 00:26:56.919 Read completed with error (sct=0, sc=8) 00:26:56.919 starting I/O failed 00:26:56.919 Write completed with error (sct=0, sc=8) 00:26:56.919 starting I/O failed 00:26:56.919 Write completed with error (sct=0, sc=8) 00:26:56.919 starting I/O failed 00:26:56.919 Read completed with error (sct=0, sc=8) 00:26:56.919 starting I/O failed 00:26:56.919 Read completed with error (sct=0, sc=8) 00:26:56.919 starting I/O failed 00:26:56.919 Read completed with error (sct=0, sc=8) 00:26:56.919 starting I/O failed 00:26:56.919 Write completed with error (sct=0, sc=8) 00:26:56.919 starting I/O failed 00:26:56.919 Read completed with error (sct=0, sc=8) 00:26:56.919 starting I/O failed 00:26:56.919 Write completed with error (sct=0, sc=8) 00:26:56.919 starting I/O failed 00:26:56.919 Read completed with error (sct=0, sc=8) 00:26:56.919 starting I/O failed 00:26:56.919 Read completed with error (sct=0, sc=8) 00:26:56.919 starting I/O failed 00:26:56.919 Read completed with error (sct=0, sc=8) 00:26:56.919 starting I/O failed 00:26:56.919 Write completed with error (sct=0, sc=8) 00:26:56.919 starting I/O failed 00:26:56.919 Read completed with error (sct=0, sc=8) 00:26:56.919 starting I/O failed 00:26:56.919 Write completed with error (sct=0, sc=8) 00:26:56.919 starting I/O failed 00:26:56.919 Write completed with error (sct=0, sc=8) 00:26:56.919 starting I/O failed 00:26:56.919 Write completed with error (sct=0, sc=8) 00:26:56.919 starting I/O failed 00:26:56.919 Write completed with error (sct=0, sc=8) 00:26:56.919 starting I/O failed 00:26:56.919 Write completed with error (sct=0, sc=8) 00:26:56.919 starting I/O failed 00:26:56.919 Write completed with error (sct=0, sc=8) 00:26:56.919 starting I/O failed 00:26:56.919 Read completed with error (sct=0, sc=8) 00:26:56.919 starting I/O failed 00:26:56.919 Read completed with error (sct=0, sc=8) 00:26:56.919 starting I/O failed 00:26:56.919 Read completed with error (sct=0, sc=8) 00:26:56.919 starting I/O failed 00:26:56.919 Read completed with error (sct=0, sc=8) 00:26:56.919 starting I/O failed 00:26:56.919 Read completed with error (sct=0, sc=8) 00:26:56.919 starting I/O failed 00:26:56.919 Read completed with error (sct=0, sc=8) 00:26:56.920 starting I/O failed 00:26:56.920 Write completed with error (sct=0, sc=8) 00:26:56.920 [2024-07-15 22:43:20.278368] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:56.920 starting I/O failed 00:26:56.920 Write completed with error (sct=0, sc=8) 00:26:56.920 
starting I/O failed 00:26:56.920 Write completed with error (sct=0, sc=8) 00:26:56.920 starting I/O failed 00:26:56.920 Read completed with error (sct=0, sc=8) 00:26:56.920 starting I/O failed 00:26:56.920 Write completed with error (sct=0, sc=8) 00:26:56.920 starting I/O failed 00:26:56.920 Write completed with error (sct=0, sc=8) 00:26:56.920 starting I/O failed 00:26:56.920 Write completed with error (sct=0, sc=8) 00:26:56.920 starting I/O failed 00:26:56.920 Write completed with error (sct=0, sc=8) 00:26:56.920 starting I/O failed 00:26:56.920 Read completed with error (sct=0, sc=8) 00:26:56.920 starting I/O failed 00:26:56.920 Read completed with error (sct=0, sc=8) 00:26:56.920 starting I/O failed 00:26:56.920 Read completed with error (sct=0, sc=8) 00:26:56.920 starting I/O failed 00:26:56.920 Write completed with error (sct=0, sc=8) 00:26:56.920 starting I/O failed 00:26:56.920 Write completed with error (sct=0, sc=8) 00:26:56.920 starting I/O failed 00:26:56.920 Read completed with error (sct=0, sc=8) 00:26:56.920 starting I/O failed 00:26:56.920 Write completed with error (sct=0, sc=8) 00:26:56.920 starting I/O failed 00:26:56.920 Read completed with error (sct=0, sc=8) 00:26:56.920 starting I/O failed 00:26:56.920 Write completed with error (sct=0, sc=8) 00:26:56.920 starting I/O failed 00:26:56.920 Write completed with error (sct=0, sc=8) 00:26:56.920 starting I/O failed 00:26:56.920 Read completed with error (sct=0, sc=8) 00:26:56.920 starting I/O failed 00:26:56.920 Read completed with error (sct=0, sc=8) 00:26:56.920 starting I/O failed 00:26:56.920 Write completed with error (sct=0, sc=8) 00:26:56.920 starting I/O failed 00:26:56.920 Read completed with error (sct=0, sc=8) 00:26:56.920 starting I/O failed 00:26:56.920 Write completed with error (sct=0, sc=8) 00:26:56.920 starting I/O failed 00:26:56.920 Read completed with error (sct=0, sc=8) 00:26:56.920 starting I/O failed 00:26:56.920 Read completed with error (sct=0, sc=8) 00:26:56.920 starting I/O failed 00:26:56.920 Read completed with error (sct=0, sc=8) 00:26:56.920 starting I/O failed 00:26:56.920 Read completed with error (sct=0, sc=8) 00:26:56.920 starting I/O failed 00:26:56.920 Read completed with error (sct=0, sc=8) 00:26:56.920 starting I/O failed 00:26:56.920 Write completed with error (sct=0, sc=8) 00:26:56.920 starting I/O failed 00:26:56.920 Read completed with error (sct=0, sc=8) 00:26:56.920 starting I/O failed 00:26:56.920 [2024-07-15 22:43:20.278571] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:56.920 Read completed with error (sct=0, sc=8) 00:26:56.920 starting I/O failed 00:26:56.920 Read completed with error (sct=0, sc=8) 00:26:56.920 starting I/O failed 00:26:56.920 Read completed with error (sct=0, sc=8) 00:26:56.920 starting I/O failed 00:26:56.920 Read completed with error (sct=0, sc=8) 00:26:56.920 starting I/O failed 00:26:56.920 Read completed with error (sct=0, sc=8) 00:26:56.920 starting I/O failed 00:26:56.920 Read completed with error (sct=0, sc=8) 00:26:56.920 starting I/O failed 00:26:56.920 Read completed with error (sct=0, sc=8) 00:26:56.920 starting I/O failed 00:26:56.920 Read completed with error (sct=0, sc=8) 00:26:56.920 starting I/O failed 00:26:56.920 Read completed with error (sct=0, sc=8) 00:26:56.920 starting I/O failed 00:26:56.920 Read completed with error (sct=0, sc=8) 00:26:56.920 starting I/O failed 00:26:56.920 Write completed with error (sct=0, sc=8) 00:26:56.920 starting I/O 
failed 00:26:56.920 Read completed with error (sct=0, sc=8) 00:26:56.920 starting I/O failed 00:26:56.920 Read completed with error (sct=0, sc=8) 00:26:56.920 starting I/O failed 00:26:56.920 Write completed with error (sct=0, sc=8) 00:26:56.920 starting I/O failed 00:26:56.920 Read completed with error (sct=0, sc=8) 00:26:56.920 starting I/O failed 00:26:56.920 Write completed with error (sct=0, sc=8) 00:26:56.920 starting I/O failed 00:26:56.920 Read completed with error (sct=0, sc=8) 00:26:56.920 starting I/O failed 00:26:56.920 Write completed with error (sct=0, sc=8) 00:26:56.920 starting I/O failed 00:26:56.920 Read completed with error (sct=0, sc=8) 00:26:56.920 starting I/O failed 00:26:56.920 Write completed with error (sct=0, sc=8) 00:26:56.920 starting I/O failed 00:26:56.920 Read completed with error (sct=0, sc=8) 00:26:56.920 starting I/O failed 00:26:56.920 Write completed with error (sct=0, sc=8) 00:26:56.920 starting I/O failed 00:26:56.920 Read completed with error (sct=0, sc=8) 00:26:56.920 starting I/O failed 00:26:56.920 Read completed with error (sct=0, sc=8) 00:26:56.920 starting I/O failed 00:26:56.920 Write completed with error (sct=0, sc=8) 00:26:56.920 starting I/O failed 00:26:56.920 Write completed with error (sct=0, sc=8) 00:26:56.920 starting I/O failed 00:26:56.920 Write completed with error (sct=0, sc=8) 00:26:56.920 starting I/O failed 00:26:56.920 Read completed with error (sct=0, sc=8) 00:26:56.920 starting I/O failed 00:26:56.920 Read completed with error (sct=0, sc=8) 00:26:56.920 starting I/O failed 00:26:56.920 Write completed with error (sct=0, sc=8) 00:26:56.920 starting I/O failed 00:26:56.920 Write completed with error (sct=0, sc=8) 00:26:56.920 starting I/O failed 00:26:56.920 Write completed with error (sct=0, sc=8) 00:26:56.920 starting I/O failed 00:26:56.920 [2024-07-15 22:43:20.278768] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:56.920 Read completed with error (sct=0, sc=8) 00:26:56.920 starting I/O failed 00:26:56.920 Read completed with error (sct=0, sc=8) 00:26:56.920 starting I/O failed 00:26:56.920 Read completed with error (sct=0, sc=8) 00:26:56.920 starting I/O failed 00:26:56.920 Read completed with error (sct=0, sc=8) 00:26:56.920 starting I/O failed 00:26:56.920 Read completed with error (sct=0, sc=8) 00:26:56.920 starting I/O failed 00:26:56.920 Read completed with error (sct=0, sc=8) 00:26:56.920 starting I/O failed 00:26:56.920 Read completed with error (sct=0, sc=8) 00:26:56.920 starting I/O failed 00:26:56.920 Read completed with error (sct=0, sc=8) 00:26:56.920 starting I/O failed 00:26:56.920 Read completed with error (sct=0, sc=8) 00:26:56.920 starting I/O failed 00:26:56.920 Read completed with error (sct=0, sc=8) 00:26:56.920 starting I/O failed 00:26:56.920 Read completed with error (sct=0, sc=8) 00:26:56.920 starting I/O failed 00:26:56.920 Read completed with error (sct=0, sc=8) 00:26:56.920 starting I/O failed 00:26:56.920 Write completed with error (sct=0, sc=8) 00:26:56.920 starting I/O failed 00:26:56.920 Read completed with error (sct=0, sc=8) 00:26:56.920 starting I/O failed 00:26:56.920 Write completed with error (sct=0, sc=8) 00:26:56.920 starting I/O failed 00:26:56.920 Read completed with error (sct=0, sc=8) 00:26:56.920 starting I/O failed 00:26:56.920 Write completed with error (sct=0, sc=8) 00:26:56.920 starting I/O failed 00:26:56.920 Write completed with error (sct=0, sc=8) 00:26:56.920 starting I/O failed 
00:26:56.920 Read completed with error (sct=0, sc=8)
00:26:56.921 starting I/O failed
00:26:56.921 Read completed with error (sct=0, sc=8)
00:26:56.921 starting I/O failed
00:26:56.921 Read completed with error (sct=0, sc=8)
00:26:56.921 starting I/O failed
00:26:56.921 Read completed with error (sct=0, sc=8)
00:26:56.921 starting I/O failed
00:26:56.921 Read completed with error (sct=0, sc=8)
00:26:56.921 starting I/O failed
00:26:56.921 Read completed with error (sct=0, sc=8)
00:26:56.921 starting I/O failed
00:26:56.921 Read completed with error (sct=0, sc=8)
00:26:56.921 starting I/O failed
00:26:56.921 Read completed with error (sct=0, sc=8)
00:26:56.921 starting I/O failed
00:26:56.921 Read completed with error (sct=0, sc=8)
00:26:56.921 starting I/O failed
00:26:56.921 Read completed with error (sct=0, sc=8)
00:26:56.921 starting I/O failed
00:26:56.921 Read completed with error (sct=0, sc=8)
00:26:56.921 starting I/O failed
00:26:56.921 Read completed with error (sct=0, sc=8)
00:26:56.921 starting I/O failed
00:26:56.921 Read completed with error (sct=0, sc=8)
00:26:56.921 starting I/O failed
00:26:56.921 Read completed with error (sct=0, sc=8)
00:26:56.921 starting I/O failed
00:26:56.921 [2024-07-15 22:43:20.278961] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:26:56.921 [2024-07-15 22:43:20.279125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.921 [2024-07-15 22:43:20.279141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:56.921 qpair failed and we were unable to recover it.
00:26:56.921 [2024-07-15 22:43:20.279303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.921 [2024-07-15 22:43:20.279315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:56.921 qpair failed and we were unable to recover it.
00:26:56.921 [2024-07-15 22:43:20.279451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.921 [2024-07-15 22:43:20.279464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:56.921 qpair failed and we were unable to recover it.
00:26:56.921 [2024-07-15 22:43:20.279602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.921 [2024-07-15 22:43:20.279632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:56.921 qpair failed and we were unable to recover it.
00:26:56.921 [2024-07-15 22:43:20.279813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.921 [2024-07-15 22:43:20.279843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:56.921 qpair failed and we were unable to recover it.
00:26:56.921 [2024-07-15 22:43:20.280017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.921 [2024-07-15 22:43:20.280046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:56.921 qpair failed and we were unable to recover it.
00:26:56.921 [2024-07-15 22:43:20.280210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.921 [2024-07-15 22:43:20.280219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:56.921 qpair failed and we were unable to recover it.
00:26:56.921 [2024-07-15 22:43:20.280416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.921 [2024-07-15 22:43:20.280426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:56.921 qpair failed and we were unable to recover it.
00:26:56.921 [2024-07-15 22:43:20.280690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.921 [2024-07-15 22:43:20.280720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:56.921 qpair failed and we were unable to recover it.
00:26:56.921 [2024-07-15 22:43:20.280883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.921 [2024-07-15 22:43:20.280912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:56.921 qpair failed and we were unable to recover it.
00:26:56.921 [2024-07-15 22:43:20.281082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.921 [2024-07-15 22:43:20.281112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:56.921 qpair failed and we were unable to recover it.
00:26:56.921 [2024-07-15 22:43:20.281282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.921 [2024-07-15 22:43:20.281292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:56.921 qpair failed and we were unable to recover it.
00:26:56.921 [2024-07-15 22:43:20.281475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.921 [2024-07-15 22:43:20.281505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:56.921 qpair failed and we were unable to recover it.
00:26:56.921 [2024-07-15 22:43:20.281741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.921 [2024-07-15 22:43:20.281770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:56.921 qpair failed and we were unable to recover it.
00:26:56.921 [2024-07-15 22:43:20.281937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.921 [2024-07-15 22:43:20.281966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:56.921 qpair failed and we were unable to recover it.
00:26:56.921 [2024-07-15 22:43:20.282130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.921 [2024-07-15 22:43:20.282140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:56.921 qpair failed and we were unable to recover it.
00:26:56.921 [2024-07-15 22:43:20.282266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.921 [2024-07-15 22:43:20.282276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:56.921 qpair failed and we were unable to recover it.
00:26:56.921 [2024-07-15 22:43:20.282419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.921 [2024-07-15 22:43:20.282429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:56.921 qpair failed and we were unable to recover it.
00:26:56.921 [2024-07-15 22:43:20.282567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.921 [2024-07-15 22:43:20.282597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:56.921 qpair failed and we were unable to recover it.
00:26:56.921 [2024-07-15 22:43:20.282832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.921 [2024-07-15 22:43:20.282862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:56.921 qpair failed and we were unable to recover it.
00:26:56.921 [2024-07-15 22:43:20.283016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.921 [2024-07-15 22:43:20.283045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:56.921 qpair failed and we were unable to recover it.
00:26:56.921 [2024-07-15 22:43:20.283317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.921 [2024-07-15 22:43:20.283345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:56.921 qpair failed and we were unable to recover it.
00:26:56.921 [2024-07-15 22:43:20.283631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.921 [2024-07-15 22:43:20.283660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:56.921 qpair failed and we were unable to recover it.
00:26:56.921 [2024-07-15 22:43:20.283844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.921 [2024-07-15 22:43:20.283874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:56.921 qpair failed and we were unable to recover it.
00:26:56.921 [2024-07-15 22:43:20.284102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.921 [2024-07-15 22:43:20.284115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:56.921 qpair failed and we were unable to recover it.
00:26:56.921 [2024-07-15 22:43:20.284385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.921 [2024-07-15 22:43:20.284399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:56.921 qpair failed and we were unable to recover it.
00:26:56.921 [2024-07-15 22:43:20.284615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.921 [2024-07-15 22:43:20.284629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:56.921 qpair failed and we were unable to recover it.
00:26:56.922 [2024-07-15 22:43:20.284828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.922 [2024-07-15 22:43:20.284842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:56.922 qpair failed and we were unable to recover it.
00:26:56.922 [2024-07-15 22:43:20.285073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.922 [2024-07-15 22:43:20.285102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:56.922 qpair failed and we were unable to recover it.
00:26:56.922 [2024-07-15 22:43:20.285347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.922 [2024-07-15 22:43:20.285378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:56.922 qpair failed and we were unable to recover it.
00:26:56.922 [2024-07-15 22:43:20.285552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.922 [2024-07-15 22:43:20.285581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:56.922 qpair failed and we were unable to recover it.
00:26:56.922 [2024-07-15 22:43:20.285800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.922 [2024-07-15 22:43:20.285829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:56.922 qpair failed and we were unable to recover it.
00:26:56.922 [2024-07-15 22:43:20.286001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.922 [2024-07-15 22:43:20.286015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:56.922 qpair failed and we were unable to recover it.
00:26:56.922 [2024-07-15 22:43:20.286210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.922 [2024-07-15 22:43:20.286248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:56.922 qpair failed and we were unable to recover it.
00:26:56.922 [2024-07-15 22:43:20.286469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.922 [2024-07-15 22:43:20.286498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:56.922 qpair failed and we were unable to recover it.
00:26:56.922 [2024-07-15 22:43:20.286788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.922 [2024-07-15 22:43:20.286817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:56.922 qpair failed and we were unable to recover it.
00:26:56.922 [2024-07-15 22:43:20.287114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.922 [2024-07-15 22:43:20.287143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:56.922 qpair failed and we were unable to recover it.
00:26:56.922 [2024-07-15 22:43:20.287327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.922 [2024-07-15 22:43:20.287342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:56.922 qpair failed and we were unable to recover it.
00:26:56.922 [2024-07-15 22:43:20.287447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.922 [2024-07-15 22:43:20.287461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:56.922 qpair failed and we were unable to recover it.
00:26:56.922 [2024-07-15 22:43:20.287668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.922 [2024-07-15 22:43:20.287682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:56.922 qpair failed and we were unable to recover it.
00:26:56.922 [2024-07-15 22:43:20.287817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.922 [2024-07-15 22:43:20.287831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:56.922 qpair failed and we were unable to recover it.
00:26:56.922 [2024-07-15 22:43:20.288028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.922 [2024-07-15 22:43:20.288042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:56.922 qpair failed and we were unable to recover it.
00:26:56.922 [2024-07-15 22:43:20.288187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.922 [2024-07-15 22:43:20.288207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:56.922 qpair failed and we were unable to recover it.
00:26:56.922 [2024-07-15 22:43:20.288351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.922 [2024-07-15 22:43:20.288364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:56.922 qpair failed and we were unable to recover it.
00:26:56.922 [2024-07-15 22:43:20.288493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.922 [2024-07-15 22:43:20.288506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:56.922 qpair failed and we were unable to recover it.
00:26:56.922 [2024-07-15 22:43:20.288713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.922 [2024-07-15 22:43:20.288727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:56.922 qpair failed and we were unable to recover it.
00:26:56.922 [2024-07-15 22:43:20.288932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.922 [2024-07-15 22:43:20.288960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:56.922 qpair failed and we were unable to recover it.
00:26:56.922 [2024-07-15 22:43:20.289192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.922 [2024-07-15 22:43:20.289223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:56.922 qpair failed and we were unable to recover it.
00:26:56.922 [2024-07-15 22:43:20.289404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.922 [2024-07-15 22:43:20.289433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:56.922 qpair failed and we were unable to recover it.
00:26:56.922 [2024-07-15 22:43:20.289584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.922 [2024-07-15 22:43:20.289613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:56.922 qpair failed and we were unable to recover it.
00:26:56.922 [2024-07-15 22:43:20.289851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.922 [2024-07-15 22:43:20.289880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:56.922 qpair failed and we were unable to recover it.
00:26:56.922 [2024-07-15 22:43:20.290035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.922 [2024-07-15 22:43:20.290049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:56.922 qpair failed and we were unable to recover it.
00:26:56.922 [2024-07-15 22:43:20.290257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.922 [2024-07-15 22:43:20.290271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:56.922 qpair failed and we were unable to recover it.
00:26:56.922 [2024-07-15 22:43:20.290521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.922 [2024-07-15 22:43:20.290534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:56.922 qpair failed and we were unable to recover it.
00:26:56.922 [2024-07-15 22:43:20.290742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.922 [2024-07-15 22:43:20.290756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:56.922 qpair failed and we were unable to recover it.
00:26:56.922 [2024-07-15 22:43:20.290887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.922 [2024-07-15 22:43:20.290900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:56.922 qpair failed and we were unable to recover it.
00:26:56.922 [2024-07-15 22:43:20.291093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.922 [2024-07-15 22:43:20.291106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:56.922 qpair failed and we were unable to recover it.
00:26:56.922 [2024-07-15 22:43:20.291303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.922 [2024-07-15 22:43:20.291317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:56.922 qpair failed and we were unable to recover it.
00:26:56.922 [2024-07-15 22:43:20.291513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.922 [2024-07-15 22:43:20.291526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:56.923 qpair failed and we were unable to recover it.
00:26:56.923 [2024-07-15 22:43:20.291653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.923 [2024-07-15 22:43:20.291666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:56.923 qpair failed and we were unable to recover it.
00:26:56.923 [2024-07-15 22:43:20.291789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.923 [2024-07-15 22:43:20.291802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:56.923 qpair failed and we were unable to recover it.
00:26:56.923 [2024-07-15 22:43:20.292001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.923 [2024-07-15 22:43:20.292015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:56.923 qpair failed and we were unable to recover it.
00:26:56.923 [2024-07-15 22:43:20.292195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.923 [2024-07-15 22:43:20.292209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:56.923 qpair failed and we were unable to recover it.
00:26:56.923 [2024-07-15 22:43:20.292332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.923 [2024-07-15 22:43:20.292346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:56.923 qpair failed and we were unable to recover it.
00:26:56.923 [2024-07-15 22:43:20.292530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.923 [2024-07-15 22:43:20.292543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:56.923 qpair failed and we were unable to recover it.
00:26:56.923 [2024-07-15 22:43:20.292625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.923 [2024-07-15 22:43:20.292637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:56.923 qpair failed and we were unable to recover it.
00:26:56.923 [2024-07-15 22:43:20.292816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.923 [2024-07-15 22:43:20.292830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:56.923 qpair failed and we were unable to recover it.
00:26:56.923 [2024-07-15 22:43:20.293023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.923 [2024-07-15 22:43:20.293054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:56.923 qpair failed and we were unable to recover it.
00:26:56.923 [2024-07-15 22:43:20.293221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.923 [2024-07-15 22:43:20.293259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:56.923 qpair failed and we were unable to recover it.
00:26:56.923 [2024-07-15 22:43:20.293499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.923 [2024-07-15 22:43:20.293529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:56.923 qpair failed and we were unable to recover it.
00:26:56.923 [2024-07-15 22:43:20.293680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.923 [2024-07-15 22:43:20.293709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:56.923 qpair failed and we were unable to recover it.
00:26:56.923 [2024-07-15 22:43:20.293887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.923 [2024-07-15 22:43:20.293916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:56.923 qpair failed and we were unable to recover it.
00:26:56.923 [2024-07-15 22:43:20.294139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.923 [2024-07-15 22:43:20.294169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:56.923 qpair failed and we were unable to recover it.
00:26:56.923 [2024-07-15 22:43:20.294422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.923 [2024-07-15 22:43:20.294436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:56.923 qpair failed and we were unable to recover it.
00:26:56.923 [2024-07-15 22:43:20.294552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.923 [2024-07-15 22:43:20.294565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:56.923 qpair failed and we were unable to recover it.
00:26:56.923 [2024-07-15 22:43:20.294680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.923 [2024-07-15 22:43:20.294694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:56.923 qpair failed and we were unable to recover it.
00:26:56.923 [2024-07-15 22:43:20.294824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.923 [2024-07-15 22:43:20.294837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:56.923 qpair failed and we were unable to recover it.
00:26:56.923 [2024-07-15 22:43:20.294972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.923 [2024-07-15 22:43:20.294986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:56.923 qpair failed and we were unable to recover it.
00:26:56.923 [2024-07-15 22:43:20.295167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.923 [2024-07-15 22:43:20.295180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:56.923 qpair failed and we were unable to recover it.
00:26:56.923 [2024-07-15 22:43:20.295313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.923 [2024-07-15 22:43:20.295327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:56.923 qpair failed and we were unable to recover it.
00:26:56.923 [2024-07-15 22:43:20.295454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.923 [2024-07-15 22:43:20.295468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:56.923 qpair failed and we were unable to recover it.
00:26:56.923 [2024-07-15 22:43:20.295592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.923 [2024-07-15 22:43:20.295605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:56.923 qpair failed and we were unable to recover it.
00:26:56.923 [2024-07-15 22:43:20.295808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.923 [2024-07-15 22:43:20.295823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:56.923 qpair failed and we were unable to recover it.
00:26:56.923 [2024-07-15 22:43:20.295953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.923 [2024-07-15 22:43:20.295967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:56.923 qpair failed and we were unable to recover it.
00:26:56.923 [2024-07-15 22:43:20.296081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.923 [2024-07-15 22:43:20.296094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:56.923 qpair failed and we were unable to recover it.
00:26:56.923 [2024-07-15 22:43:20.296207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.923 [2024-07-15 22:43:20.296221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:56.923 qpair failed and we were unable to recover it.
00:26:56.923 [2024-07-15 22:43:20.296357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.923 [2024-07-15 22:43:20.296372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:56.923 qpair failed and we were unable to recover it.
00:26:56.923 [2024-07-15 22:43:20.296484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.923 [2024-07-15 22:43:20.296498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:56.923 qpair failed and we were unable to recover it.
00:26:56.923 [2024-07-15 22:43:20.296624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.923 [2024-07-15 22:43:20.296638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:56.923 qpair failed and we were unable to recover it.
00:26:56.923 [2024-07-15 22:43:20.296844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.923 [2024-07-15 22:43:20.296872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:56.923 qpair failed and we were unable to recover it.
00:26:56.923 [2024-07-15 22:43:20.297050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.923 [2024-07-15 22:43:20.297080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:56.923 qpair failed and we were unable to recover it.
00:26:56.923 [2024-07-15 22:43:20.297297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.924 [2024-07-15 22:43:20.297329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:56.924 qpair failed and we were unable to recover it.
00:26:56.924 [2024-07-15 22:43:20.297650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.924 [2024-07-15 22:43:20.297679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:56.924 qpair failed and we were unable to recover it.
00:26:56.924 [2024-07-15 22:43:20.297832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.924 [2024-07-15 22:43:20.297861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:56.924 qpair failed and we were unable to recover it.
00:26:56.924 [2024-07-15 22:43:20.298078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.924 [2024-07-15 22:43:20.298108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:56.924 qpair failed and we were unable to recover it.
00:26:56.924 [2024-07-15 22:43:20.298326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.924 [2024-07-15 22:43:20.298340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:56.924 qpair failed and we were unable to recover it.
00:26:56.924 [2024-07-15 22:43:20.298473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.924 [2024-07-15 22:43:20.298486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:56.924 qpair failed and we were unable to recover it.
00:26:56.924 [2024-07-15 22:43:20.298675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.924 [2024-07-15 22:43:20.298689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:56.924 qpair failed and we were unable to recover it.
00:26:56.924 [2024-07-15 22:43:20.298817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.924 [2024-07-15 22:43:20.298830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:56.924 qpair failed and we were unable to recover it.
00:26:56.924 [2024-07-15 22:43:20.298962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.924 [2024-07-15 22:43:20.298975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:56.924 qpair failed and we were unable to recover it.
00:26:56.924 [2024-07-15 22:43:20.299167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.924 [2024-07-15 22:43:20.299180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:56.924 qpair failed and we were unable to recover it.
00:26:56.924 [2024-07-15 22:43:20.299370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.924 [2024-07-15 22:43:20.299384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:56.924 qpair failed and we were unable to recover it.
00:26:56.924 [2024-07-15 22:43:20.299584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.924 [2024-07-15 22:43:20.299614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:56.924 qpair failed and we were unable to recover it.
00:26:56.924 [2024-07-15 22:43:20.299832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.924 [2024-07-15 22:43:20.299861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:56.924 qpair failed and we were unable to recover it.
00:26:56.924 [2024-07-15 22:43:20.300090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.924 [2024-07-15 22:43:20.300119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:56.924 qpair failed and we were unable to recover it.
00:26:56.924 [2024-07-15 22:43:20.300343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.924 [2024-07-15 22:43:20.300357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:56.924 qpair failed and we were unable to recover it.
00:26:56.924 [2024-07-15 22:43:20.300609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.924 [2024-07-15 22:43:20.300622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:56.924 qpair failed and we were unable to recover it.
00:26:56.924 [2024-07-15 22:43:20.300751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.924 [2024-07-15 22:43:20.300765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:56.924 qpair failed and we were unable to recover it.
00:26:56.924 [2024-07-15 22:43:20.301042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.924 [2024-07-15 22:43:20.301071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:56.924 qpair failed and we were unable to recover it.
00:26:56.924 [2024-07-15 22:43:20.301306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.924 [2024-07-15 22:43:20.301337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:56.924 qpair failed and we were unable to recover it.
00:26:56.924 [2024-07-15 22:43:20.301582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.924 [2024-07-15 22:43:20.301612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:56.924 qpair failed and we were unable to recover it.
00:26:56.924 [2024-07-15 22:43:20.301762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.924 [2024-07-15 22:43:20.301791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:56.924 qpair failed and we were unable to recover it.
00:26:56.924 [2024-07-15 22:43:20.302021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.924 [2024-07-15 22:43:20.302051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:56.924 qpair failed and we were unable to recover it.
00:26:56.924 [2024-07-15 22:43:20.302274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.924 [2024-07-15 22:43:20.302289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:56.924 qpair failed and we were unable to recover it.
00:26:56.924 [2024-07-15 22:43:20.302417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.924 [2024-07-15 22:43:20.302431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:56.924 qpair failed and we were unable to recover it.
00:26:56.924 [2024-07-15 22:43:20.302560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.924 [2024-07-15 22:43:20.302574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:56.924 qpair failed and we were unable to recover it.
00:26:56.924 [2024-07-15 22:43:20.302694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.924 [2024-07-15 22:43:20.302707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:56.924 qpair failed and we were unable to recover it.
00:26:56.924 [2024-07-15 22:43:20.302957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.924 [2024-07-15 22:43:20.302970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:56.924 qpair failed and we were unable to recover it.
00:26:56.924 [2024-07-15 22:43:20.303131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.924 [2024-07-15 22:43:20.303144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:56.924 qpair failed and we were unable to recover it.
00:26:56.924 [2024-07-15 22:43:20.303266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.924 [2024-07-15 22:43:20.303280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:56.924 qpair failed and we were unable to recover it.
00:26:56.924 [2024-07-15 22:43:20.303407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.924 [2024-07-15 22:43:20.303420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:56.924 qpair failed and we were unable to recover it.
00:26:56.924 [2024-07-15 22:43:20.303631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.924 [2024-07-15 22:43:20.303660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:56.924 qpair failed and we were unable to recover it.
00:26:56.924 [2024-07-15 22:43:20.303811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.924 [2024-07-15 22:43:20.303846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:56.925 qpair failed and we were unable to recover it.
00:26:56.925 [2024-07-15 22:43:20.304012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.925 [2024-07-15 22:43:20.304042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:56.925 qpair failed and we were unable to recover it.
00:26:56.925 [2024-07-15 22:43:20.304261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.925 [2024-07-15 22:43:20.304290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:56.925 qpair failed and we were unable to recover it.
00:26:56.925 [2024-07-15 22:43:20.304511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.925 [2024-07-15 22:43:20.304541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:56.925 qpair failed and we were unable to recover it.
00:26:56.925 [2024-07-15 22:43:20.304723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.925 [2024-07-15 22:43:20.304752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:56.925 qpair failed and we were unable to recover it.
00:26:56.925 [2024-07-15 22:43:20.304906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.925 [2024-07-15 22:43:20.304919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:56.925 qpair failed and we were unable to recover it.
00:26:56.925 [2024-07-15 22:43:20.305124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.925 [2024-07-15 22:43:20.305138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:56.925 qpair failed and we were unable to recover it.
00:26:56.925 [2024-07-15 22:43:20.305237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.925 [2024-07-15 22:43:20.305251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:56.925 qpair failed and we were unable to recover it.
00:26:56.925 [2024-07-15 22:43:20.305394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.925 [2024-07-15 22:43:20.305407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:56.925 qpair failed and we were unable to recover it.
00:26:56.925 [2024-07-15 22:43:20.305543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.925 [2024-07-15 22:43:20.305556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:56.925 qpair failed and we were unable to recover it.
00:26:56.925 [2024-07-15 22:43:20.305738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.925 [2024-07-15 22:43:20.305752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:56.925 qpair failed and we were unable to recover it.
00:26:56.925 [2024-07-15 22:43:20.305955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.925 [2024-07-15 22:43:20.305969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:56.925 qpair failed and we were unable to recover it.
00:26:56.925 [2024-07-15 22:43:20.306111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.925 [2024-07-15 22:43:20.306125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:56.925 qpair failed and we were unable to recover it.
00:26:56.925 [2024-07-15 22:43:20.306286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.925 [2024-07-15 22:43:20.306300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:56.925 qpair failed and we were unable to recover it.
00:26:56.925 [2024-07-15 22:43:20.306436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.925 [2024-07-15 22:43:20.306449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:56.925 qpair failed and we were unable to recover it.
00:26:56.925 [2024-07-15 22:43:20.306581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.925 [2024-07-15 22:43:20.306595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:56.925 qpair failed and we were unable to recover it.
00:26:56.925 [2024-07-15 22:43:20.306727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.925 [2024-07-15 22:43:20.306740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:56.925 qpair failed and we were unable to recover it.
00:26:56.925 [2024-07-15 22:43:20.306879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.925 [2024-07-15 22:43:20.306893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:56.925 qpair failed and we were unable to recover it.
00:26:56.925 [2024-07-15 22:43:20.307144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.925 [2024-07-15 22:43:20.307157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:56.925 qpair failed and we were unable to recover it.
00:26:56.925 [2024-07-15 22:43:20.307322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.925 [2024-07-15 22:43:20.307336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:56.925 qpair failed and we were unable to recover it.
00:26:56.925 [2024-07-15 22:43:20.307472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.925 [2024-07-15 22:43:20.307486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:56.925 qpair failed and we were unable to recover it.
00:26:56.925 [2024-07-15 22:43:20.307675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.925 [2024-07-15 22:43:20.307689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:56.925 qpair failed and we were unable to recover it.
00:26:56.925 [2024-07-15 22:43:20.307873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.925 [2024-07-15 22:43:20.307886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:56.925 qpair failed and we were unable to recover it.
00:26:56.925 [2024-07-15 22:43:20.308017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.925 [2024-07-15 22:43:20.308051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:56.925 qpair failed and we were unable to recover it.
00:26:56.925 [2024-07-15 22:43:20.308204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.925 [2024-07-15 22:43:20.308255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:56.925 qpair failed and we were unable to recover it.
00:26:56.926 [2024-07-15 22:43:20.308546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.926 [2024-07-15 22:43:20.308576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:56.926 qpair failed and we were unable to recover it.
00:26:56.926 [2024-07-15 22:43:20.308763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.926 [2024-07-15 22:43:20.308793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:56.926 qpair failed and we were unable to recover it.
00:26:56.926 [2024-07-15 22:43:20.308970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.926 [2024-07-15 22:43:20.308999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:56.926 qpair failed and we were unable to recover it.
00:26:56.926 [2024-07-15 22:43:20.309249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.926 [2024-07-15 22:43:20.309280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:56.926 qpair failed and we were unable to recover it.
00:26:56.926 [2024-07-15 22:43:20.309538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.926 [2024-07-15 22:43:20.309567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:56.926 qpair failed and we were unable to recover it.
00:26:56.926 [2024-07-15 22:43:20.309746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.926 [2024-07-15 22:43:20.309775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:56.926 qpair failed and we were unable to recover it.
00:26:56.926 [2024-07-15 22:43:20.310096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.926 [2024-07-15 22:43:20.310109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:56.926 qpair failed and we were unable to recover it.
00:26:56.926 [2024-07-15 22:43:20.310287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.926 [2024-07-15 22:43:20.310317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:56.926 qpair failed and we were unable to recover it.
00:26:56.926 [2024-07-15 22:43:20.310570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.926 [2024-07-15 22:43:20.310599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:56.926 qpair failed and we were unable to recover it.
00:26:56.926 [2024-07-15 22:43:20.310757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.926 [2024-07-15 22:43:20.310786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:56.926 qpair failed and we were unable to recover it.
00:26:56.926 [2024-07-15 22:43:20.311008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.926 [2024-07-15 22:43:20.311038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:56.926 qpair failed and we were unable to recover it.
00:26:56.926 [2024-07-15 22:43:20.311183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.926 [2024-07-15 22:43:20.311196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:56.926 qpair failed and we were unable to recover it.
00:26:56.926 [2024-07-15 22:43:20.311325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.926 [2024-07-15 22:43:20.311339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:56.926 qpair failed and we were unable to recover it.
00:26:56.926 [2024-07-15 22:43:20.311470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.926 [2024-07-15 22:43:20.311483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:56.926 qpair failed and we were unable to recover it.
00:26:56.926 [2024-07-15 22:43:20.311681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.926 [2024-07-15 22:43:20.311694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:56.926 qpair failed and we were unable to recover it.
00:26:56.926 [2024-07-15 22:43:20.311827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.926 [2024-07-15 22:43:20.311843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:56.926 qpair failed and we were unable to recover it.
00:26:56.926 [2024-07-15 22:43:20.311987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.926 [2024-07-15 22:43:20.312000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:56.926 qpair failed and we were unable to recover it.
00:26:56.926 [2024-07-15 22:43:20.312123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.926 [2024-07-15 22:43:20.312136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:56.926 qpair failed and we were unable to recover it.
00:26:56.926 [2024-07-15 22:43:20.312252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.926 [2024-07-15 22:43:20.312266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:56.926 qpair failed and we were unable to recover it.
00:26:56.926 [2024-07-15 22:43:20.312463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.926 [2024-07-15 22:43:20.312476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 00:26:56.926 qpair failed and we were unable to recover it. 00:26:56.926 [2024-07-15 22:43:20.312725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.926 [2024-07-15 22:43:20.312739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 00:26:56.926 qpair failed and we were unable to recover it. 00:26:56.926 [2024-07-15 22:43:20.312928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.926 [2024-07-15 22:43:20.312942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 00:26:56.926 qpair failed and we were unable to recover it. 00:26:56.926 [2024-07-15 22:43:20.313075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.926 [2024-07-15 22:43:20.313089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 00:26:56.926 qpair failed and we were unable to recover it. 00:26:56.926 [2024-07-15 22:43:20.313231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.926 [2024-07-15 22:43:20.313244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 00:26:56.926 qpair failed and we were unable to recover it. 00:26:56.926 [2024-07-15 22:43:20.313357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.926 [2024-07-15 22:43:20.313370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 00:26:56.926 qpair failed and we were unable to recover it. 00:26:56.926 [2024-07-15 22:43:20.313482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.926 [2024-07-15 22:43:20.313495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 00:26:56.926 qpair failed and we were unable to recover it. 00:26:56.926 [2024-07-15 22:43:20.313626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.926 [2024-07-15 22:43:20.313639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 00:26:56.926 qpair failed and we were unable to recover it. 00:26:56.926 [2024-07-15 22:43:20.313820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.926 [2024-07-15 22:43:20.313833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 00:26:56.926 qpair failed and we were unable to recover it. 00:26:56.926 [2024-07-15 22:43:20.314014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.926 [2024-07-15 22:43:20.314028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 00:26:56.926 qpair failed and we were unable to recover it. 
00:26:56.926 [2024-07-15 22:43:20.314215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.926 [2024-07-15 22:43:20.314279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 00:26:56.926 qpair failed and we were unable to recover it. 00:26:56.926 [2024-07-15 22:43:20.314563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.926 [2024-07-15 22:43:20.314593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 00:26:56.926 qpair failed and we were unable to recover it. 00:26:56.926 [2024-07-15 22:43:20.314881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.926 [2024-07-15 22:43:20.314911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 00:26:56.926 qpair failed and we were unable to recover it. 00:26:56.926 [2024-07-15 22:43:20.315061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.927 [2024-07-15 22:43:20.315089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 00:26:56.927 qpair failed and we were unable to recover it. 00:26:56.927 [2024-07-15 22:43:20.315321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.927 [2024-07-15 22:43:20.315352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 00:26:56.927 qpair failed and we were unable to recover it. 00:26:56.927 [2024-07-15 22:43:20.315515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.927 [2024-07-15 22:43:20.315544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 00:26:56.927 qpair failed and we were unable to recover it. 00:26:56.927 [2024-07-15 22:43:20.315701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.927 [2024-07-15 22:43:20.315730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 00:26:56.927 qpair failed and we were unable to recover it. 00:26:56.927 [2024-07-15 22:43:20.315918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.927 [2024-07-15 22:43:20.315947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 00:26:56.927 qpair failed and we were unable to recover it. 00:26:56.927 [2024-07-15 22:43:20.316071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.927 [2024-07-15 22:43:20.316102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 00:26:56.927 qpair failed and we were unable to recover it. 00:26:56.927 [2024-07-15 22:43:20.316287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.927 [2024-07-15 22:43:20.316300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 00:26:56.927 qpair failed and we were unable to recover it. 
00:26:56.927 [2024-07-15 22:43:20.316425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.927 [2024-07-15 22:43:20.316438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 00:26:56.927 qpair failed and we were unable to recover it. 00:26:56.927 [2024-07-15 22:43:20.316646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.927 [2024-07-15 22:43:20.316660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 00:26:56.927 qpair failed and we were unable to recover it. 00:26:56.927 [2024-07-15 22:43:20.316855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.927 [2024-07-15 22:43:20.316869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 00:26:56.927 qpair failed and we were unable to recover it. 00:26:56.927 [2024-07-15 22:43:20.317075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.927 [2024-07-15 22:43:20.317089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 00:26:56.927 qpair failed and we were unable to recover it. 00:26:56.927 [2024-07-15 22:43:20.317215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.927 [2024-07-15 22:43:20.317232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 00:26:56.927 qpair failed and we were unable to recover it. 00:26:56.927 [2024-07-15 22:43:20.317482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.927 [2024-07-15 22:43:20.317495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 00:26:56.927 qpair failed and we were unable to recover it. 00:26:56.927 [2024-07-15 22:43:20.317741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.927 [2024-07-15 22:43:20.317754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 00:26:56.927 qpair failed and we were unable to recover it. 00:26:56.927 [2024-07-15 22:43:20.317892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.927 [2024-07-15 22:43:20.317905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 00:26:56.927 qpair failed and we were unable to recover it. 00:26:56.927 [2024-07-15 22:43:20.318024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.927 [2024-07-15 22:43:20.318038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 00:26:56.927 qpair failed and we were unable to recover it. 00:26:56.927 [2024-07-15 22:43:20.318170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.927 [2024-07-15 22:43:20.318184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 00:26:56.927 qpair failed and we were unable to recover it. 
00:26:56.927 [2024-07-15 22:43:20.318336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.927 [2024-07-15 22:43:20.318351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 00:26:56.927 qpair failed and we were unable to recover it. 00:26:56.927 [2024-07-15 22:43:20.318535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.927 [2024-07-15 22:43:20.318548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 00:26:56.927 qpair failed and we were unable to recover it. 00:26:56.927 [2024-07-15 22:43:20.318672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.927 [2024-07-15 22:43:20.318685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 00:26:56.927 qpair failed and we were unable to recover it. 00:26:56.927 [2024-07-15 22:43:20.318898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.927 [2024-07-15 22:43:20.318928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 00:26:56.927 qpair failed and we were unable to recover it. 00:26:56.927 [2024-07-15 22:43:20.319101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.927 [2024-07-15 22:43:20.319129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 00:26:56.927 qpair failed and we were unable to recover it. 00:26:56.927 [2024-07-15 22:43:20.319307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.927 [2024-07-15 22:43:20.319337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 00:26:56.927 qpair failed and we were unable to recover it. 00:26:56.927 [2024-07-15 22:43:20.319498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.927 [2024-07-15 22:43:20.319531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 00:26:56.927 qpair failed and we were unable to recover it. 00:26:56.927 [2024-07-15 22:43:20.319759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.927 [2024-07-15 22:43:20.319788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 00:26:56.927 qpair failed and we were unable to recover it. 00:26:56.927 [2024-07-15 22:43:20.319968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.927 [2024-07-15 22:43:20.319998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 00:26:56.927 qpair failed and we were unable to recover it. 00:26:56.927 [2024-07-15 22:43:20.320215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.927 [2024-07-15 22:43:20.320253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 00:26:56.927 qpair failed and we were unable to recover it. 
00:26:56.927 [2024-07-15 22:43:20.320401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.927 [2024-07-15 22:43:20.320414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 00:26:56.927 qpair failed and we were unable to recover it. 00:26:56.927 [2024-07-15 22:43:20.320534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.927 [2024-07-15 22:43:20.320547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 00:26:56.927 qpair failed and we were unable to recover it. 00:26:56.927 [2024-07-15 22:43:20.320810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.927 [2024-07-15 22:43:20.320823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 00:26:56.927 qpair failed and we were unable to recover it. 00:26:56.927 [2024-07-15 22:43:20.320921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.927 [2024-07-15 22:43:20.320934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 00:26:56.927 qpair failed and we were unable to recover it. 00:26:56.927 [2024-07-15 22:43:20.321068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.927 [2024-07-15 22:43:20.321081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 00:26:56.927 qpair failed and we were unable to recover it. 00:26:56.927 [2024-07-15 22:43:20.321195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.927 [2024-07-15 22:43:20.321208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 00:26:56.928 qpair failed and we were unable to recover it. 00:26:56.928 [2024-07-15 22:43:20.321394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.928 [2024-07-15 22:43:20.321408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 00:26:56.928 qpair failed and we were unable to recover it. 00:26:56.928 [2024-07-15 22:43:20.321525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.928 [2024-07-15 22:43:20.321538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 00:26:56.928 qpair failed and we were unable to recover it. 00:26:56.928 [2024-07-15 22:43:20.321738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.928 [2024-07-15 22:43:20.321752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 00:26:56.928 qpair failed and we were unable to recover it. 00:26:56.928 [2024-07-15 22:43:20.321869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.928 [2024-07-15 22:43:20.321882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 00:26:56.928 qpair failed and we were unable to recover it. 
00:26:56.928 [2024-07-15 22:43:20.322105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.928 [2024-07-15 22:43:20.322135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 00:26:56.928 qpair failed and we were unable to recover it. 00:26:56.928 [2024-07-15 22:43:20.322305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.928 [2024-07-15 22:43:20.322336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 00:26:56.928 qpair failed and we were unable to recover it. 00:26:56.928 [2024-07-15 22:43:20.322488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.928 [2024-07-15 22:43:20.322517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 00:26:56.928 qpair failed and we were unable to recover it. 00:26:56.928 [2024-07-15 22:43:20.322689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.928 [2024-07-15 22:43:20.322718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 00:26:56.928 qpair failed and we were unable to recover it. 00:26:56.928 [2024-07-15 22:43:20.322836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.928 [2024-07-15 22:43:20.322866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 00:26:56.928 qpair failed and we were unable to recover it. 00:26:56.928 [2024-07-15 22:43:20.323030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.928 [2024-07-15 22:43:20.323059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 00:26:56.928 qpair failed and we were unable to recover it. 00:26:56.928 [2024-07-15 22:43:20.323235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.928 [2024-07-15 22:43:20.323266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 00:26:56.928 qpair failed and we were unable to recover it. 00:26:56.928 [2024-07-15 22:43:20.323535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.928 [2024-07-15 22:43:20.323548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 00:26:56.928 qpair failed and we were unable to recover it. 00:26:56.928 [2024-07-15 22:43:20.323741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.928 [2024-07-15 22:43:20.323754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 00:26:56.928 qpair failed and we were unable to recover it. 00:26:56.928 [2024-07-15 22:43:20.324016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.928 [2024-07-15 22:43:20.324030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 00:26:56.928 qpair failed and we were unable to recover it. 
00:26:56.928 [2024-07-15 22:43:20.324309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.928 [2024-07-15 22:43:20.324324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 00:26:56.928 qpair failed and we were unable to recover it. 00:26:56.928 [2024-07-15 22:43:20.324508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.928 [2024-07-15 22:43:20.324528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 00:26:56.928 qpair failed and we were unable to recover it. 00:26:56.928 [2024-07-15 22:43:20.324686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.928 [2024-07-15 22:43:20.324700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 00:26:56.928 qpair failed and we were unable to recover it. 00:26:56.928 [2024-07-15 22:43:20.324800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.928 [2024-07-15 22:43:20.324830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.928 qpair failed and we were unable to recover it. 00:26:56.928 [2024-07-15 22:43:20.324962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.928 [2024-07-15 22:43:20.324972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.928 qpair failed and we were unable to recover it. 00:26:56.928 [2024-07-15 22:43:20.325223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.928 [2024-07-15 22:43:20.325271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.928 qpair failed and we were unable to recover it. 00:26:56.928 [2024-07-15 22:43:20.325437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.928 [2024-07-15 22:43:20.325468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.928 qpair failed and we were unable to recover it. 00:26:56.928 [2024-07-15 22:43:20.325699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.928 [2024-07-15 22:43:20.325729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.928 qpair failed and we were unable to recover it. 00:26:56.928 [2024-07-15 22:43:20.325984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.928 [2024-07-15 22:43:20.326013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.928 qpair failed and we were unable to recover it. 00:26:56.928 [2024-07-15 22:43:20.326193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.928 [2024-07-15 22:43:20.326222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.928 qpair failed and we were unable to recover it. 
00:26:56.928 [2024-07-15 22:43:20.326483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.928 [2024-07-15 22:43:20.326493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.928 qpair failed and we were unable to recover it. 00:26:56.928 [2024-07-15 22:43:20.326617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.928 [2024-07-15 22:43:20.326626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.928 qpair failed and we were unable to recover it. 00:26:56.928 [2024-07-15 22:43:20.326766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.928 [2024-07-15 22:43:20.326775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.928 qpair failed and we were unable to recover it. 00:26:56.928 [2024-07-15 22:43:20.326912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.928 [2024-07-15 22:43:20.326923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.928 qpair failed and we were unable to recover it. 00:26:56.928 [2024-07-15 22:43:20.327114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.928 [2024-07-15 22:43:20.327123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.928 qpair failed and we were unable to recover it. 00:26:56.928 [2024-07-15 22:43:20.327244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.928 [2024-07-15 22:43:20.327255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.928 qpair failed and we were unable to recover it. 00:26:56.928 [2024-07-15 22:43:20.327366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.928 [2024-07-15 22:43:20.327379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.928 qpair failed and we were unable to recover it. 00:26:56.928 [2024-07-15 22:43:20.327503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.928 [2024-07-15 22:43:20.327514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.928 qpair failed and we were unable to recover it. 00:26:56.929 [2024-07-15 22:43:20.327636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.929 [2024-07-15 22:43:20.327646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.929 qpair failed and we were unable to recover it. 00:26:56.929 [2024-07-15 22:43:20.327837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.929 [2024-07-15 22:43:20.327847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.929 qpair failed and we were unable to recover it. 
00:26:56.929 [2024-07-15 22:43:20.327962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.929 [2024-07-15 22:43:20.327973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.929 qpair failed and we were unable to recover it. 00:26:56.929 [2024-07-15 22:43:20.328093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.929 [2024-07-15 22:43:20.328102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.929 qpair failed and we were unable to recover it. 00:26:56.929 [2024-07-15 22:43:20.328211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.929 [2024-07-15 22:43:20.328222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.929 qpair failed and we were unable to recover it. 00:26:56.929 [2024-07-15 22:43:20.328329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.929 [2024-07-15 22:43:20.328339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.929 qpair failed and we were unable to recover it. 00:26:56.929 [2024-07-15 22:43:20.328453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.929 [2024-07-15 22:43:20.328463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.929 qpair failed and we were unable to recover it. 00:26:56.929 [2024-07-15 22:43:20.328636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.929 [2024-07-15 22:43:20.328645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.929 qpair failed and we were unable to recover it. 00:26:56.929 [2024-07-15 22:43:20.328771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.929 [2024-07-15 22:43:20.328781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.929 qpair failed and we were unable to recover it. 00:26:56.929 [2024-07-15 22:43:20.328892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.929 [2024-07-15 22:43:20.328901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.929 qpair failed and we were unable to recover it. 00:26:56.929 [2024-07-15 22:43:20.329124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.929 [2024-07-15 22:43:20.329134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.929 qpair failed and we were unable to recover it. 00:26:56.929 [2024-07-15 22:43:20.329308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.929 [2024-07-15 22:43:20.329319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.929 qpair failed and we were unable to recover it. 
00:26:56.929 [2024-07-15 22:43:20.329437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.929 [2024-07-15 22:43:20.329447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.929 qpair failed and we were unable to recover it. 00:26:56.929 [2024-07-15 22:43:20.329571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.929 [2024-07-15 22:43:20.329582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.929 qpair failed and we were unable to recover it. 00:26:56.929 [2024-07-15 22:43:20.329801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.929 [2024-07-15 22:43:20.329811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.929 qpair failed and we were unable to recover it. 00:26:56.929 [2024-07-15 22:43:20.329941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.929 [2024-07-15 22:43:20.329950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.929 qpair failed and we were unable to recover it. 00:26:56.929 [2024-07-15 22:43:20.330063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.929 [2024-07-15 22:43:20.330073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.929 qpair failed and we were unable to recover it. 00:26:56.929 [2024-07-15 22:43:20.330287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.929 [2024-07-15 22:43:20.330297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.929 qpair failed and we were unable to recover it. 00:26:56.929 [2024-07-15 22:43:20.330407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.929 [2024-07-15 22:43:20.330417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.929 qpair failed and we were unable to recover it. 00:26:56.929 [2024-07-15 22:43:20.330590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.929 [2024-07-15 22:43:20.330600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.929 qpair failed and we were unable to recover it. 00:26:56.929 [2024-07-15 22:43:20.330709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.929 [2024-07-15 22:43:20.330719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.929 qpair failed and we were unable to recover it. 00:26:56.929 [2024-07-15 22:43:20.330848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.929 [2024-07-15 22:43:20.330858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.929 qpair failed and we were unable to recover it. 
00:26:56.929 [2024-07-15 22:43:20.330968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.929 [2024-07-15 22:43:20.330978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.929 qpair failed and we were unable to recover it. 00:26:56.929 [2024-07-15 22:43:20.331118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.929 [2024-07-15 22:43:20.331129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.929 qpair failed and we were unable to recover it. 00:26:56.929 [2024-07-15 22:43:20.331240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.929 [2024-07-15 22:43:20.331250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.929 qpair failed and we were unable to recover it. 00:26:56.929 [2024-07-15 22:43:20.331388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.929 [2024-07-15 22:43:20.331400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.929 qpair failed and we were unable to recover it. 00:26:56.929 [2024-07-15 22:43:20.331616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.929 [2024-07-15 22:43:20.331627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.929 qpair failed and we were unable to recover it. 00:26:56.929 [2024-07-15 22:43:20.331734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.929 [2024-07-15 22:43:20.331743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.929 qpair failed and we were unable to recover it. 00:26:56.929 [2024-07-15 22:43:20.331923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.929 [2024-07-15 22:43:20.331933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.929 qpair failed and we were unable to recover it. 00:26:56.929 [2024-07-15 22:43:20.332069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.929 [2024-07-15 22:43:20.332079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.929 qpair failed and we were unable to recover it. 00:26:56.929 [2024-07-15 22:43:20.332265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.929 [2024-07-15 22:43:20.332275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.929 qpair failed and we were unable to recover it. 00:26:56.929 [2024-07-15 22:43:20.332553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.929 [2024-07-15 22:43:20.332564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.929 qpair failed and we were unable to recover it. 
00:26:56.930 [2024-07-15 22:43:20.332682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.930 [2024-07-15 22:43:20.332693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.930 qpair failed and we were unable to recover it. 00:26:56.930 [2024-07-15 22:43:20.332958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.930 [2024-07-15 22:43:20.332968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.930 qpair failed and we were unable to recover it. 00:26:56.930 [2024-07-15 22:43:20.333098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.930 [2024-07-15 22:43:20.333109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.930 qpair failed and we were unable to recover it. 00:26:56.930 [2024-07-15 22:43:20.333283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.930 [2024-07-15 22:43:20.333293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.930 qpair failed and we were unable to recover it. 00:26:56.930 [2024-07-15 22:43:20.333412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.930 [2024-07-15 22:43:20.333422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.930 qpair failed and we were unable to recover it. 00:26:56.930 [2024-07-15 22:43:20.333556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.930 [2024-07-15 22:43:20.333565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.930 qpair failed and we were unable to recover it. 00:26:56.930 [2024-07-15 22:43:20.333683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.930 [2024-07-15 22:43:20.333696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.930 qpair failed and we were unable to recover it. 00:26:56.930 [2024-07-15 22:43:20.333967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.930 [2024-07-15 22:43:20.333977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.930 qpair failed and we were unable to recover it. 00:26:56.930 [2024-07-15 22:43:20.334097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.930 [2024-07-15 22:43:20.334106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.930 qpair failed and we were unable to recover it. 00:26:56.930 [2024-07-15 22:43:20.334222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.930 [2024-07-15 22:43:20.334237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.930 qpair failed and we were unable to recover it. 
00:26:56.930 [2024-07-15 22:43:20.334453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.930 [2024-07-15 22:43:20.334464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.930 qpair failed and we were unable to recover it. 00:26:56.930 [2024-07-15 22:43:20.334652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.930 [2024-07-15 22:43:20.334681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.930 qpair failed and we were unable to recover it. 00:26:56.930 [2024-07-15 22:43:20.334904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.930 [2024-07-15 22:43:20.334933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.930 qpair failed and we were unable to recover it. 00:26:56.930 [2024-07-15 22:43:20.335104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.930 [2024-07-15 22:43:20.335133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.930 qpair failed and we were unable to recover it. 00:26:56.930 [2024-07-15 22:43:20.335293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.930 [2024-07-15 22:43:20.335303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.930 qpair failed and we were unable to recover it. 00:26:56.930 [2024-07-15 22:43:20.335495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.930 [2024-07-15 22:43:20.335505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.930 qpair failed and we were unable to recover it. 00:26:56.930 [2024-07-15 22:43:20.335681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.930 [2024-07-15 22:43:20.335691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.930 qpair failed and we were unable to recover it. 00:26:56.930 [2024-07-15 22:43:20.335814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.930 [2024-07-15 22:43:20.335824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.930 qpair failed and we were unable to recover it. 00:26:56.930 [2024-07-15 22:43:20.335945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.930 [2024-07-15 22:43:20.335956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.930 qpair failed and we were unable to recover it. 00:26:56.930 [2024-07-15 22:43:20.336069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.930 [2024-07-15 22:43:20.336078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.930 qpair failed and we were unable to recover it. 
00:26:56.930 [2024-07-15 22:43:20.336197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.930 [2024-07-15 22:43:20.336208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.930 qpair failed and we were unable to recover it. 00:26:56.930 [2024-07-15 22:43:20.336420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.930 [2024-07-15 22:43:20.336457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.930 qpair failed and we were unable to recover it. 00:26:56.930 [2024-07-15 22:43:20.336623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.930 [2024-07-15 22:43:20.336651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.930 qpair failed and we were unable to recover it. 00:26:56.930 [2024-07-15 22:43:20.336803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.930 [2024-07-15 22:43:20.336833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.930 qpair failed and we were unable to recover it. 00:26:56.930 [2024-07-15 22:43:20.336994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.930 [2024-07-15 22:43:20.337023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.930 qpair failed and we were unable to recover it. 00:26:56.930 [2024-07-15 22:43:20.337221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.930 [2024-07-15 22:43:20.337235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.930 qpair failed and we were unable to recover it. 00:26:56.931 [2024-07-15 22:43:20.337358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.931 [2024-07-15 22:43:20.337368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.931 qpair failed and we were unable to recover it. 00:26:56.931 [2024-07-15 22:43:20.337495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.931 [2024-07-15 22:43:20.337505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.931 qpair failed and we were unable to recover it. 00:26:56.931 [2024-07-15 22:43:20.337711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.931 [2024-07-15 22:43:20.337721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.931 qpair failed and we were unable to recover it. 00:26:56.931 [2024-07-15 22:43:20.337844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.931 [2024-07-15 22:43:20.337855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.931 qpair failed and we were unable to recover it. 
00:26:56.931 [2024-07-15 22:43:20.338030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.931 [2024-07-15 22:43:20.338040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.931 qpair failed and we were unable to recover it. 00:26:56.931 [2024-07-15 22:43:20.338229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.931 [2024-07-15 22:43:20.338239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.931 qpair failed and we were unable to recover it. 00:26:56.931 [2024-07-15 22:43:20.338359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.931 [2024-07-15 22:43:20.338369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.931 qpair failed and we were unable to recover it. 00:26:56.931 [2024-07-15 22:43:20.338480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.931 [2024-07-15 22:43:20.338490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.931 qpair failed and we were unable to recover it. 00:26:56.931 [2024-07-15 22:43:20.338612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.931 [2024-07-15 22:43:20.338623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.931 qpair failed and we were unable to recover it. 00:26:56.931 [2024-07-15 22:43:20.338740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.931 [2024-07-15 22:43:20.338750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.931 qpair failed and we were unable to recover it. 00:26:56.931 [2024-07-15 22:43:20.338931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.931 [2024-07-15 22:43:20.338941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.931 qpair failed and we were unable to recover it. 00:26:56.931 [2024-07-15 22:43:20.339055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.931 [2024-07-15 22:43:20.339065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.931 qpair failed and we were unable to recover it. 00:26:56.931 [2024-07-15 22:43:20.339183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.931 [2024-07-15 22:43:20.339193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.931 qpair failed and we were unable to recover it. 00:26:56.931 [2024-07-15 22:43:20.339383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.931 [2024-07-15 22:43:20.339393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.931 qpair failed and we were unable to recover it. 
00:26:56.931 [2024-07-15 22:43:20.339520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.931 [2024-07-15 22:43:20.339530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:56.931 qpair failed and we were unable to recover it.
00:26:56.938 [... the same three-line pattern repeats for every subsequent reconnect attempt, from 22:43:20.339737 through 22:43:20.379972: each connect() to 10.0.0.2 port 4420 fails with errno = 111, nvme_tcp_qpair_connect_sock reports the sock connection error for tqpair=0x7f4288000b90, and the qpair cannot be recovered ...]
00:26:56.938 [2024-07-15 22:43:20.380128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.938 [2024-07-15 22:43:20.380139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.938 qpair failed and we were unable to recover it. 00:26:56.938 [2024-07-15 22:43:20.380324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.938 [2024-07-15 22:43:20.380335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.938 qpair failed and we were unable to recover it. 00:26:56.938 [2024-07-15 22:43:20.380471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.938 [2024-07-15 22:43:20.380481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.938 qpair failed and we were unable to recover it. 00:26:56.938 [2024-07-15 22:43:20.380595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.938 [2024-07-15 22:43:20.380606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.938 qpair failed and we were unable to recover it. 00:26:56.938 [2024-07-15 22:43:20.380788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.938 [2024-07-15 22:43:20.380798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.938 qpair failed and we were unable to recover it. 00:26:56.938 [2024-07-15 22:43:20.380918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.938 [2024-07-15 22:43:20.380929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.938 qpair failed and we were unable to recover it. 00:26:56.938 [2024-07-15 22:43:20.381094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.938 [2024-07-15 22:43:20.381104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.938 qpair failed and we were unable to recover it. 00:26:56.938 [2024-07-15 22:43:20.381209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.938 [2024-07-15 22:43:20.381218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.938 qpair failed and we were unable to recover it. 00:26:56.938 [2024-07-15 22:43:20.381400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.938 [2024-07-15 22:43:20.381411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.938 qpair failed and we were unable to recover it. 00:26:56.938 [2024-07-15 22:43:20.381546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.938 [2024-07-15 22:43:20.381556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.938 qpair failed and we were unable to recover it. 
00:26:56.938 [2024-07-15 22:43:20.381686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.938 [2024-07-15 22:43:20.381696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.938 qpair failed and we were unable to recover it. 00:26:56.938 [2024-07-15 22:43:20.381823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.938 [2024-07-15 22:43:20.381833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.938 qpair failed and we were unable to recover it. 00:26:56.938 [2024-07-15 22:43:20.381948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.938 [2024-07-15 22:43:20.381959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.938 qpair failed and we were unable to recover it. 00:26:56.938 [2024-07-15 22:43:20.382100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.938 [2024-07-15 22:43:20.382110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.938 qpair failed and we were unable to recover it. 00:26:56.938 [2024-07-15 22:43:20.382323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.938 [2024-07-15 22:43:20.382354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.938 qpair failed and we were unable to recover it. 00:26:56.938 [2024-07-15 22:43:20.382520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.938 [2024-07-15 22:43:20.382549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.938 qpair failed and we were unable to recover it. 00:26:56.938 [2024-07-15 22:43:20.382703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.938 [2024-07-15 22:43:20.382733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.938 qpair failed and we were unable to recover it. 00:26:56.938 [2024-07-15 22:43:20.382957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.938 [2024-07-15 22:43:20.382986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.938 qpair failed and we were unable to recover it. 00:26:56.938 [2024-07-15 22:43:20.383218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.938 [2024-07-15 22:43:20.383257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.938 qpair failed and we were unable to recover it. 00:26:56.938 [2024-07-15 22:43:20.383428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.938 [2024-07-15 22:43:20.383457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.938 qpair failed and we were unable to recover it. 
00:26:56.938 [2024-07-15 22:43:20.383665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.938 [2024-07-15 22:43:20.383674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.938 qpair failed and we were unable to recover it. 00:26:56.938 [2024-07-15 22:43:20.383809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.938 [2024-07-15 22:43:20.383819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.938 qpair failed and we were unable to recover it. 00:26:56.938 [2024-07-15 22:43:20.383933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.938 [2024-07-15 22:43:20.383943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.938 qpair failed and we were unable to recover it. 00:26:56.938 [2024-07-15 22:43:20.384120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.938 [2024-07-15 22:43:20.384131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.938 qpair failed and we were unable to recover it. 00:26:56.938 [2024-07-15 22:43:20.384322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.938 [2024-07-15 22:43:20.384333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.938 qpair failed and we were unable to recover it. 00:26:56.938 [2024-07-15 22:43:20.384468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.938 [2024-07-15 22:43:20.384492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.938 qpair failed and we were unable to recover it. 00:26:56.938 [2024-07-15 22:43:20.384664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.938 [2024-07-15 22:43:20.384692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.938 qpair failed and we were unable to recover it. 00:26:56.939 [2024-07-15 22:43:20.384814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.939 [2024-07-15 22:43:20.384842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.939 qpair failed and we were unable to recover it. 00:26:56.939 [2024-07-15 22:43:20.385003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.939 [2024-07-15 22:43:20.385039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.939 qpair failed and we were unable to recover it. 00:26:56.939 [2024-07-15 22:43:20.385156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.939 [2024-07-15 22:43:20.385166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.939 qpair failed and we were unable to recover it. 
00:26:56.939 [2024-07-15 22:43:20.385426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.939 [2024-07-15 22:43:20.385457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.939 qpair failed and we were unable to recover it. 00:26:56.939 [2024-07-15 22:43:20.385677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.939 [2024-07-15 22:43:20.385707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.939 qpair failed and we were unable to recover it. 00:26:56.939 [2024-07-15 22:43:20.385948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.939 [2024-07-15 22:43:20.385977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.939 qpair failed and we were unable to recover it. 00:26:56.939 [2024-07-15 22:43:20.386131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.939 [2024-07-15 22:43:20.386160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.939 qpair failed and we were unable to recover it. 00:26:56.939 [2024-07-15 22:43:20.386322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.939 [2024-07-15 22:43:20.386352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.939 qpair failed and we were unable to recover it. 00:26:56.939 [2024-07-15 22:43:20.386507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.939 [2024-07-15 22:43:20.386516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.939 qpair failed and we were unable to recover it. 00:26:56.939 [2024-07-15 22:43:20.386630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.939 [2024-07-15 22:43:20.386640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.939 qpair failed and we were unable to recover it. 00:26:56.939 [2024-07-15 22:43:20.386721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.939 [2024-07-15 22:43:20.386730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.939 qpair failed and we were unable to recover it. 00:26:56.939 [2024-07-15 22:43:20.386907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.939 [2024-07-15 22:43:20.386917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.939 qpair failed and we were unable to recover it. 00:26:56.939 [2024-07-15 22:43:20.387045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.939 [2024-07-15 22:43:20.387054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.939 qpair failed and we were unable to recover it. 
00:26:56.939 [2024-07-15 22:43:20.387172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.939 [2024-07-15 22:43:20.387183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.939 qpair failed and we were unable to recover it. 00:26:56.939 [2024-07-15 22:43:20.387322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.939 [2024-07-15 22:43:20.387332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.939 qpair failed and we were unable to recover it. 00:26:56.939 [2024-07-15 22:43:20.387455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.939 [2024-07-15 22:43:20.387465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.939 qpair failed and we were unable to recover it. 00:26:56.939 [2024-07-15 22:43:20.387643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.939 [2024-07-15 22:43:20.387655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.939 qpair failed and we were unable to recover it. 00:26:56.939 [2024-07-15 22:43:20.387778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.939 [2024-07-15 22:43:20.387788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.939 qpair failed and we were unable to recover it. 00:26:56.939 [2024-07-15 22:43:20.387901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.939 [2024-07-15 22:43:20.387912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.939 qpair failed and we were unable to recover it. 00:26:56.939 [2024-07-15 22:43:20.388086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.939 [2024-07-15 22:43:20.388096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.939 qpair failed and we were unable to recover it. 00:26:56.939 [2024-07-15 22:43:20.388277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.939 [2024-07-15 22:43:20.388306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.939 qpair failed and we were unable to recover it. 00:26:56.939 [2024-07-15 22:43:20.388471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.939 [2024-07-15 22:43:20.388500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.939 qpair failed and we were unable to recover it. 00:26:56.939 [2024-07-15 22:43:20.388667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.939 [2024-07-15 22:43:20.388696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.939 qpair failed and we were unable to recover it. 
00:26:56.939 [2024-07-15 22:43:20.388856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.939 [2024-07-15 22:43:20.388887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.939 qpair failed and we were unable to recover it. 00:26:56.939 [2024-07-15 22:43:20.389105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.939 [2024-07-15 22:43:20.389135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.939 qpair failed and we were unable to recover it. 00:26:56.939 [2024-07-15 22:43:20.389301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.939 [2024-07-15 22:43:20.389338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.939 qpair failed and we were unable to recover it. 00:26:56.939 [2024-07-15 22:43:20.389476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.939 [2024-07-15 22:43:20.389505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.939 qpair failed and we were unable to recover it. 00:26:56.939 [2024-07-15 22:43:20.389675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.939 [2024-07-15 22:43:20.389685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.939 qpair failed and we were unable to recover it. 00:26:56.939 [2024-07-15 22:43:20.389806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.939 [2024-07-15 22:43:20.389816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.939 qpair failed and we were unable to recover it. 00:26:56.939 [2024-07-15 22:43:20.389926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.939 [2024-07-15 22:43:20.389937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.939 qpair failed and we were unable to recover it. 00:26:56.939 [2024-07-15 22:43:20.390152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.939 [2024-07-15 22:43:20.390163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.939 qpair failed and we were unable to recover it. 00:26:56.939 [2024-07-15 22:43:20.390294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.939 [2024-07-15 22:43:20.390304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.939 qpair failed and we were unable to recover it. 00:26:56.939 [2024-07-15 22:43:20.390523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.940 [2024-07-15 22:43:20.390534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.940 qpair failed and we were unable to recover it. 
00:26:56.940 [2024-07-15 22:43:20.390649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.940 [2024-07-15 22:43:20.390659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.940 qpair failed and we were unable to recover it. 00:26:56.940 [2024-07-15 22:43:20.390731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.940 [2024-07-15 22:43:20.390740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.940 qpair failed and we were unable to recover it. 00:26:56.940 [2024-07-15 22:43:20.390854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.940 [2024-07-15 22:43:20.390865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.940 qpair failed and we were unable to recover it. 00:26:56.940 [2024-07-15 22:43:20.390994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.940 [2024-07-15 22:43:20.391007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.940 qpair failed and we were unable to recover it. 00:26:56.940 [2024-07-15 22:43:20.391116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.940 [2024-07-15 22:43:20.391127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.940 qpair failed and we were unable to recover it. 00:26:56.940 [2024-07-15 22:43:20.391315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.940 [2024-07-15 22:43:20.391325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.940 qpair failed and we were unable to recover it. 00:26:56.940 [2024-07-15 22:43:20.391436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.940 [2024-07-15 22:43:20.391446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.940 qpair failed and we were unable to recover it. 00:26:56.940 [2024-07-15 22:43:20.391689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.940 [2024-07-15 22:43:20.391699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.940 qpair failed and we were unable to recover it. 00:26:56.940 [2024-07-15 22:43:20.391879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.940 [2024-07-15 22:43:20.391890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.940 qpair failed and we were unable to recover it. 00:26:56.940 [2024-07-15 22:43:20.392096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.940 [2024-07-15 22:43:20.392106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.940 qpair failed and we were unable to recover it. 
00:26:56.940 [2024-07-15 22:43:20.392230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.940 [2024-07-15 22:43:20.392241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.940 qpair failed and we were unable to recover it. 00:26:56.940 [2024-07-15 22:43:20.392417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.940 [2024-07-15 22:43:20.392426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.940 qpair failed and we were unable to recover it. 00:26:56.940 [2024-07-15 22:43:20.392532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.940 [2024-07-15 22:43:20.392542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.940 qpair failed and we were unable to recover it. 00:26:56.940 [2024-07-15 22:43:20.392652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.940 [2024-07-15 22:43:20.392662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.940 qpair failed and we were unable to recover it. 00:26:56.940 [2024-07-15 22:43:20.392869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.940 [2024-07-15 22:43:20.392879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.940 qpair failed and we were unable to recover it. 00:26:56.940 [2024-07-15 22:43:20.393000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.940 [2024-07-15 22:43:20.393011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.940 qpair failed and we were unable to recover it. 00:26:56.940 [2024-07-15 22:43:20.393120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.940 [2024-07-15 22:43:20.393130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.940 qpair failed and we were unable to recover it. 00:26:56.940 [2024-07-15 22:43:20.393270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.940 [2024-07-15 22:43:20.393281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.940 qpair failed and we were unable to recover it. 00:26:56.940 [2024-07-15 22:43:20.393419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.940 [2024-07-15 22:43:20.393429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.940 qpair failed and we were unable to recover it. 00:26:56.940 [2024-07-15 22:43:20.393696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.940 [2024-07-15 22:43:20.393707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.940 qpair failed and we were unable to recover it. 
00:26:56.940 [2024-07-15 22:43:20.393907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.940 [2024-07-15 22:43:20.393917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.940 qpair failed and we were unable to recover it. 00:26:56.940 [2024-07-15 22:43:20.394034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.940 [2024-07-15 22:43:20.394044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.940 qpair failed and we were unable to recover it. 00:26:56.940 [2024-07-15 22:43:20.394227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.940 [2024-07-15 22:43:20.394237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.940 qpair failed and we were unable to recover it. 00:26:56.940 [2024-07-15 22:43:20.394363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.940 [2024-07-15 22:43:20.394374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.940 qpair failed and we were unable to recover it. 00:26:56.940 [2024-07-15 22:43:20.394495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.940 [2024-07-15 22:43:20.394505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.940 qpair failed and we were unable to recover it. 00:26:56.940 [2024-07-15 22:43:20.394681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.940 [2024-07-15 22:43:20.394691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.940 qpair failed and we were unable to recover it. 00:26:56.940 [2024-07-15 22:43:20.394816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.940 [2024-07-15 22:43:20.394828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.940 qpair failed and we were unable to recover it. 00:26:56.940 [2024-07-15 22:43:20.394935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.940 [2024-07-15 22:43:20.394944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.940 qpair failed and we were unable to recover it. 00:26:56.940 [2024-07-15 22:43:20.395119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.940 [2024-07-15 22:43:20.395129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.940 qpair failed and we were unable to recover it. 00:26:56.940 [2024-07-15 22:43:20.395267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.940 [2024-07-15 22:43:20.395277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.940 qpair failed and we were unable to recover it. 
00:26:56.940 [2024-07-15 22:43:20.395378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.940 [2024-07-15 22:43:20.395388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.940 qpair failed and we were unable to recover it. 00:26:56.940 [2024-07-15 22:43:20.395459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.940 [2024-07-15 22:43:20.395468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.941 qpair failed and we were unable to recover it. 00:26:56.941 [2024-07-15 22:43:20.395579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.941 [2024-07-15 22:43:20.395590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.941 qpair failed and we were unable to recover it. 00:26:56.941 [2024-07-15 22:43:20.395711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.941 [2024-07-15 22:43:20.395721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.941 qpair failed and we were unable to recover it. 00:26:56.941 [2024-07-15 22:43:20.395828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.941 [2024-07-15 22:43:20.395839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.941 qpair failed and we were unable to recover it. 00:26:56.941 [2024-07-15 22:43:20.395947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.941 [2024-07-15 22:43:20.395957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.941 qpair failed and we were unable to recover it. 00:26:56.941 [2024-07-15 22:43:20.396136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.941 [2024-07-15 22:43:20.396146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.941 qpair failed and we were unable to recover it. 00:26:56.941 [2024-07-15 22:43:20.396282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.941 [2024-07-15 22:43:20.396292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.941 qpair failed and we were unable to recover it. 00:26:56.941 [2024-07-15 22:43:20.396469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.941 [2024-07-15 22:43:20.396479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.941 qpair failed and we were unable to recover it. 00:26:56.941 [2024-07-15 22:43:20.396606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.941 [2024-07-15 22:43:20.396616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.941 qpair failed and we were unable to recover it. 
00:26:56.941 [2024-07-15 22:43:20.396749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.941 [2024-07-15 22:43:20.396760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.941 qpair failed and we were unable to recover it. 00:26:56.941 [2024-07-15 22:43:20.396945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.941 [2024-07-15 22:43:20.396955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.941 qpair failed and we were unable to recover it. 00:26:56.941 [2024-07-15 22:43:20.397135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.941 [2024-07-15 22:43:20.397145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.941 qpair failed and we were unable to recover it. 00:26:56.941 [2024-07-15 22:43:20.397266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.941 [2024-07-15 22:43:20.397278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.941 qpair failed and we were unable to recover it. 00:26:56.941 [2024-07-15 22:43:20.397389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.941 [2024-07-15 22:43:20.397399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.941 qpair failed and we were unable to recover it. 00:26:56.941 [2024-07-15 22:43:20.397505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.941 [2024-07-15 22:43:20.397515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.941 qpair failed and we were unable to recover it. 00:26:56.941 [2024-07-15 22:43:20.397625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.941 [2024-07-15 22:43:20.397634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.941 qpair failed and we were unable to recover it. 00:26:56.941 [2024-07-15 22:43:20.397763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.941 [2024-07-15 22:43:20.397774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.941 qpair failed and we were unable to recover it. 00:26:56.941 [2024-07-15 22:43:20.397881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.941 [2024-07-15 22:43:20.397891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.941 qpair failed and we were unable to recover it. 00:26:56.941 [2024-07-15 22:43:20.398073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.941 [2024-07-15 22:43:20.398084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.941 qpair failed and we were unable to recover it. 
00:26:56.941 [2024-07-15 22:43:20.398193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.941 [2024-07-15 22:43:20.398202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.941 qpair failed and we were unable to recover it. 00:26:56.941 [2024-07-15 22:43:20.398395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.941 [2024-07-15 22:43:20.398405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.941 qpair failed and we were unable to recover it. 00:26:56.941 [2024-07-15 22:43:20.398588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.941 [2024-07-15 22:43:20.398598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.941 qpair failed and we were unable to recover it. 00:26:56.941 [2024-07-15 22:43:20.398734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.941 [2024-07-15 22:43:20.398744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.941 qpair failed and we were unable to recover it. 00:26:56.941 [2024-07-15 22:43:20.398857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.941 [2024-07-15 22:43:20.398866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.941 qpair failed and we were unable to recover it. 00:26:56.941 [2024-07-15 22:43:20.399041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.941 [2024-07-15 22:43:20.399051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.941 qpair failed and we were unable to recover it. 00:26:56.941 [2024-07-15 22:43:20.399162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.941 [2024-07-15 22:43:20.399172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.941 qpair failed and we were unable to recover it. 00:26:56.941 [2024-07-15 22:43:20.399362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.941 [2024-07-15 22:43:20.399373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.941 qpair failed and we were unable to recover it. 00:26:56.941 [2024-07-15 22:43:20.399554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.941 [2024-07-15 22:43:20.399564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.941 qpair failed and we were unable to recover it. 00:26:56.941 [2024-07-15 22:43:20.399678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.941 [2024-07-15 22:43:20.399688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.941 qpair failed and we were unable to recover it. 
00:26:56.941 [2024-07-15 22:43:20.399801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.941 [2024-07-15 22:43:20.399811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.941 qpair failed and we were unable to recover it. 00:26:56.941 [2024-07-15 22:43:20.399918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.941 [2024-07-15 22:43:20.399928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.941 qpair failed and we were unable to recover it. 00:26:56.941 [2024-07-15 22:43:20.400105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.941 [2024-07-15 22:43:20.400116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.941 qpair failed and we were unable to recover it. 00:26:56.941 [2024-07-15 22:43:20.400243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.941 [2024-07-15 22:43:20.400253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.941 qpair failed and we were unable to recover it. 00:26:56.941 [2024-07-15 22:43:20.400377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.941 [2024-07-15 22:43:20.400387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.941 qpair failed and we were unable to recover it. 00:26:56.941 [2024-07-15 22:43:20.400512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.941 [2024-07-15 22:43:20.400522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.941 qpair failed and we were unable to recover it. 00:26:56.941 [2024-07-15 22:43:20.400595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.941 [2024-07-15 22:43:20.400604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.941 qpair failed and we were unable to recover it. 00:26:56.941 [2024-07-15 22:43:20.400721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.941 [2024-07-15 22:43:20.400731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.941 qpair failed and we were unable to recover it. 00:26:56.941 [2024-07-15 22:43:20.400854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.941 [2024-07-15 22:43:20.400864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.941 qpair failed and we were unable to recover it. 00:26:56.941 [2024-07-15 22:43:20.400974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.941 [2024-07-15 22:43:20.400984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.941 qpair failed and we were unable to recover it. 
00:26:56.941 [2024-07-15 22:43:20.401103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.941 [2024-07-15 22:43:20.401112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:56.941 qpair failed and we were unable to recover it.
00:26:56.941-00:26:56.945 [... same connect()-failed / qpair-failed triple for tqpair=0x7f4288000b90 repeated with only the timestamps changing, 2024-07-15 22:43:20.401230 through 22:43:20.429718 ...]
00:26:56.945 [2024-07-15 22:43:20.429844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.945 [2024-07-15 22:43:20.429855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:56.945 qpair failed and we were unable to recover it.
00:26:56.945 [2024-07-15 22:43:20.430039] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19cc000 is same with the state(5) to be set
00:26:56.945 [2024-07-15 22:43:20.430288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.945 [2024-07-15 22:43:20.430356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:56.945 qpair failed and we were unable to recover it.
00:26:56.945-00:26:56.946 [... same triple for tqpair=0x7f4280000b90 repeated with only the timestamps changing, 2024-07-15 22:43:20.430591 through 22:43:20.436972 ...]
00:26:56.946 [2024-07-15 22:43:20.437052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.946 [2024-07-15 22:43:20.437065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:56.946 qpair failed and we were unable to recover it.
00:26:56.946 [2024-07-15 22:43:20.437202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.946 [2024-07-15 22:43:20.437214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:56.946 qpair failed and we were unable to recover it.
00:26:56.946 [... same triple, back on tqpair=0x7f4288000b90, repeated with only the timestamps changing, 2024-07-15 22:43:20.437488 through 22:43:20.439680 ...]
00:26:56.946 [2024-07-15 22:43:20.439885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.946 [2024-07-15 22:43:20.439895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:56.946 qpair failed and we were unable to recover it.
00:26:56.946 [2024-07-15 22:43:20.440014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.946 [2024-07-15 22:43:20.440024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.946 qpair failed and we were unable to recover it. 00:26:56.946 [2024-07-15 22:43:20.440169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.946 [2024-07-15 22:43:20.440179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.946 qpair failed and we were unable to recover it. 00:26:56.946 [2024-07-15 22:43:20.440337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.946 [2024-07-15 22:43:20.440367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.946 qpair failed and we were unable to recover it. 00:26:56.946 [2024-07-15 22:43:20.440598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.946 [2024-07-15 22:43:20.440629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.946 qpair failed and we were unable to recover it. 00:26:56.946 [2024-07-15 22:43:20.440913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.946 [2024-07-15 22:43:20.440943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.946 qpair failed and we were unable to recover it. 00:26:56.946 [2024-07-15 22:43:20.441293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.946 [2024-07-15 22:43:20.441324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.946 qpair failed and we were unable to recover it. 00:26:56.946 [2024-07-15 22:43:20.441576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.946 [2024-07-15 22:43:20.441605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.946 qpair failed and we were unable to recover it. 00:26:56.946 [2024-07-15 22:43:20.441775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.946 [2024-07-15 22:43:20.441804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.946 qpair failed and we were unable to recover it. 00:26:56.946 [2024-07-15 22:43:20.441968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.946 [2024-07-15 22:43:20.441998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.946 qpair failed and we were unable to recover it. 00:26:56.946 [2024-07-15 22:43:20.442219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.946 [2024-07-15 22:43:20.442282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.946 qpair failed and we were unable to recover it. 
00:26:56.946 [2024-07-15 22:43:20.442435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.946 [2024-07-15 22:43:20.442464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.946 qpair failed and we were unable to recover it. 00:26:56.946 [2024-07-15 22:43:20.442760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.946 [2024-07-15 22:43:20.442770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.946 qpair failed and we were unable to recover it. 00:26:56.946 [2024-07-15 22:43:20.442947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.946 [2024-07-15 22:43:20.442957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.946 qpair failed and we were unable to recover it. 00:26:56.946 [2024-07-15 22:43:20.443104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.946 [2024-07-15 22:43:20.443114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.946 qpair failed and we were unable to recover it. 00:26:56.946 [2024-07-15 22:43:20.443370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.946 [2024-07-15 22:43:20.443401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.946 qpair failed and we were unable to recover it. 00:26:56.946 [2024-07-15 22:43:20.443651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.946 [2024-07-15 22:43:20.443681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.946 qpair failed and we were unable to recover it. 00:26:56.946 [2024-07-15 22:43:20.443887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.946 [2024-07-15 22:43:20.443916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.946 qpair failed and we were unable to recover it. 00:26:56.946 [2024-07-15 22:43:20.444173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.946 [2024-07-15 22:43:20.444203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.946 qpair failed and we were unable to recover it. 00:26:56.946 [2024-07-15 22:43:20.444382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.946 [2024-07-15 22:43:20.444417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.946 qpair failed and we were unable to recover it. 00:26:56.946 [2024-07-15 22:43:20.444567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.946 [2024-07-15 22:43:20.444577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.946 qpair failed and we were unable to recover it. 
00:26:56.946 [2024-07-15 22:43:20.444703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.946 [2024-07-15 22:43:20.444713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.946 qpair failed and we were unable to recover it. 00:26:56.946 [2024-07-15 22:43:20.444855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.946 [2024-07-15 22:43:20.444865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.946 qpair failed and we were unable to recover it. 00:26:56.946 [2024-07-15 22:43:20.445050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.946 [2024-07-15 22:43:20.445060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.946 qpair failed and we were unable to recover it. 00:26:56.946 [2024-07-15 22:43:20.445178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.946 [2024-07-15 22:43:20.445187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.946 qpair failed and we were unable to recover it. 00:26:56.946 [2024-07-15 22:43:20.445362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.946 [2024-07-15 22:43:20.445372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.946 qpair failed and we were unable to recover it. 00:26:56.947 [2024-07-15 22:43:20.445492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.947 [2024-07-15 22:43:20.445501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.947 qpair failed and we were unable to recover it. 00:26:56.947 [2024-07-15 22:43:20.445677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.947 [2024-07-15 22:43:20.445687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.947 qpair failed and we were unable to recover it. 00:26:56.947 [2024-07-15 22:43:20.445811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.947 [2024-07-15 22:43:20.445821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.947 qpair failed and we were unable to recover it. 00:26:56.947 [2024-07-15 22:43:20.445939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.947 [2024-07-15 22:43:20.445949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.947 qpair failed and we were unable to recover it. 00:26:56.947 [2024-07-15 22:43:20.446063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.947 [2024-07-15 22:43:20.446072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.947 qpair failed and we were unable to recover it. 
00:26:56.947 [2024-07-15 22:43:20.446194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.947 [2024-07-15 22:43:20.446204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.947 qpair failed and we were unable to recover it. 00:26:56.947 [2024-07-15 22:43:20.446353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.947 [2024-07-15 22:43:20.446365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.947 qpair failed and we were unable to recover it. 00:26:56.947 [2024-07-15 22:43:20.446479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.947 [2024-07-15 22:43:20.446489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.947 qpair failed and we were unable to recover it. 00:26:56.947 [2024-07-15 22:43:20.446594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.947 [2024-07-15 22:43:20.446604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.947 qpair failed and we were unable to recover it. 00:26:56.947 [2024-07-15 22:43:20.446695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.947 [2024-07-15 22:43:20.446704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.947 qpair failed and we were unable to recover it. 00:26:56.947 [2024-07-15 22:43:20.446818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.947 [2024-07-15 22:43:20.446828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.947 qpair failed and we were unable to recover it. 00:26:56.947 [2024-07-15 22:43:20.447009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.947 [2024-07-15 22:43:20.447020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.947 qpair failed and we were unable to recover it. 00:26:56.947 [2024-07-15 22:43:20.447132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.947 [2024-07-15 22:43:20.447142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.947 qpair failed and we were unable to recover it. 00:26:56.947 [2024-07-15 22:43:20.447272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.947 [2024-07-15 22:43:20.447282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.947 qpair failed and we were unable to recover it. 00:26:56.947 [2024-07-15 22:43:20.447411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.947 [2024-07-15 22:43:20.447421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.947 qpair failed and we were unable to recover it. 
00:26:56.947 [2024-07-15 22:43:20.447533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.947 [2024-07-15 22:43:20.447542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.947 qpair failed and we were unable to recover it. 00:26:56.947 [2024-07-15 22:43:20.447663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.947 [2024-07-15 22:43:20.447673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.947 qpair failed and we were unable to recover it. 00:26:56.947 [2024-07-15 22:43:20.447786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.947 [2024-07-15 22:43:20.447795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.947 qpair failed and we were unable to recover it. 00:26:56.947 [2024-07-15 22:43:20.447970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.947 [2024-07-15 22:43:20.447981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.947 qpair failed and we were unable to recover it. 00:26:56.947 [2024-07-15 22:43:20.448152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.947 [2024-07-15 22:43:20.448164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.947 qpair failed and we were unable to recover it. 00:26:56.947 [2024-07-15 22:43:20.448344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.947 [2024-07-15 22:43:20.448355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.947 qpair failed and we were unable to recover it. 00:26:56.947 [2024-07-15 22:43:20.448480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.947 [2024-07-15 22:43:20.448490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.947 qpair failed and we were unable to recover it. 00:26:56.947 [2024-07-15 22:43:20.448623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.947 [2024-07-15 22:43:20.448633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.947 qpair failed and we were unable to recover it. 00:26:56.947 [2024-07-15 22:43:20.448756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.947 [2024-07-15 22:43:20.448766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.947 qpair failed and we were unable to recover it. 00:26:56.947 [2024-07-15 22:43:20.448877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.947 [2024-07-15 22:43:20.448887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.947 qpair failed and we were unable to recover it. 
00:26:56.947 [2024-07-15 22:43:20.449008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.947 [2024-07-15 22:43:20.449018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.947 qpair failed and we were unable to recover it. 00:26:56.947 [2024-07-15 22:43:20.449197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.947 [2024-07-15 22:43:20.449207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.947 qpair failed and we were unable to recover it. 00:26:56.947 [2024-07-15 22:43:20.449385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.947 [2024-07-15 22:43:20.449395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.947 qpair failed and we were unable to recover it. 00:26:56.947 [2024-07-15 22:43:20.449570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.947 [2024-07-15 22:43:20.449580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.947 qpair failed and we were unable to recover it. 00:26:56.947 [2024-07-15 22:43:20.449705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.947 [2024-07-15 22:43:20.449714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.947 qpair failed and we were unable to recover it. 00:26:56.947 [2024-07-15 22:43:20.450011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.947 [2024-07-15 22:43:20.450041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.947 qpair failed and we were unable to recover it. 00:26:56.947 [2024-07-15 22:43:20.450213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.947 [2024-07-15 22:43:20.450252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.947 qpair failed and we were unable to recover it. 00:26:56.947 [2024-07-15 22:43:20.450405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.947 [2024-07-15 22:43:20.450433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.947 qpair failed and we were unable to recover it. 00:26:56.947 [2024-07-15 22:43:20.450660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.947 [2024-07-15 22:43:20.450671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.947 qpair failed and we were unable to recover it. 00:26:56.947 [2024-07-15 22:43:20.450861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.947 [2024-07-15 22:43:20.450872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.947 qpair failed and we were unable to recover it. 
00:26:56.947 [2024-07-15 22:43:20.451045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.947 [2024-07-15 22:43:20.451055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.947 qpair failed and we were unable to recover it. 00:26:56.947 [2024-07-15 22:43:20.451163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.947 [2024-07-15 22:43:20.451173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.947 qpair failed and we were unable to recover it. 00:26:56.947 [2024-07-15 22:43:20.451347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.947 [2024-07-15 22:43:20.451357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.947 qpair failed and we were unable to recover it. 00:26:56.947 [2024-07-15 22:43:20.451480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.947 [2024-07-15 22:43:20.451490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.947 qpair failed and we were unable to recover it. 00:26:56.947 [2024-07-15 22:43:20.451680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.947 [2024-07-15 22:43:20.451690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.947 qpair failed and we were unable to recover it. 00:26:56.947 [2024-07-15 22:43:20.451811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.947 [2024-07-15 22:43:20.451821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.947 qpair failed and we were unable to recover it. 00:26:56.947 [2024-07-15 22:43:20.451942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.947 [2024-07-15 22:43:20.451952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.947 qpair failed and we were unable to recover it. 00:26:56.947 [2024-07-15 22:43:20.452140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.947 [2024-07-15 22:43:20.452150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.947 qpair failed and we were unable to recover it. 00:26:56.947 [2024-07-15 22:43:20.452352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.947 [2024-07-15 22:43:20.452363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.947 qpair failed and we were unable to recover it. 00:26:56.947 [2024-07-15 22:43:20.452535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.947 [2024-07-15 22:43:20.452545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.947 qpair failed and we were unable to recover it. 
00:26:56.947 [2024-07-15 22:43:20.452670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.947 [2024-07-15 22:43:20.452680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.947 qpair failed and we were unable to recover it. 00:26:56.947 [2024-07-15 22:43:20.452927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.948 [2024-07-15 22:43:20.452937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.948 qpair failed and we were unable to recover it. 00:26:56.948 [2024-07-15 22:43:20.453059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.948 [2024-07-15 22:43:20.453070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.948 qpair failed and we were unable to recover it. 00:26:56.948 [2024-07-15 22:43:20.453243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.948 [2024-07-15 22:43:20.453253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.948 qpair failed and we were unable to recover it. 00:26:56.948 [2024-07-15 22:43:20.453369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.948 [2024-07-15 22:43:20.453379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.948 qpair failed and we were unable to recover it. 00:26:56.948 [2024-07-15 22:43:20.453490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.948 [2024-07-15 22:43:20.453500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.948 qpair failed and we were unable to recover it. 00:26:56.948 [2024-07-15 22:43:20.453760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.948 [2024-07-15 22:43:20.453771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.948 qpair failed and we were unable to recover it. 00:26:56.948 [2024-07-15 22:43:20.453885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.948 [2024-07-15 22:43:20.453895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.948 qpair failed and we were unable to recover it. 00:26:56.948 [2024-07-15 22:43:20.454008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.948 [2024-07-15 22:43:20.454019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.948 qpair failed and we were unable to recover it. 00:26:56.948 [2024-07-15 22:43:20.454141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.948 [2024-07-15 22:43:20.454151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.948 qpair failed and we were unable to recover it. 
00:26:56.948 [2024-07-15 22:43:20.454347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.948 [2024-07-15 22:43:20.454358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.948 qpair failed and we were unable to recover it. 00:26:56.948 [2024-07-15 22:43:20.454577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.948 [2024-07-15 22:43:20.454587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.948 qpair failed and we were unable to recover it. 00:26:56.948 [2024-07-15 22:43:20.454851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.948 [2024-07-15 22:43:20.454861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.948 qpair failed and we were unable to recover it. 00:26:56.948 [2024-07-15 22:43:20.455048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.948 [2024-07-15 22:43:20.455058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.948 qpair failed and we were unable to recover it. 00:26:56.948 [2024-07-15 22:43:20.455242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.948 [2024-07-15 22:43:20.455252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.948 qpair failed and we were unable to recover it. 00:26:56.948 [2024-07-15 22:43:20.455379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.948 [2024-07-15 22:43:20.455390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.948 qpair failed and we were unable to recover it. 00:26:56.948 [2024-07-15 22:43:20.455515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.948 [2024-07-15 22:43:20.455526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.948 qpair failed and we were unable to recover it. 00:26:56.948 [2024-07-15 22:43:20.455660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.948 [2024-07-15 22:43:20.455670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.948 qpair failed and we were unable to recover it. 00:26:56.948 [2024-07-15 22:43:20.455835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.948 [2024-07-15 22:43:20.455846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.948 qpair failed and we were unable to recover it. 00:26:56.948 [2024-07-15 22:43:20.455993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.948 [2024-07-15 22:43:20.456023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.948 qpair failed and we were unable to recover it. 
00:26:56.948 [2024-07-15 22:43:20.456212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.948 [2024-07-15 22:43:20.456252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.948 qpair failed and we were unable to recover it. 00:26:56.948 [2024-07-15 22:43:20.456433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.948 [2024-07-15 22:43:20.456461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.948 qpair failed and we were unable to recover it. 00:26:56.948 [2024-07-15 22:43:20.456689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.948 [2024-07-15 22:43:20.456698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.948 qpair failed and we were unable to recover it. 00:26:56.948 [2024-07-15 22:43:20.456878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.948 [2024-07-15 22:43:20.456888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.948 qpair failed and we were unable to recover it. 00:26:56.948 [2024-07-15 22:43:20.457003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.948 [2024-07-15 22:43:20.457013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.948 qpair failed and we were unable to recover it. 00:26:56.948 [2024-07-15 22:43:20.457206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.948 [2024-07-15 22:43:20.457216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.948 qpair failed and we were unable to recover it. 00:26:56.948 [2024-07-15 22:43:20.457399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.948 [2024-07-15 22:43:20.457409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.948 qpair failed and we were unable to recover it. 00:26:56.948 [2024-07-15 22:43:20.457682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.948 [2024-07-15 22:43:20.457712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.948 qpair failed and we were unable to recover it. 00:26:56.948 [2024-07-15 22:43:20.457947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.948 [2024-07-15 22:43:20.457981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.948 qpair failed and we were unable to recover it. 00:26:56.948 [2024-07-15 22:43:20.458292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.948 [2024-07-15 22:43:20.458323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.948 qpair failed and we were unable to recover it. 
00:26:56.948 [2024-07-15 22:43:20.458591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.948 [2024-07-15 22:43:20.458620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.948 qpair failed and we were unable to recover it. 00:26:56.948 [2024-07-15 22:43:20.458867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.948 [2024-07-15 22:43:20.458877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.948 qpair failed and we were unable to recover it. 00:26:56.948 [2024-07-15 22:43:20.459000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.948 [2024-07-15 22:43:20.459010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.948 qpair failed and we were unable to recover it. 00:26:56.948 [2024-07-15 22:43:20.459261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.948 [2024-07-15 22:43:20.459290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.948 qpair failed and we were unable to recover it. 00:26:56.948 [2024-07-15 22:43:20.459516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.948 [2024-07-15 22:43:20.459545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.948 qpair failed and we were unable to recover it. 00:26:56.948 [2024-07-15 22:43:20.459830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.948 [2024-07-15 22:43:20.459860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.948 qpair failed and we were unable to recover it. 00:26:56.948 [2024-07-15 22:43:20.460031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.948 [2024-07-15 22:43:20.460060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.948 qpair failed and we were unable to recover it. 00:26:56.948 [2024-07-15 22:43:20.460296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.948 [2024-07-15 22:43:20.460326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.948 qpair failed and we were unable to recover it. 00:26:56.948 [2024-07-15 22:43:20.460504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.948 [2024-07-15 22:43:20.460533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.948 qpair failed and we were unable to recover it. 00:26:56.948 [2024-07-15 22:43:20.460847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.948 [2024-07-15 22:43:20.460876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.948 qpair failed and we were unable to recover it. 
00:26:56.948 [2024-07-15 22:43:20.461164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.948 [2024-07-15 22:43:20.461193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.948 qpair failed and we were unable to recover it. 00:26:56.948 [2024-07-15 22:43:20.461436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.948 [2024-07-15 22:43:20.461446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.948 qpair failed and we were unable to recover it. 00:26:56.948 [2024-07-15 22:43:20.461633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.948 [2024-07-15 22:43:20.461643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.949 qpair failed and we were unable to recover it. 00:26:56.949 [2024-07-15 22:43:20.461911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.949 [2024-07-15 22:43:20.461920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.949 qpair failed and we were unable to recover it. 00:26:56.949 [2024-07-15 22:43:20.462061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.949 [2024-07-15 22:43:20.462071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.949 qpair failed and we were unable to recover it. 00:26:56.949 [2024-07-15 22:43:20.462292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.949 [2024-07-15 22:43:20.462303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.949 qpair failed and we were unable to recover it. 00:26:56.949 [2024-07-15 22:43:20.462488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.949 [2024-07-15 22:43:20.462498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.949 qpair failed and we were unable to recover it. 00:26:56.949 [2024-07-15 22:43:20.462615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.949 [2024-07-15 22:43:20.462627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.949 qpair failed and we were unable to recover it. 00:26:56.949 [2024-07-15 22:43:20.462803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.949 [2024-07-15 22:43:20.462813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.949 qpair failed and we were unable to recover it. 00:26:56.949 [2024-07-15 22:43:20.462951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.949 [2024-07-15 22:43:20.462961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.949 qpair failed and we were unable to recover it. 
00:26:56.949 [2024-07-15 22:43:20.463088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.949 [2024-07-15 22:43:20.463098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.949 qpair failed and we were unable to recover it. 00:26:56.949 [2024-07-15 22:43:20.463317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.949 [2024-07-15 22:43:20.463327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.949 qpair failed and we were unable to recover it. 00:26:56.949 [2024-07-15 22:43:20.463512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.949 [2024-07-15 22:43:20.463522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.949 qpair failed and we were unable to recover it. 00:26:56.949 [2024-07-15 22:43:20.463634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.949 [2024-07-15 22:43:20.463645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.949 qpair failed and we were unable to recover it. 00:26:56.949 [2024-07-15 22:43:20.463827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.949 [2024-07-15 22:43:20.463856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.949 qpair failed and we were unable to recover it. 00:26:56.949 [2024-07-15 22:43:20.464034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.949 [2024-07-15 22:43:20.464068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 00:26:56.949 qpair failed and we were unable to recover it. 00:26:56.949 [2024-07-15 22:43:20.464311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.949 [2024-07-15 22:43:20.464326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 00:26:56.949 qpair failed and we were unable to recover it. 00:26:56.949 [2024-07-15 22:43:20.464520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.949 [2024-07-15 22:43:20.464533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 00:26:56.949 qpair failed and we were unable to recover it. 00:26:56.949 [2024-07-15 22:43:20.464667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.949 [2024-07-15 22:43:20.464680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 00:26:56.949 qpair failed and we were unable to recover it. 00:26:56.949 [2024-07-15 22:43:20.464809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.949 [2024-07-15 22:43:20.464822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 00:26:56.949 qpair failed and we were unable to recover it. 
00:26:56.949 [2024-07-15 22:43:20.464966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.949 [2024-07-15 22:43:20.464980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 00:26:56.949 qpair failed and we were unable to recover it. 00:26:56.949 [2024-07-15 22:43:20.465099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.949 [2024-07-15 22:43:20.465113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 00:26:56.949 qpair failed and we were unable to recover it. 00:26:56.949 [2024-07-15 22:43:20.465334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.949 [2024-07-15 22:43:20.465348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 00:26:56.949 qpair failed and we were unable to recover it. 00:26:56.949 [2024-07-15 22:43:20.465552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.949 [2024-07-15 22:43:20.465565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 00:26:56.949 qpair failed and we were unable to recover it. 00:26:56.949 [2024-07-15 22:43:20.465751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.949 [2024-07-15 22:43:20.465764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 00:26:56.949 qpair failed and we were unable to recover it. 00:26:56.949 [2024-07-15 22:43:20.466022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.949 [2024-07-15 22:43:20.466051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 00:26:56.949 qpair failed and we were unable to recover it. 00:26:56.949 [2024-07-15 22:43:20.466267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.949 [2024-07-15 22:43:20.466298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 00:26:56.949 qpair failed and we were unable to recover it. 00:26:56.949 [2024-07-15 22:43:20.466468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.949 [2024-07-15 22:43:20.466497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 00:26:56.949 qpair failed and we were unable to recover it. 00:26:56.949 [2024-07-15 22:43:20.466724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.949 [2024-07-15 22:43:20.466740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 00:26:56.949 qpair failed and we were unable to recover it. 00:26:56.949 [2024-07-15 22:43:20.466940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.949 [2024-07-15 22:43:20.466954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 00:26:56.949 qpair failed and we were unable to recover it. 
00:26:56.949 [2024-07-15 22:43:20.467141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.949 [2024-07-15 22:43:20.467154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 00:26:56.949 qpair failed and we were unable to recover it. 00:26:56.949 [2024-07-15 22:43:20.467278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.949 [2024-07-15 22:43:20.467292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 00:26:56.949 qpair failed and we were unable to recover it. 00:26:56.949 [2024-07-15 22:43:20.467565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.949 [2024-07-15 22:43:20.467594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 00:26:56.949 qpair failed and we were unable to recover it. 00:26:56.949 [2024-07-15 22:43:20.467820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.949 [2024-07-15 22:43:20.467849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 00:26:56.949 qpair failed and we were unable to recover it. 00:26:56.949 [2024-07-15 22:43:20.468069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.949 [2024-07-15 22:43:20.468098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 00:26:56.949 qpair failed and we were unable to recover it. 00:26:56.949 [2024-07-15 22:43:20.468267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.949 [2024-07-15 22:43:20.468281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 00:26:56.949 qpair failed and we were unable to recover it. 00:26:56.949 [2024-07-15 22:43:20.468539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.949 [2024-07-15 22:43:20.468569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 00:26:56.949 qpair failed and we were unable to recover it. 00:26:56.949 [2024-07-15 22:43:20.468883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.949 [2024-07-15 22:43:20.468913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 00:26:56.949 qpair failed and we were unable to recover it. 00:26:56.949 [2024-07-15 22:43:20.469134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.949 [2024-07-15 22:43:20.469163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 00:26:56.949 qpair failed and we were unable to recover it. 00:26:56.949 [2024-07-15 22:43:20.469384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.949 [2024-07-15 22:43:20.469398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 00:26:56.949 qpair failed and we were unable to recover it. 
00:26:56.949 [2024-07-15 22:43:20.469548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.949 [2024-07-15 22:43:20.469562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 00:26:56.949 qpair failed and we were unable to recover it. 00:26:56.949 [2024-07-15 22:43:20.469832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.949 [2024-07-15 22:43:20.469862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 00:26:56.949 qpair failed and we were unable to recover it. 00:26:56.949 [2024-07-15 22:43:20.470092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.949 [2024-07-15 22:43:20.470123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 00:26:56.949 qpair failed and we were unable to recover it. 00:26:56.949 [2024-07-15 22:43:20.470346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.949 [2024-07-15 22:43:20.470360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 00:26:56.949 qpair failed and we were unable to recover it. 00:26:56.949 [2024-07-15 22:43:20.470551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.949 [2024-07-15 22:43:20.470565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 00:26:56.949 qpair failed and we were unable to recover it. 00:26:56.949 [2024-07-15 22:43:20.470837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.949 [2024-07-15 22:43:20.470851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 00:26:56.949 qpair failed and we were unable to recover it. 00:26:56.949 [2024-07-15 22:43:20.471106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.949 [2024-07-15 22:43:20.471119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 00:26:56.949 qpair failed and we were unable to recover it. 00:26:56.949 [2024-07-15 22:43:20.471331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.949 [2024-07-15 22:43:20.471345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 00:26:56.949 qpair failed and we were unable to recover it. 00:26:56.949 [2024-07-15 22:43:20.471556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.949 [2024-07-15 22:43:20.471570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 00:26:56.949 qpair failed and we were unable to recover it. 00:26:56.949 [2024-07-15 22:43:20.471700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.949 [2024-07-15 22:43:20.471713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 00:26:56.949 qpair failed and we were unable to recover it. 
00:26:56.950 [2024-07-15 22:43:20.471913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.950 [2024-07-15 22:43:20.471927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 00:26:56.950 qpair failed and we were unable to recover it. 00:26:56.950 [2024-07-15 22:43:20.472047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.950 [2024-07-15 22:43:20.472060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 00:26:56.950 qpair failed and we were unable to recover it. 00:26:56.950 [2024-07-15 22:43:20.472278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.950 [2024-07-15 22:43:20.472297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 00:26:56.950 qpair failed and we were unable to recover it. 00:26:56.950 [2024-07-15 22:43:20.472519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.950 [2024-07-15 22:43:20.472532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 00:26:56.950 qpair failed and we were unable to recover it. 00:26:56.950 [2024-07-15 22:43:20.472663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.950 [2024-07-15 22:43:20.472677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 00:26:56.950 qpair failed and we were unable to recover it. 00:26:56.950 [2024-07-15 22:43:20.472844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.950 [2024-07-15 22:43:20.472878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:56.950 qpair failed and we were unable to recover it. 00:26:56.950 [2024-07-15 22:43:20.473101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.950 [2024-07-15 22:43:20.473134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:56.950 qpair failed and we were unable to recover it. 00:26:56.950 [2024-07-15 22:43:20.473331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.950 [2024-07-15 22:43:20.473343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.950 qpair failed and we were unable to recover it. 00:26:56.950 [2024-07-15 22:43:20.473542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.950 [2024-07-15 22:43:20.473552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.950 qpair failed and we were unable to recover it. 00:26:56.950 [2024-07-15 22:43:20.473685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.950 [2024-07-15 22:43:20.473695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.950 qpair failed and we were unable to recover it. 
00:26:56.950 [2024-07-15 22:43:20.473819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.950 [2024-07-15 22:43:20.473828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.950 qpair failed and we were unable to recover it. 00:26:56.950 [2024-07-15 22:43:20.474015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.950 [2024-07-15 22:43:20.474025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.950 qpair failed and we were unable to recover it. 00:26:56.950 [2024-07-15 22:43:20.474271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.950 [2024-07-15 22:43:20.474281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.950 qpair failed and we were unable to recover it. 00:26:56.950 [2024-07-15 22:43:20.474471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.950 [2024-07-15 22:43:20.474500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.950 qpair failed and we were unable to recover it. 00:26:56.950 [2024-07-15 22:43:20.474663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.950 [2024-07-15 22:43:20.474691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.950 qpair failed and we were unable to recover it. 00:26:56.950 [2024-07-15 22:43:20.474860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.950 [2024-07-15 22:43:20.474888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.950 qpair failed and we were unable to recover it. 00:26:56.950 [2024-07-15 22:43:20.475037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.950 [2024-07-15 22:43:20.475067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.950 qpair failed and we were unable to recover it. 00:26:56.950 [2024-07-15 22:43:20.475294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.950 [2024-07-15 22:43:20.475325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.950 qpair failed and we were unable to recover it. 00:26:56.950 [2024-07-15 22:43:20.475585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.950 [2024-07-15 22:43:20.475620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.950 qpair failed and we were unable to recover it. 00:26:56.950 [2024-07-15 22:43:20.475835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.950 [2024-07-15 22:43:20.475865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.950 qpair failed and we were unable to recover it. 
00:26:56.950 [2024-07-15 22:43:20.476086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.950 [2024-07-15 22:43:20.476116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.950 qpair failed and we were unable to recover it. 00:26:56.950 [2024-07-15 22:43:20.476406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.950 [2024-07-15 22:43:20.476437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.950 qpair failed and we were unable to recover it. 00:26:56.950 [2024-07-15 22:43:20.476608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.950 [2024-07-15 22:43:20.476637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.950 qpair failed and we were unable to recover it. 00:26:56.950 [2024-07-15 22:43:20.476949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.950 [2024-07-15 22:43:20.476959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.950 qpair failed and we were unable to recover it. 00:26:56.950 [2024-07-15 22:43:20.477164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.950 [2024-07-15 22:43:20.477173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.950 qpair failed and we were unable to recover it. 00:26:56.950 [2024-07-15 22:43:20.477448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.950 [2024-07-15 22:43:20.477458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.950 qpair failed and we were unable to recover it. 00:26:56.950 [2024-07-15 22:43:20.477698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.950 [2024-07-15 22:43:20.477708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.950 qpair failed and we were unable to recover it. 00:26:56.950 [2024-07-15 22:43:20.477946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.950 [2024-07-15 22:43:20.477955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.950 qpair failed and we were unable to recover it. 00:26:56.950 [2024-07-15 22:43:20.478154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.950 [2024-07-15 22:43:20.478164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.950 qpair failed and we were unable to recover it. 00:26:56.950 [2024-07-15 22:43:20.478346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.950 [2024-07-15 22:43:20.478356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.950 qpair failed and we were unable to recover it. 
00:26:56.950 [2024-07-15 22:43:20.478475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.950 [2024-07-15 22:43:20.478485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.950 qpair failed and we were unable to recover it. 00:26:56.950 [2024-07-15 22:43:20.478675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.950 [2024-07-15 22:43:20.478685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.950 qpair failed and we were unable to recover it. 00:26:56.950 [2024-07-15 22:43:20.478865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.950 [2024-07-15 22:43:20.478874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.950 qpair failed and we were unable to recover it. 00:26:56.950 [2024-07-15 22:43:20.478994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.950 [2024-07-15 22:43:20.479004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.950 qpair failed and we were unable to recover it. 00:26:56.950 [2024-07-15 22:43:20.479141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.950 [2024-07-15 22:43:20.479151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.950 qpair failed and we were unable to recover it. 00:26:56.950 [2024-07-15 22:43:20.479344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.950 [2024-07-15 22:43:20.479354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.950 qpair failed and we were unable to recover it. 00:26:56.950 [2024-07-15 22:43:20.479488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.950 [2024-07-15 22:43:20.479498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.950 qpair failed and we were unable to recover it. 00:26:56.950 [2024-07-15 22:43:20.479624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.950 [2024-07-15 22:43:20.479634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.950 qpair failed and we were unable to recover it. 00:26:56.950 [2024-07-15 22:43:20.479761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.950 [2024-07-15 22:43:20.479771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.950 qpair failed and we were unable to recover it. 00:26:56.950 [2024-07-15 22:43:20.479890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.950 [2024-07-15 22:43:20.479901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.950 qpair failed and we were unable to recover it. 
00:26:56.950 [2024-07-15 22:43:20.480030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.950 [2024-07-15 22:43:20.480040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.950 qpair failed and we were unable to recover it. 00:26:56.950 [2024-07-15 22:43:20.480296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.950 [2024-07-15 22:43:20.480307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.950 qpair failed and we were unable to recover it. 00:26:56.950 [2024-07-15 22:43:20.480575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.950 [2024-07-15 22:43:20.480585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.950 qpair failed and we were unable to recover it. 00:26:56.950 [2024-07-15 22:43:20.480825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.950 [2024-07-15 22:43:20.480834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.950 qpair failed and we were unable to recover it. 00:26:56.950 [2024-07-15 22:43:20.480958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.950 [2024-07-15 22:43:20.480967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.950 qpair failed and we were unable to recover it. 00:26:56.950 [2024-07-15 22:43:20.481088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.950 [2024-07-15 22:43:20.481107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:56.950 qpair failed and we were unable to recover it. 00:26:56.950 [2024-07-15 22:43:20.481242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.950 [2024-07-15 22:43:20.481257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:56.950 qpair failed and we were unable to recover it. 00:26:56.950 [2024-07-15 22:43:20.481458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.951 [2024-07-15 22:43:20.481494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:56.951 qpair failed and we were unable to recover it. 00:26:56.951 [2024-07-15 22:43:20.481788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.951 [2024-07-15 22:43:20.481819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:56.951 qpair failed and we were unable to recover it. 00:26:56.951 [2024-07-15 22:43:20.482046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.951 [2024-07-15 22:43:20.482076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:56.951 qpair failed and we were unable to recover it. 
00:26:56.951 [2024-07-15 22:43:20.482244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.951 [2024-07-15 22:43:20.482274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:56.951 qpair failed and we were unable to recover it. 00:26:56.951 [2024-07-15 22:43:20.482436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.951 [2024-07-15 22:43:20.482466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:56.951 qpair failed and we were unable to recover it. 00:26:56.951 [2024-07-15 22:43:20.482795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.951 [2024-07-15 22:43:20.482825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:56.951 qpair failed and we were unable to recover it. 00:26:56.951 [2024-07-15 22:43:20.483011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.951 [2024-07-15 22:43:20.483041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:56.951 qpair failed and we were unable to recover it. 00:26:56.951 [2024-07-15 22:43:20.483322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.951 [2024-07-15 22:43:20.483354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:56.951 qpair failed and we were unable to recover it. 00:26:56.951 [2024-07-15 22:43:20.483588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.951 [2024-07-15 22:43:20.483618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:56.951 qpair failed and we were unable to recover it. 00:26:56.951 [2024-07-15 22:43:20.483836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.951 [2024-07-15 22:43:20.483865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:56.951 qpair failed and we were unable to recover it. 00:26:56.951 [2024-07-15 22:43:20.484022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.951 [2024-07-15 22:43:20.484051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:56.951 qpair failed and we were unable to recover it. 00:26:56.951 [2024-07-15 22:43:20.484345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.951 [2024-07-15 22:43:20.484379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:56.951 qpair failed and we were unable to recover it. 00:26:56.951 [2024-07-15 22:43:20.484636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.951 [2024-07-15 22:43:20.484649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:56.951 qpair failed and we were unable to recover it. 
00:26:56.951 [2024-07-15 22:43:20.484890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.951 [2024-07-15 22:43:20.484903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:56.951 qpair failed and we were unable to recover it. 00:26:56.951 [2024-07-15 22:43:20.485033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.951 [2024-07-15 22:43:20.485048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:56.951 qpair failed and we were unable to recover it. 00:26:56.951 [2024-07-15 22:43:20.485326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.951 [2024-07-15 22:43:20.485358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:56.951 qpair failed and we were unable to recover it. 00:26:56.951 [2024-07-15 22:43:20.485589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.951 [2024-07-15 22:43:20.485619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:56.951 qpair failed and we were unable to recover it. 00:26:56.951 [2024-07-15 22:43:20.485852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.951 [2024-07-15 22:43:20.485882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:56.951 qpair failed and we were unable to recover it. 00:26:56.951 [2024-07-15 22:43:20.486117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.951 [2024-07-15 22:43:20.486146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:56.951 qpair failed and we were unable to recover it. 00:26:56.951 [2024-07-15 22:43:20.486449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.951 [2024-07-15 22:43:20.486463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:56.951 qpair failed and we were unable to recover it. 00:26:56.951 [2024-07-15 22:43:20.486710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.951 [2024-07-15 22:43:20.486724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:56.951 qpair failed and we were unable to recover it. 00:26:56.951 [2024-07-15 22:43:20.486940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.951 [2024-07-15 22:43:20.486954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:56.951 qpair failed and we were unable to recover it. 00:26:56.951 [2024-07-15 22:43:20.487198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.951 [2024-07-15 22:43:20.487212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:56.951 qpair failed and we were unable to recover it. 
00:26:56.951 [2024-07-15 22:43:20.487381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.951 [2024-07-15 22:43:20.487415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:56.951 qpair failed and we were unable to recover it. 00:26:56.951 [2024-07-15 22:43:20.487681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.951 [2024-07-15 22:43:20.487713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:56.951 qpair failed and we were unable to recover it. 00:26:56.951 [2024-07-15 22:43:20.487955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.951 [2024-07-15 22:43:20.487994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:56.951 qpair failed and we were unable to recover it. 00:26:56.951 [2024-07-15 22:43:20.488242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.951 [2024-07-15 22:43:20.488273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:56.951 qpair failed and we were unable to recover it. 00:26:56.951 [2024-07-15 22:43:20.488498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.951 [2024-07-15 22:43:20.488528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:56.951 qpair failed and we were unable to recover it. 00:26:56.951 [2024-07-15 22:43:20.488740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.951 [2024-07-15 22:43:20.488754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:56.951 qpair failed and we were unable to recover it. 00:26:56.951 [2024-07-15 22:43:20.489001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.951 [2024-07-15 22:43:20.489014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:56.951 qpair failed and we were unable to recover it. 00:26:56.951 [2024-07-15 22:43:20.489210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.951 [2024-07-15 22:43:20.489230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:56.951 qpair failed and we were unable to recover it. 00:26:56.951 [2024-07-15 22:43:20.489430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.951 [2024-07-15 22:43:20.489444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:56.951 qpair failed and we were unable to recover it. 00:26:56.951 [2024-07-15 22:43:20.489644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.951 [2024-07-15 22:43:20.489673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:56.951 qpair failed and we were unable to recover it. 
00:26:56.951 [2024-07-15 22:43:20.489969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.951 [2024-07-15 22:43:20.489998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:56.951 qpair failed and we were unable to recover it. 00:26:56.951 [2024-07-15 22:43:20.490162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.951 [2024-07-15 22:43:20.490191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:56.951 qpair failed and we were unable to recover it. 00:26:56.951 [2024-07-15 22:43:20.490391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.951 [2024-07-15 22:43:20.490422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:56.951 qpair failed and we were unable to recover it. 00:26:56.951 [2024-07-15 22:43:20.490649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.951 [2024-07-15 22:43:20.490678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:56.951 qpair failed and we were unable to recover it. 00:26:56.951 [2024-07-15 22:43:20.490889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.951 [2024-07-15 22:43:20.490919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:56.951 qpair failed and we were unable to recover it. 00:26:56.951 [2024-07-15 22:43:20.491235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.951 [2024-07-15 22:43:20.491272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:56.951 qpair failed and we were unable to recover it. 00:26:56.951 [2024-07-15 22:43:20.491523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.951 [2024-07-15 22:43:20.491536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:56.951 qpair failed and we were unable to recover it. 00:26:56.951 [2024-07-15 22:43:20.491626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.951 [2024-07-15 22:43:20.491639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:56.951 qpair failed and we were unable to recover it. 00:26:56.951 [2024-07-15 22:43:20.491913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.951 [2024-07-15 22:43:20.491926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:56.951 qpair failed and we were unable to recover it. 00:26:56.951 [2024-07-15 22:43:20.492123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.951 [2024-07-15 22:43:20.492136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:56.951 qpair failed and we were unable to recover it. 
00:26:56.951 [2024-07-15 22:43:20.492350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.951 [2024-07-15 22:43:20.492364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:56.951 qpair failed and we were unable to recover it. 00:26:56.951 [2024-07-15 22:43:20.492565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.951 [2024-07-15 22:43:20.492578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:56.951 qpair failed and we were unable to recover it. 00:26:56.951 [2024-07-15 22:43:20.492846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.951 [2024-07-15 22:43:20.492860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:56.951 qpair failed and we were unable to recover it. 00:26:56.951 [2024-07-15 22:43:20.493070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.951 [2024-07-15 22:43:20.493083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:56.951 qpair failed and we were unable to recover it. 00:26:56.951 [2024-07-15 22:43:20.493214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.951 [2024-07-15 22:43:20.493231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:56.951 qpair failed and we were unable to recover it. 00:26:56.951 [2024-07-15 22:43:20.493370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.952 [2024-07-15 22:43:20.493384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:56.952 qpair failed and we were unable to recover it. 00:26:56.952 [2024-07-15 22:43:20.493632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.952 [2024-07-15 22:43:20.493645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:56.952 qpair failed and we were unable to recover it. 00:26:56.952 [2024-07-15 22:43:20.493860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.952 [2024-07-15 22:43:20.493874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:56.952 qpair failed and we were unable to recover it. 00:26:56.952 [2024-07-15 22:43:20.494060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.952 [2024-07-15 22:43:20.494074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:56.952 qpair failed and we were unable to recover it. 00:26:56.952 [2024-07-15 22:43:20.494346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.952 [2024-07-15 22:43:20.494376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:56.952 qpair failed and we were unable to recover it. 
00:26:56.952 [2024-07-15 22:43:20.494546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.952 [2024-07-15 22:43:20.494576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:56.952 qpair failed and we were unable to recover it. 00:26:56.952 [2024-07-15 22:43:20.494862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.952 [2024-07-15 22:43:20.494891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:56.952 qpair failed and we were unable to recover it. 00:26:56.952 [2024-07-15 22:43:20.495136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.952 [2024-07-15 22:43:20.495166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:56.952 qpair failed and we were unable to recover it. 00:26:56.952 [2024-07-15 22:43:20.495376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.952 [2024-07-15 22:43:20.495389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:56.952 qpair failed and we were unable to recover it. 00:26:56.952 [2024-07-15 22:43:20.495645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.952 [2024-07-15 22:43:20.495659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:56.952 qpair failed and we were unable to recover it. 00:26:56.952 [2024-07-15 22:43:20.495788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.952 [2024-07-15 22:43:20.495801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:56.952 qpair failed and we were unable to recover it. 00:26:56.952 [2024-07-15 22:43:20.495982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.952 [2024-07-15 22:43:20.495995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:56.952 qpair failed and we were unable to recover it. 00:26:56.952 [2024-07-15 22:43:20.496261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.952 [2024-07-15 22:43:20.496292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:56.952 qpair failed and we were unable to recover it. 00:26:56.952 [2024-07-15 22:43:20.496519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.952 [2024-07-15 22:43:20.496532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:56.952 qpair failed and we were unable to recover it. 00:26:56.952 [2024-07-15 22:43:20.496736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.952 [2024-07-15 22:43:20.496749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:56.952 qpair failed and we were unable to recover it. 
00:26:56.952 [2024-07-15 22:43:20.496929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.952 [2024-07-15 22:43:20.496943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:56.952 qpair failed and we were unable to recover it. 00:26:56.952 [2024-07-15 22:43:20.497246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.952 [2024-07-15 22:43:20.497276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:56.952 qpair failed and we were unable to recover it. 00:26:56.952 [2024-07-15 22:43:20.497466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.952 [2024-07-15 22:43:20.497507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:56.952 qpair failed and we were unable to recover it. 00:26:56.952 [2024-07-15 22:43:20.497744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.952 [2024-07-15 22:43:20.497773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:56.952 qpair failed and we were unable to recover it. 00:26:56.952 [2024-07-15 22:43:20.497950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.952 [2024-07-15 22:43:20.497979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:56.952 qpair failed and we were unable to recover it. 00:26:56.952 [2024-07-15 22:43:20.498220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.952 [2024-07-15 22:43:20.498256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:56.952 qpair failed and we were unable to recover it. 00:26:56.952 [2024-07-15 22:43:20.498512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.952 [2024-07-15 22:43:20.498526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:56.952 qpair failed and we were unable to recover it. 00:26:56.952 [2024-07-15 22:43:20.498729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.952 [2024-07-15 22:43:20.498742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:56.952 qpair failed and we were unable to recover it. 00:26:56.952 [2024-07-15 22:43:20.498933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.952 [2024-07-15 22:43:20.498946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:56.952 qpair failed and we were unable to recover it. 00:26:56.952 [2024-07-15 22:43:20.499218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.952 [2024-07-15 22:43:20.499236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:56.952 qpair failed and we were unable to recover it. 
00:26:56.952 [2024-07-15 22:43:20.499381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.952 [2024-07-15 22:43:20.499394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:56.952 qpair failed and we were unable to recover it. 00:26:56.952 [2024-07-15 22:43:20.499576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.952 [2024-07-15 22:43:20.499589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:56.952 qpair failed and we were unable to recover it. 00:26:56.952 [2024-07-15 22:43:20.499717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.952 [2024-07-15 22:43:20.499730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:56.952 qpair failed and we were unable to recover it. 00:26:56.952 [2024-07-15 22:43:20.499979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.952 [2024-07-15 22:43:20.499992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:56.952 qpair failed and we were unable to recover it. 00:26:56.952 [2024-07-15 22:43:20.500188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.952 [2024-07-15 22:43:20.500201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:56.952 qpair failed and we were unable to recover it. 00:26:56.952 [2024-07-15 22:43:20.500419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.952 [2024-07-15 22:43:20.500433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:56.952 qpair failed and we were unable to recover it. 00:26:56.952 [2024-07-15 22:43:20.500639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.952 [2024-07-15 22:43:20.500653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:56.952 qpair failed and we were unable to recover it. 00:26:56.952 [2024-07-15 22:43:20.500760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.952 [2024-07-15 22:43:20.500773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:56.952 qpair failed and we were unable to recover it. 00:26:56.952 [2024-07-15 22:43:20.501023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.952 [2024-07-15 22:43:20.501036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:56.952 qpair failed and we were unable to recover it. 00:26:56.952 [2024-07-15 22:43:20.501289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.952 [2024-07-15 22:43:20.501303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:56.952 qpair failed and we were unable to recover it. 
00:26:56.952 [2024-07-15 22:43:20.501433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.952 [2024-07-15 22:43:20.501446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:56.952 qpair failed and we were unable to recover it. 00:26:56.952 [2024-07-15 22:43:20.501712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.952 [2024-07-15 22:43:20.501726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:56.952 qpair failed and we were unable to recover it. 00:26:56.952 [2024-07-15 22:43:20.501945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.952 [2024-07-15 22:43:20.501958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:56.952 qpair failed and we were unable to recover it. 00:26:56.952 [2024-07-15 22:43:20.502143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.952 [2024-07-15 22:43:20.502156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:56.952 qpair failed and we were unable to recover it. 00:26:56.952 [2024-07-15 22:43:20.502342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.952 [2024-07-15 22:43:20.502356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:56.952 qpair failed and we were unable to recover it. 00:26:56.952 [2024-07-15 22:43:20.502645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.952 [2024-07-15 22:43:20.502674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:56.952 qpair failed and we were unable to recover it. 00:26:56.953 [2024-07-15 22:43:20.502958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.953 [2024-07-15 22:43:20.502987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:56.953 qpair failed and we were unable to recover it. 00:26:56.953 [2024-07-15 22:43:20.503171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.953 [2024-07-15 22:43:20.503199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:56.953 qpair failed and we were unable to recover it. 00:26:56.953 [2024-07-15 22:43:20.503441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.953 [2024-07-15 22:43:20.503472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:56.953 qpair failed and we were unable to recover it. 00:26:56.953 [2024-07-15 22:43:20.503705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.953 [2024-07-15 22:43:20.503735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:56.953 qpair failed and we were unable to recover it. 
00:26:56.953 [2024-07-15 22:43:20.504037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.953 [2024-07-15 22:43:20.504050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:56.953 qpair failed and we were unable to recover it. 00:26:56.953 [2024-07-15 22:43:20.504237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.953 [2024-07-15 22:43:20.504251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:56.953 qpair failed and we were unable to recover it. 00:26:56.953 [2024-07-15 22:43:20.504401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.953 [2024-07-15 22:43:20.504415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:56.953 qpair failed and we were unable to recover it. 00:26:56.953 [2024-07-15 22:43:20.504633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.953 [2024-07-15 22:43:20.504662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:56.953 qpair failed and we were unable to recover it. 00:26:56.953 [2024-07-15 22:43:20.504821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.953 [2024-07-15 22:43:20.504850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:56.953 qpair failed and we were unable to recover it. 00:26:56.953 [2024-07-15 22:43:20.505136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.953 [2024-07-15 22:43:20.505165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:56.953 qpair failed and we were unable to recover it. 00:26:56.953 [2024-07-15 22:43:20.505329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.953 [2024-07-15 22:43:20.505360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:56.953 qpair failed and we were unable to recover it. 00:26:56.953 [2024-07-15 22:43:20.505525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.953 [2024-07-15 22:43:20.505554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:56.953 qpair failed and we were unable to recover it. 00:26:56.953 [2024-07-15 22:43:20.505725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.953 [2024-07-15 22:43:20.505754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:56.953 qpair failed and we were unable to recover it. 00:26:56.953 [2024-07-15 22:43:20.506038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.953 [2024-07-15 22:43:20.506068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:56.953 qpair failed and we were unable to recover it. 
[... identical connect() failures (errno = 111) continue against tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 from 22:43:20.526 through 22:43:20.541; duplicates omitted ...]
[... identical connect() failures (errno = 111) continue against tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 from 22:43:20.541 through 22:43:20.550; duplicates omitted ...]
00:26:56.957 [... triplet repeats for tqpair=0x7f4290000b90 through 22:43:20.558447 ...]
00:26:56.958 [2024-07-15 22:43:20.558927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.958 [2024-07-15 22:43:20.558961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420
00:26:56.958 qpair failed and we were unable to recover it.
00:26:56.958 [2024-07-15 22:43:20.559168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.958 [2024-07-15 22:43:20.559180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:56.958 qpair failed and we were unable to recover it.
00:26:56.958 [... triplet repeats for tqpair=0x7f4288000b90 through 22:43:20.566680 ...]
00:26:56.959 [2024-07-15 22:43:20.566812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.959 [2024-07-15 22:43:20.566858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420
00:26:56.959 qpair failed and we were unable to recover it.
00:26:56.959 [... triplet repeats for tqpair=0x19bded0 through 22:43:20.589748 ...]
00:26:56.961 [2024-07-15 22:43:20.589896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.961 [2024-07-15 22:43:20.589930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420
00:26:56.961 qpair failed and we were unable to recover it.
00:26:56.961 [2024-07-15 22:43:20.590080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.961 [2024-07-15 22:43:20.590106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:56.961 qpair failed and we were unable to recover it.
00:26:56.962 [2024-07-15 22:43:20.605424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.962 [2024-07-15 22:43:20.605458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:56.962 qpair failed and we were unable to recover it.
00:26:56.963 [2024-07-15 22:43:20.606749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.963 [2024-07-15 22:43:20.606779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 00:26:56.963 qpair failed and we were unable to recover it. 00:26:56.963 [2024-07-15 22:43:20.606993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.963 [2024-07-15 22:43:20.607023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 00:26:56.963 qpair failed and we were unable to recover it. 00:26:56.963 [2024-07-15 22:43:20.607234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.963 [2024-07-15 22:43:20.607249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 00:26:56.963 qpair failed and we were unable to recover it. 00:26:56.963 [2024-07-15 22:43:20.607435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.963 [2024-07-15 22:43:20.607449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 00:26:56.963 qpair failed and we were unable to recover it. 00:26:56.963 [2024-07-15 22:43:20.607566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.963 [2024-07-15 22:43:20.607580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 00:26:56.963 qpair failed and we were unable to recover it. 00:26:56.963 [2024-07-15 22:43:20.607764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.963 [2024-07-15 22:43:20.607778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 00:26:56.963 qpair failed and we were unable to recover it. 00:26:56.963 [2024-07-15 22:43:20.608008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.963 [2024-07-15 22:43:20.608021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 00:26:56.963 qpair failed and we were unable to recover it. 00:26:56.963 [2024-07-15 22:43:20.608219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.963 [2024-07-15 22:43:20.608242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 00:26:56.963 qpair failed and we were unable to recover it. 00:26:56.963 [2024-07-15 22:43:20.608521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.963 [2024-07-15 22:43:20.608550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 00:26:56.963 qpair failed and we were unable to recover it. 00:26:56.963 [2024-07-15 22:43:20.608857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.963 [2024-07-15 22:43:20.608887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 00:26:56.963 qpair failed and we were unable to recover it. 
00:26:56.963 [2024-07-15 22:43:20.609047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.963 [2024-07-15 22:43:20.609060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 00:26:56.963 qpair failed and we were unable to recover it. 00:26:56.963 [2024-07-15 22:43:20.609234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.963 [2024-07-15 22:43:20.609248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 00:26:56.963 qpair failed and we were unable to recover it. 00:26:56.963 [2024-07-15 22:43:20.609444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.963 [2024-07-15 22:43:20.609458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 00:26:56.963 qpair failed and we were unable to recover it. 00:26:56.963 [2024-07-15 22:43:20.609696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.963 [2024-07-15 22:43:20.609710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 00:26:56.963 qpair failed and we were unable to recover it. 00:26:56.963 [2024-07-15 22:43:20.609884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.963 [2024-07-15 22:43:20.609897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 00:26:56.963 qpair failed and we were unable to recover it. 00:26:56.963 [2024-07-15 22:43:20.610150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.963 [2024-07-15 22:43:20.610180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 00:26:56.963 qpair failed and we were unable to recover it. 00:26:56.963 [2024-07-15 22:43:20.610346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.963 [2024-07-15 22:43:20.610377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 00:26:56.963 qpair failed and we were unable to recover it. 00:26:56.963 [2024-07-15 22:43:20.610604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.963 [2024-07-15 22:43:20.610634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 00:26:56.963 qpair failed and we were unable to recover it. 00:26:56.963 [2024-07-15 22:43:20.610857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.963 [2024-07-15 22:43:20.610871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 00:26:56.963 qpair failed and we were unable to recover it. 00:26:56.963 [2024-07-15 22:43:20.611052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.963 [2024-07-15 22:43:20.611066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 00:26:56.963 qpair failed and we were unable to recover it. 
00:26:56.963 [2024-07-15 22:43:20.611198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.963 [2024-07-15 22:43:20.611212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 00:26:56.963 qpair failed and we were unable to recover it. 00:26:56.963 [2024-07-15 22:43:20.611357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.963 [2024-07-15 22:43:20.611371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 00:26:56.963 qpair failed and we were unable to recover it. 00:26:56.963 [2024-07-15 22:43:20.611513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.963 [2024-07-15 22:43:20.611526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 00:26:56.963 qpair failed and we were unable to recover it. 00:26:56.963 [2024-07-15 22:43:20.611773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.963 [2024-07-15 22:43:20.611787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 00:26:56.963 qpair failed and we were unable to recover it. 00:26:56.963 [2024-07-15 22:43:20.611967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.963 [2024-07-15 22:43:20.611980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 00:26:56.963 qpair failed and we were unable to recover it. 00:26:56.963 [2024-07-15 22:43:20.612094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.963 [2024-07-15 22:43:20.612108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 00:26:56.963 qpair failed and we were unable to recover it. 00:26:56.963 [2024-07-15 22:43:20.612292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.963 [2024-07-15 22:43:20.612306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 00:26:56.963 qpair failed and we were unable to recover it. 00:26:56.963 [2024-07-15 22:43:20.612514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.963 [2024-07-15 22:43:20.612527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 00:26:56.963 qpair failed and we were unable to recover it. 00:26:56.963 [2024-07-15 22:43:20.612655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.963 [2024-07-15 22:43:20.612668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 00:26:56.963 qpair failed and we were unable to recover it. 00:26:56.963 [2024-07-15 22:43:20.612855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.963 [2024-07-15 22:43:20.612869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 00:26:56.963 qpair failed and we were unable to recover it. 
00:26:56.963 [2024-07-15 22:43:20.612994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.963 [2024-07-15 22:43:20.613007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 00:26:56.963 qpair failed and we were unable to recover it. 00:26:56.963 [2024-07-15 22:43:20.613255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.963 [2024-07-15 22:43:20.613269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 00:26:56.963 qpair failed and we were unable to recover it. 00:26:56.963 [2024-07-15 22:43:20.613527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.963 [2024-07-15 22:43:20.613541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 00:26:56.963 qpair failed and we were unable to recover it. 00:26:56.963 [2024-07-15 22:43:20.613736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.963 [2024-07-15 22:43:20.613750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 00:26:56.963 qpair failed and we were unable to recover it. 00:26:56.963 [2024-07-15 22:43:20.613888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.963 [2024-07-15 22:43:20.613902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 00:26:56.963 qpair failed and we were unable to recover it. 00:26:56.963 [2024-07-15 22:43:20.614015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.963 [2024-07-15 22:43:20.614028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 00:26:56.963 qpair failed and we were unable to recover it. 00:26:56.963 [2024-07-15 22:43:20.614233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.963 [2024-07-15 22:43:20.614247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 00:26:56.963 qpair failed and we were unable to recover it. 00:26:56.963 [2024-07-15 22:43:20.614460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.963 [2024-07-15 22:43:20.614473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 00:26:56.963 qpair failed and we were unable to recover it. 00:26:56.963 [2024-07-15 22:43:20.614726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.963 [2024-07-15 22:43:20.614739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 00:26:56.963 qpair failed and we were unable to recover it. 00:26:56.963 [2024-07-15 22:43:20.614957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.963 [2024-07-15 22:43:20.614971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 00:26:56.963 qpair failed and we were unable to recover it. 
00:26:56.963 [2024-07-15 22:43:20.615122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.963 [2024-07-15 22:43:20.615136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 00:26:56.963 qpair failed and we were unable to recover it. 00:26:56.964 [2024-07-15 22:43:20.615363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.964 [2024-07-15 22:43:20.615393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 00:26:56.964 qpair failed and we were unable to recover it. 00:26:56.964 [2024-07-15 22:43:20.615661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.964 [2024-07-15 22:43:20.615690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 00:26:56.964 qpair failed and we were unable to recover it. 00:26:56.964 [2024-07-15 22:43:20.615852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.964 [2024-07-15 22:43:20.615866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 00:26:56.964 qpair failed and we were unable to recover it. 00:26:56.964 [2024-07-15 22:43:20.616002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.964 [2024-07-15 22:43:20.616015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 00:26:56.964 qpair failed and we were unable to recover it. 00:26:56.964 [2024-07-15 22:43:20.616313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.964 [2024-07-15 22:43:20.616327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 00:26:56.964 qpair failed and we were unable to recover it. 00:26:56.964 [2024-07-15 22:43:20.616549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.964 [2024-07-15 22:43:20.616562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 00:26:56.964 qpair failed and we were unable to recover it. 00:26:56.964 [2024-07-15 22:43:20.616766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.964 [2024-07-15 22:43:20.616783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 00:26:56.964 qpair failed and we were unable to recover it. 00:26:56.964 [2024-07-15 22:43:20.616913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.964 [2024-07-15 22:43:20.616927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 00:26:56.964 qpair failed and we were unable to recover it. 00:26:56.964 [2024-07-15 22:43:20.617217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.964 [2024-07-15 22:43:20.617241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 00:26:56.964 qpair failed and we were unable to recover it. 
00:26:56.964 [2024-07-15 22:43:20.617490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.964 [2024-07-15 22:43:20.617504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 00:26:56.964 qpair failed and we were unable to recover it. 00:26:56.964 [2024-07-15 22:43:20.617719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.964 [2024-07-15 22:43:20.617733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 00:26:56.964 qpair failed and we were unable to recover it. 00:26:56.964 [2024-07-15 22:43:20.617964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.964 [2024-07-15 22:43:20.617993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 00:26:56.964 qpair failed and we were unable to recover it. 00:26:56.964 [2024-07-15 22:43:20.618160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.964 [2024-07-15 22:43:20.618190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 00:26:56.964 qpair failed and we were unable to recover it. 00:26:56.964 [2024-07-15 22:43:20.618484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.964 [2024-07-15 22:43:20.618515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 00:26:56.964 qpair failed and we were unable to recover it. 00:26:56.964 [2024-07-15 22:43:20.618742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.964 [2024-07-15 22:43:20.618772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 00:26:56.964 qpair failed and we were unable to recover it. 00:26:56.964 [2024-07-15 22:43:20.619016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.964 [2024-07-15 22:43:20.619045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 00:26:56.964 qpair failed and we were unable to recover it. 00:26:56.964 [2024-07-15 22:43:20.619369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.964 [2024-07-15 22:43:20.619383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 00:26:56.964 qpair failed and we were unable to recover it. 00:26:56.964 [2024-07-15 22:43:20.619599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.964 [2024-07-15 22:43:20.619612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 00:26:56.964 qpair failed and we were unable to recover it. 00:26:56.964 [2024-07-15 22:43:20.619712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.964 [2024-07-15 22:43:20.619725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 00:26:56.964 qpair failed and we were unable to recover it. 
00:26:56.964 [2024-07-15 22:43:20.619925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.964 [2024-07-15 22:43:20.619938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 00:26:56.964 qpair failed and we were unable to recover it. 00:26:56.964 [2024-07-15 22:43:20.620167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.964 [2024-07-15 22:43:20.620180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 00:26:56.964 qpair failed and we were unable to recover it. 00:26:56.964 [2024-07-15 22:43:20.620311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.964 [2024-07-15 22:43:20.620326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 00:26:56.964 qpair failed and we were unable to recover it. 00:26:56.964 [2024-07-15 22:43:20.620590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.964 [2024-07-15 22:43:20.620604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 00:26:56.964 qpair failed and we were unable to recover it. 00:26:56.964 [2024-07-15 22:43:20.620732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.964 [2024-07-15 22:43:20.620745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 00:26:56.964 qpair failed and we were unable to recover it. 00:26:56.964 [2024-07-15 22:43:20.620873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.964 [2024-07-15 22:43:20.620887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 00:26:56.964 qpair failed and we were unable to recover it. 00:26:56.964 [2024-07-15 22:43:20.621087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.964 [2024-07-15 22:43:20.621100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 00:26:56.964 qpair failed and we were unable to recover it. 00:26:56.964 [2024-07-15 22:43:20.621301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.964 [2024-07-15 22:43:20.621315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 00:26:56.964 qpair failed and we were unable to recover it. 00:26:56.964 [2024-07-15 22:43:20.621517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.964 [2024-07-15 22:43:20.621530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 00:26:56.964 qpair failed and we were unable to recover it. 00:26:56.964 [2024-07-15 22:43:20.621664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.964 [2024-07-15 22:43:20.621678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 00:26:56.964 qpair failed and we were unable to recover it. 
00:26:56.964 [2024-07-15 22:43:20.621852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.964 [2024-07-15 22:43:20.621865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:56.964 qpair failed and we were unable to recover it.
00:26:56.964 [2024-07-15 22:43:20.622188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.964 [2024-07-15 22:43:20.622217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:56.964 qpair failed and we were unable to recover it.
00:26:56.964 [2024-07-15 22:43:20.622444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.964 [2024-07-15 22:43:20.622474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:56.964 qpair failed and we were unable to recover it.
00:26:56.964 [2024-07-15 22:43:20.622705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.965 [2024-07-15 22:43:20.622735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:56.965 qpair failed and we were unable to recover it.
00:26:56.965 [2024-07-15 22:43:20.622974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.965 [2024-07-15 22:43:20.623001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:56.965 qpair failed and we were unable to recover it.
00:26:56.965 [2024-07-15 22:43:20.623294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.965 [2024-07-15 22:43:20.623328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420
00:26:56.965 qpair failed and we were unable to recover it.
00:26:56.965 [2024-07-15 22:43:20.623470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.965 [2024-07-15 22:43:20.623485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420
00:26:56.965 qpair failed and we were unable to recover it.
00:26:56.965 [2024-07-15 22:43:20.623672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.965 [2024-07-15 22:43:20.623702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420
00:26:56.965 qpair failed and we were unable to recover it.
00:26:56.965 [2024-07-15 22:43:20.623917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.965 [2024-07-15 22:43:20.623947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420
00:26:56.965 qpair failed and we were unable to recover it.
00:26:56.965 [2024-07-15 22:43:20.624097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.965 [2024-07-15 22:43:20.624111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420
00:26:56.965 qpair failed and we were unable to recover it.
00:26:56.965 [2024-07-15 22:43:20.624294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.965 [2024-07-15 22:43:20.624308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420
00:26:56.965 qpair failed and we were unable to recover it.
00:26:56.965 [2024-07-15 22:43:20.624429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.965 [2024-07-15 22:43:20.624443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420
00:26:56.965 qpair failed and we were unable to recover it.
00:26:56.965 [2024-07-15 22:43:20.624572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.965 [2024-07-15 22:43:20.624586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420
00:26:56.965 qpair failed and we were unable to recover it.
00:26:56.965 [2024-07-15 22:43:20.624717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.965 [2024-07-15 22:43:20.624730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420
00:26:56.965 qpair failed and we were unable to recover it.
00:26:56.965 [2024-07-15 22:43:20.624957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.965 [2024-07-15 22:43:20.624986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420
00:26:56.965 qpair failed and we were unable to recover it.
00:26:56.965 [2024-07-15 22:43:20.625218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.965 [2024-07-15 22:43:20.625272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420
00:26:56.965 qpair failed and we were unable to recover it.
00:26:56.965 [2024-07-15 22:43:20.625559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.965 [2024-07-15 22:43:20.625590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420
00:26:56.965 qpair failed and we were unable to recover it.
00:26:56.965 [2024-07-15 22:43:20.625823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.965 [2024-07-15 22:43:20.625866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420
00:26:56.965 qpair failed and we were unable to recover it.
00:26:56.965 [2024-07-15 22:43:20.626084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.965 [2024-07-15 22:43:20.626125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420
00:26:56.965 qpair failed and we were unable to recover it.
00:26:56.965 [2024-07-15 22:43:20.626324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.965 [2024-07-15 22:43:20.626339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420
00:26:56.965 qpair failed and we were unable to recover it.
00:26:56.965 [2024-07-15 22:43:20.626531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.965 [2024-07-15 22:43:20.626544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420
00:26:56.965 qpair failed and we were unable to recover it.
00:26:56.965 [2024-07-15 22:43:20.626729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.965 [2024-07-15 22:43:20.626743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420
00:26:56.965 qpair failed and we were unable to recover it.
00:26:56.965 [2024-07-15 22:43:20.626993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.965 [2024-07-15 22:43:20.627007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420
00:26:56.965 qpair failed and we were unable to recover it.
00:26:56.965 [2024-07-15 22:43:20.627251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.965 [2024-07-15 22:43:20.627265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420
00:26:56.965 qpair failed and we were unable to recover it.
00:26:56.965 [2024-07-15 22:43:20.627455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.965 [2024-07-15 22:43:20.627469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420
00:26:56.965 qpair failed and we were unable to recover it.
00:26:56.965 [2024-07-15 22:43:20.627716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.965 [2024-07-15 22:43:20.627730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420
00:26:56.965 qpair failed and we were unable to recover it.
00:26:56.965 [2024-07-15 22:43:20.627848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.965 [2024-07-15 22:43:20.627861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420
00:26:56.965 qpair failed and we were unable to recover it.
00:26:56.965 [2024-07-15 22:43:20.628052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.965 [2024-07-15 22:43:20.628066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420
00:26:56.965 qpair failed and we were unable to recover it.
00:26:56.965 [2024-07-15 22:43:20.628212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.965 [2024-07-15 22:43:20.628230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420
00:26:56.965 qpair failed and we were unable to recover it.
00:26:56.965 [2024-07-15 22:43:20.628497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.965 [2024-07-15 22:43:20.628526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420
00:26:56.965 qpair failed and we were unable to recover it.
00:26:56.965 [2024-07-15 22:43:20.628697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.965 [2024-07-15 22:43:20.628727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420
00:26:56.965 qpair failed and we were unable to recover it.
00:26:56.965 [2024-07-15 22:43:20.628898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.965 [2024-07-15 22:43:20.628928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420
00:26:56.965 qpair failed and we were unable to recover it.
00:26:56.965 [2024-07-15 22:43:20.629136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.965 [2024-07-15 22:43:20.629149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420
00:26:56.965 qpair failed and we were unable to recover it.
00:26:56.965 [2024-07-15 22:43:20.629401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.965 [2024-07-15 22:43:20.629415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420
00:26:56.965 qpair failed and we were unable to recover it.
00:26:56.965 [2024-07-15 22:43:20.629601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.965 [2024-07-15 22:43:20.629614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420
00:26:56.965 qpair failed and we were unable to recover it.
00:26:56.965 [2024-07-15 22:43:20.629764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.965 [2024-07-15 22:43:20.629777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420
00:26:56.965 qpair failed and we were unable to recover it.
00:26:56.965 [2024-07-15 22:43:20.629946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.965 [2024-07-15 22:43:20.629960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420
00:26:56.965 qpair failed and we were unable to recover it.
00:26:56.965 [2024-07-15 22:43:20.630155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.965 [2024-07-15 22:43:20.630169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420
00:26:56.965 qpair failed and we were unable to recover it.
00:26:56.965 [2024-07-15 22:43:20.630375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.965 [2024-07-15 22:43:20.630389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420
00:26:56.965 qpair failed and we were unable to recover it.
00:26:56.965 [2024-07-15 22:43:20.630481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.965 [2024-07-15 22:43:20.630494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420
00:26:56.965 qpair failed and we were unable to recover it.
00:26:56.965 [2024-07-15 22:43:20.630633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.965 [2024-07-15 22:43:20.630647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420
00:26:56.965 qpair failed and we were unable to recover it.
00:26:56.965 [2024-07-15 22:43:20.630846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.965 [2024-07-15 22:43:20.630860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420
00:26:56.965 qpair failed and we were unable to recover it.
00:26:56.965 [2024-07-15 22:43:20.630949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.965 [2024-07-15 22:43:20.630962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420
00:26:56.965 qpair failed and we were unable to recover it.
00:26:56.965 [2024-07-15 22:43:20.631140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.965 [2024-07-15 22:43:20.631153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420
00:26:56.965 qpair failed and we were unable to recover it.
00:26:56.966 [2024-07-15 22:43:20.631303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.966 [2024-07-15 22:43:20.631317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:56.966 qpair failed and we were unable to recover it.
00:26:56.966 [2024-07-15 22:43:20.631529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.966 [2024-07-15 22:43:20.631540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:56.966 qpair failed and we were unable to recover it.
00:26:56.966 [2024-07-15 22:43:20.631647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.966 [2024-07-15 22:43:20.631656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:56.966 qpair failed and we were unable to recover it.
00:26:56.966 [2024-07-15 22:43:20.631834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.966 [2024-07-15 22:43:20.631845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:56.966 qpair failed and we were unable to recover it.
00:26:56.966 [2024-07-15 22:43:20.632064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.966 [2024-07-15 22:43:20.632094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:56.966 qpair failed and we were unable to recover it.
00:26:56.966 [2024-07-15 22:43:20.632246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.966 [2024-07-15 22:43:20.632277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:56.966 qpair failed and we were unable to recover it.
00:26:56.966 [2024-07-15 22:43:20.632505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.966 [2024-07-15 22:43:20.632535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:56.966 qpair failed and we were unable to recover it.
00:26:56.966 [2024-07-15 22:43:20.632752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.966 [2024-07-15 22:43:20.632762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:56.966 qpair failed and we were unable to recover it.
00:26:56.966 [2024-07-15 22:43:20.632881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.966 [2024-07-15 22:43:20.632890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:56.966 qpair failed and we were unable to recover it.
00:26:56.966 [2024-07-15 22:43:20.633067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.966 [2024-07-15 22:43:20.633078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:56.966 qpair failed and we were unable to recover it.
00:26:56.966 [2024-07-15 22:43:20.633256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.966 [2024-07-15 22:43:20.633267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:56.966 qpair failed and we were unable to recover it.
00:26:56.966 [2024-07-15 22:43:20.633445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.966 [2024-07-15 22:43:20.633455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:56.966 qpair failed and we were unable to recover it.
00:26:56.966 [2024-07-15 22:43:20.633635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.966 [2024-07-15 22:43:20.633645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:56.966 qpair failed and we were unable to recover it.
00:26:56.966 [2024-07-15 22:43:20.633890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.966 [2024-07-15 22:43:20.633900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:56.966 qpair failed and we were unable to recover it.
00:26:56.966 [2024-07-15 22:43:20.634098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.966 [2024-07-15 22:43:20.634108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:56.966 qpair failed and we were unable to recover it.
00:26:56.966 [2024-07-15 22:43:20.634295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.966 [2024-07-15 22:43:20.634305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:56.966 qpair failed and we were unable to recover it.
00:26:56.966 [2024-07-15 22:43:20.634491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.966 [2024-07-15 22:43:20.634500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:56.966 qpair failed and we were unable to recover it.
00:26:56.966 [2024-07-15 22:43:20.634742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.966 [2024-07-15 22:43:20.634753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:56.966 qpair failed and we were unable to recover it.
00:26:56.966 [2024-07-15 22:43:20.634946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.966 [2024-07-15 22:43:20.634956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:56.966 qpair failed and we were unable to recover it.
00:26:56.966 [2024-07-15 22:43:20.635126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.966 [2024-07-15 22:43:20.635136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:56.966 qpair failed and we were unable to recover it.
00:26:56.966 [2024-07-15 22:43:20.635309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.966 [2024-07-15 22:43:20.635319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:56.966 qpair failed and we were unable to recover it.
00:26:56.966 [2024-07-15 22:43:20.635494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.966 [2024-07-15 22:43:20.635505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:56.966 qpair failed and we were unable to recover it.
00:26:56.966 [2024-07-15 22:43:20.635745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.966 [2024-07-15 22:43:20.635756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:56.966 qpair failed and we were unable to recover it.
00:26:56.966 [2024-07-15 22:43:20.635885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.966 [2024-07-15 22:43:20.635895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:56.966 qpair failed and we were unable to recover it.
00:26:56.966 [2024-07-15 22:43:20.636161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.966 [2024-07-15 22:43:20.636171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:56.966 qpair failed and we were unable to recover it.
00:26:56.966 [2024-07-15 22:43:20.636276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.966 [2024-07-15 22:43:20.636287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:56.966 qpair failed and we were unable to recover it.
00:26:56.966 [2024-07-15 22:43:20.636558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.966 [2024-07-15 22:43:20.636568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:56.966 qpair failed and we were unable to recover it.
00:26:56.966 [2024-07-15 22:43:20.636814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.966 [2024-07-15 22:43:20.636824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:56.966 qpair failed and we were unable to recover it.
00:26:56.966 [2024-07-15 22:43:20.637072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.966 [2024-07-15 22:43:20.637082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:56.966 qpair failed and we were unable to recover it.
00:26:56.966 [2024-07-15 22:43:20.637212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.966 [2024-07-15 22:43:20.637222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:56.966 qpair failed and we were unable to recover it.
00:26:56.966 [2024-07-15 22:43:20.637406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.966 [2024-07-15 22:43:20.637417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:56.966 qpair failed and we were unable to recover it.
00:26:56.966 [2024-07-15 22:43:20.637531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.966 [2024-07-15 22:43:20.637540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:56.966 qpair failed and we were unable to recover it.
00:26:56.966 [2024-07-15 22:43:20.637653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.966 [2024-07-15 22:43:20.637663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:56.966 qpair failed and we were unable to recover it.
00:26:56.966 [2024-07-15 22:43:20.637871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.966 [2024-07-15 22:43:20.637880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:56.966 qpair failed and we were unable to recover it.
00:26:56.966 [2024-07-15 22:43:20.637991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.966 [2024-07-15 22:43:20.638001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:56.966 qpair failed and we were unable to recover it.
00:26:56.966 [2024-07-15 22:43:20.638122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.966 [2024-07-15 22:43:20.638131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:56.966 qpair failed and we were unable to recover it.
00:26:56.966 [2024-07-15 22:43:20.638315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.966 [2024-07-15 22:43:20.638326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:56.966 qpair failed and we were unable to recover it.
00:26:56.966 [2024-07-15 22:43:20.638518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.966 [2024-07-15 22:43:20.638528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:56.966 qpair failed and we were unable to recover it.
00:26:56.966 [2024-07-15 22:43:20.638634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.966 [2024-07-15 22:43:20.638643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:56.966 qpair failed and we were unable to recover it.
00:26:56.966 [2024-07-15 22:43:20.638750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.966 [2024-07-15 22:43:20.638760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:56.966 qpair failed and we were unable to recover it.
00:26:56.966 [2024-07-15 22:43:20.638954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.966 [2024-07-15 22:43:20.638965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:56.967 qpair failed and we were unable to recover it.
00:26:56.967 [2024-07-15 22:43:20.639077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.967 [2024-07-15 22:43:20.639087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:56.967 qpair failed and we were unable to recover it.
00:26:56.967 [2024-07-15 22:43:20.639319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.967 [2024-07-15 22:43:20.639329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:56.967 qpair failed and we were unable to recover it.
00:26:56.967 [2024-07-15 22:43:20.639575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.967 [2024-07-15 22:43:20.639585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:56.967 qpair failed and we were unable to recover it.
00:26:56.967 [2024-07-15 22:43:20.639769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.967 [2024-07-15 22:43:20.639779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:56.967 qpair failed and we were unable to recover it.
00:26:56.967 [2024-07-15 22:43:20.639914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.967 [2024-07-15 22:43:20.639923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:56.967 qpair failed and we were unable to recover it.
00:26:56.967 [2024-07-15 22:43:20.640116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.967 [2024-07-15 22:43:20.640126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:56.967 qpair failed and we were unable to recover it.
00:26:56.967 [2024-07-15 22:43:20.640315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.967 [2024-07-15 22:43:20.640325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:56.967 qpair failed and we were unable to recover it.
00:26:56.967 [2024-07-15 22:43:20.640568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.967 [2024-07-15 22:43:20.640578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:56.967 qpair failed and we were unable to recover it.
00:26:56.967 [2024-07-15 22:43:20.640694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.967 [2024-07-15 22:43:20.640704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:56.967 qpair failed and we were unable to recover it.
00:26:56.967 [2024-07-15 22:43:20.640875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.967 [2024-07-15 22:43:20.640886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:56.967 qpair failed and we were unable to recover it.
00:26:56.967 [2024-07-15 22:43:20.641014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.967 [2024-07-15 22:43:20.641024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:56.967 qpair failed and we were unable to recover it.
00:26:56.967 [2024-07-15 22:43:20.641208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.967 [2024-07-15 22:43:20.641218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:56.967 qpair failed and we were unable to recover it.
00:26:56.967 [2024-07-15 22:43:20.641332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.967 [2024-07-15 22:43:20.641343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:56.967 qpair failed and we were unable to recover it.
00:26:56.967 [2024-07-15 22:43:20.641605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.967 [2024-07-15 22:43:20.641615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:56.967 qpair failed and we were unable to recover it.
00:26:56.967 [2024-07-15 22:43:20.641809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.967 [2024-07-15 22:43:20.641819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:56.967 qpair failed and we were unable to recover it.
00:26:56.967 [2024-07-15 22:43:20.641943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.967 [2024-07-15 22:43:20.641952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:56.967 qpair failed and we were unable to recover it.
00:26:56.967 [2024-07-15 22:43:20.642136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.967 [2024-07-15 22:43:20.642147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:56.967 qpair failed and we were unable to recover it.
00:26:56.967 [2024-07-15 22:43:20.642403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.967 [2024-07-15 22:43:20.642413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:56.967 qpair failed and we were unable to recover it.
00:26:56.967 [2024-07-15 22:43:20.642539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.967 [2024-07-15 22:43:20.642549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:56.967 qpair failed and we were unable to recover it.
00:26:56.967 [2024-07-15 22:43:20.642820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.967 [2024-07-15 22:43:20.642849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:56.967 qpair failed and we were unable to recover it.
00:26:56.967 [2024-07-15 22:43:20.643019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.967 [2024-07-15 22:43:20.643047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:56.967 qpair failed and we were unable to recover it.
00:26:56.967 [2024-07-15 22:43:20.643282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.967 [2024-07-15 22:43:20.643311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:56.967 qpair failed and we were unable to recover it.
00:26:56.967 [2024-07-15 22:43:20.643558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.967 [2024-07-15 22:43:20.643586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:56.967 qpair failed and we were unable to recover it.
00:26:56.967 [2024-07-15 22:43:20.643821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.967 [2024-07-15 22:43:20.643851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:56.967 qpair failed and we were unable to recover it.
00:26:56.967 [2024-07-15 22:43:20.644106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.967 [2024-07-15 22:43:20.644116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:56.967 qpair failed and we were unable to recover it.
00:26:56.967 [2024-07-15 22:43:20.644304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.967 [2024-07-15 22:43:20.644314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:56.967 qpair failed and we were unable to recover it.
00:26:56.967 [2024-07-15 22:43:20.644449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.967 [2024-07-15 22:43:20.644460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:56.967 qpair failed and we were unable to recover it.
00:26:56.967 [2024-07-15 22:43:20.644582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.967 [2024-07-15 22:43:20.644592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:56.967 qpair failed and we were unable to recover it.
00:26:56.967 [2024-07-15 22:43:20.644782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.967 [2024-07-15 22:43:20.644792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:56.967 qpair failed and we were unable to recover it.
00:26:56.967 [2024-07-15 22:43:20.645056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.967 [2024-07-15 22:43:20.645065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:56.967 qpair failed and we were unable to recover it.
00:26:56.967 [2024-07-15 22:43:20.645264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.967 [2024-07-15 22:43:20.645274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:56.967 qpair failed and we were unable to recover it.
00:26:56.967 [2024-07-15 22:43:20.645461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.967 [2024-07-15 22:43:20.645470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:56.967 qpair failed and we were unable to recover it.
00:26:56.967 [2024-07-15 22:43:20.645689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.967 [2024-07-15 22:43:20.645700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:56.967 qpair failed and we were unable to recover it.
00:26:56.967 [2024-07-15 22:43:20.645892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.967 [2024-07-15 22:43:20.645902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:56.967 qpair failed and we were unable to recover it.
00:26:56.967 [2024-07-15 22:43:20.646087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.967 [2024-07-15 22:43:20.646097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:56.967 qpair failed and we were unable to recover it.
00:26:56.967 [2024-07-15 22:43:20.646347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.967 [2024-07-15 22:43:20.646357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:56.967 qpair failed and we were unable to recover it.
00:26:56.967 [2024-07-15 22:43:20.646550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.967 [2024-07-15 22:43:20.646560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:56.967 qpair failed and we were unable to recover it.
00:26:56.967 [2024-07-15 22:43:20.646753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.967 [2024-07-15 22:43:20.646762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:56.968 qpair failed and we were unable to recover it.
00:26:56.968 [2024-07-15 22:43:20.647005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.968 [2024-07-15 22:43:20.647015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:56.968 qpair failed and we were unable to recover it.
00:26:56.968 [2024-07-15 22:43:20.647157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.968 [2024-07-15 22:43:20.647169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:56.968 qpair failed and we were unable to recover it.
00:26:56.968 [2024-07-15 22:43:20.647294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.968 [2024-07-15 22:43:20.647304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:56.968 qpair failed and we were unable to recover it.
00:26:56.968 [2024-07-15 22:43:20.647444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.968 [2024-07-15 22:43:20.647454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:56.968 qpair failed and we were unable to recover it.
00:26:56.968 [2024-07-15 22:43:20.647628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.968 [2024-07-15 22:43:20.647638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:56.968 qpair failed and we were unable to recover it.
00:26:56.968 [2024-07-15 22:43:20.647850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.968 [2024-07-15 22:43:20.647860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:56.968 qpair failed and we were unable to recover it.
00:26:56.968 [2024-07-15 22:43:20.647979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.968 [2024-07-15 22:43:20.647989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:56.968 qpair failed and we were unable to recover it.
00:26:56.968 [2024-07-15 22:43:20.648163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.968 [2024-07-15 22:43:20.648173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:56.968 qpair failed and we were unable to recover it.
00:26:56.968 [2024-07-15 22:43:20.648353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.968 [2024-07-15 22:43:20.648383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:56.968 qpair failed and we were unable to recover it.
00:26:56.968 [2024-07-15 22:43:20.648613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.968 [2024-07-15 22:43:20.648641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:56.968 qpair failed and we were unable to recover it.
00:26:56.968 [2024-07-15 22:43:20.648885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.968 [2024-07-15 22:43:20.648916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:56.968 qpair failed and we were unable to recover it.
00:26:56.968 [2024-07-15 22:43:20.649227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.968 [2024-07-15 22:43:20.649238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:56.968 qpair failed and we were unable to recover it.
00:26:56.968 [2024-07-15 22:43:20.649526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.968 [2024-07-15 22:43:20.649536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:56.968 qpair failed and we were unable to recover it.
00:26:56.968 [2024-07-15 22:43:20.649723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.968 [2024-07-15 22:43:20.649733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:56.968 qpair failed and we were unable to recover it.
00:26:56.968 [2024-07-15 22:43:20.649922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.968 [2024-07-15 22:43:20.649932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:56.968 qpair failed and we were unable to recover it.
00:26:56.968 [2024-07-15 22:43:20.650063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.968 [2024-07-15 22:43:20.650073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:56.968 qpair failed and we were unable to recover it.
00:26:56.968 [2024-07-15 22:43:20.650249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.968 [2024-07-15 22:43:20.650260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:56.968 qpair failed and we were unable to recover it.
00:26:56.968 [2024-07-15 22:43:20.650469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.968 [2024-07-15 22:43:20.650479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:56.968 qpair failed and we were unable to recover it.
00:26:56.968 [2024-07-15 22:43:20.650718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.968 [2024-07-15 22:43:20.650728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:56.968 qpair failed and we were unable to recover it.
00:26:56.968 [2024-07-15 22:43:20.650852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.968 [2024-07-15 22:43:20.650862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:56.968 qpair failed and we were unable to recover it.
00:26:56.968 [2024-07-15 22:43:20.650999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.968 [2024-07-15 22:43:20.651009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:56.968 qpair failed and we were unable to recover it.
00:26:56.968 [2024-07-15 22:43:20.651198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.968 [2024-07-15 22:43:20.651208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:56.968 qpair failed and we were unable to recover it.
00:26:56.968 [2024-07-15 22:43:20.651475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.968 [2024-07-15 22:43:20.651485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:56.968 qpair failed and we were unable to recover it.
00:26:56.968 [2024-07-15 22:43:20.651740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.968 [2024-07-15 22:43:20.651770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:56.968 qpair failed and we were unable to recover it.
00:26:56.968 [2024-07-15 22:43:20.652053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.968 [2024-07-15 22:43:20.652082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:56.968 qpair failed and we were unable to recover it.
00:26:56.968 [2024-07-15 22:43:20.652314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.968 [2024-07-15 22:43:20.652345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:56.968 qpair failed and we were unable to recover it.
00:26:56.968 [2024-07-15 22:43:20.652596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.968 [2024-07-15 22:43:20.652626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:56.968 qpair failed and we were unable to recover it.
00:26:56.968 [2024-07-15 22:43:20.652809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.968 [2024-07-15 22:43:20.652838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:56.968 qpair failed and we were unable to recover it.
00:26:56.968 [2024-07-15 22:43:20.653129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.968 [2024-07-15 22:43:20.653159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:56.968 qpair failed and we were unable to recover it.
00:26:56.968 [2024-07-15 22:43:20.653385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.968 [2024-07-15 22:43:20.653396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:56.968 qpair failed and we were unable to recover it.
00:26:56.968 [2024-07-15 22:43:20.653540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.968 [2024-07-15 22:43:20.653550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:56.968 qpair failed and we were unable to recover it.
00:26:56.968 [2024-07-15 22:43:20.653676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.968 [2024-07-15 22:43:20.653686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:56.968 qpair failed and we were unable to recover it.
00:26:56.968 [2024-07-15 22:43:20.653928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.968 [2024-07-15 22:43:20.653937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:56.968 qpair failed and we were unable to recover it.
00:26:56.968 [2024-07-15 22:43:20.654144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.968 [2024-07-15 22:43:20.654154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:56.968 qpair failed and we were unable to recover it.
00:26:56.968 [2024-07-15 22:43:20.654361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.968 [2024-07-15 22:43:20.654371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:56.969 qpair failed and we were unable to recover it.
00:26:56.969 [2024-07-15 22:43:20.654495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.969 [2024-07-15 22:43:20.654505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:56.969 qpair failed and we were unable to recover it.
00:26:56.969 [2024-07-15 22:43:20.654760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.969 [2024-07-15 22:43:20.654770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:56.969 qpair failed and we were unable to recover it.
00:26:56.969 [2024-07-15 22:43:20.654951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.969 [2024-07-15 22:43:20.654961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:56.969 qpair failed and we were unable to recover it.
00:26:56.969 [2024-07-15 22:43:20.655220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.969 [2024-07-15 22:43:20.655234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:56.969 qpair failed and we were unable to recover it.
00:26:56.969 [2024-07-15 22:43:20.655354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.969 [2024-07-15 22:43:20.655363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:56.969 qpair failed and we were unable to recover it.
00:26:56.969 [2024-07-15 22:43:20.655604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.969 [2024-07-15 22:43:20.655614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:56.969 qpair failed and we were unable to recover it.
00:26:56.969 [2024-07-15 22:43:20.655855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.969 [2024-07-15 22:43:20.655866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:56.969 qpair failed and we were unable to recover it.
00:26:56.969 [2024-07-15 22:43:20.656064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.969 [2024-07-15 22:43:20.656074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:56.969 qpair failed and we were unable to recover it.
00:26:56.969 [2024-07-15 22:43:20.656183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.969 [2024-07-15 22:43:20.656193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:56.969 qpair failed and we were unable to recover it.
00:26:56.969 [2024-07-15 22:43:20.656378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.969 [2024-07-15 22:43:20.656389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:56.969 qpair failed and we were unable to recover it.
00:26:56.969 [2024-07-15 22:43:20.656575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.969 [2024-07-15 22:43:20.656585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:56.969 qpair failed and we were unable to recover it.
00:26:56.969 [2024-07-15 22:43:20.656701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.969 [2024-07-15 22:43:20.656711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:56.969 qpair failed and we were unable to recover it.
00:26:56.969 [2024-07-15 22:43:20.656841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.969 [2024-07-15 22:43:20.656850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.969 qpair failed and we were unable to recover it. 00:26:56.969 [2024-07-15 22:43:20.657026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.969 [2024-07-15 22:43:20.657036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.969 qpair failed and we were unable to recover it. 00:26:56.969 [2024-07-15 22:43:20.657250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.969 [2024-07-15 22:43:20.657261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.969 qpair failed and we were unable to recover it. 00:26:56.969 [2024-07-15 22:43:20.657463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.969 [2024-07-15 22:43:20.657473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.969 qpair failed and we were unable to recover it. 00:26:56.969 [2024-07-15 22:43:20.657661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.969 [2024-07-15 22:43:20.657670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.969 qpair failed and we were unable to recover it. 00:26:56.969 [2024-07-15 22:43:20.657907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.969 [2024-07-15 22:43:20.657918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.969 qpair failed and we were unable to recover it. 00:26:56.969 [2024-07-15 22:43:20.658032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.969 [2024-07-15 22:43:20.658043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.969 qpair failed and we were unable to recover it. 00:26:56.969 [2024-07-15 22:43:20.658309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.969 [2024-07-15 22:43:20.658319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.969 qpair failed and we were unable to recover it. 00:26:56.969 [2024-07-15 22:43:20.658528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.969 [2024-07-15 22:43:20.658538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.969 qpair failed and we were unable to recover it. 00:26:56.969 [2024-07-15 22:43:20.658753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.969 [2024-07-15 22:43:20.658762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.969 qpair failed and we were unable to recover it. 
00:26:56.969 [2024-07-15 22:43:20.658892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.969 [2024-07-15 22:43:20.658902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.969 qpair failed and we were unable to recover it. 00:26:56.969 [2024-07-15 22:43:20.659131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.969 [2024-07-15 22:43:20.659141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.969 qpair failed and we were unable to recover it. 00:26:56.969 [2024-07-15 22:43:20.659284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.969 [2024-07-15 22:43:20.659295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.969 qpair failed and we were unable to recover it. 00:26:56.969 [2024-07-15 22:43:20.659503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.969 [2024-07-15 22:43:20.659512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.969 qpair failed and we were unable to recover it. 00:26:56.969 [2024-07-15 22:43:20.659684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.969 [2024-07-15 22:43:20.659693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.969 qpair failed and we were unable to recover it. 00:26:56.969 [2024-07-15 22:43:20.659861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.969 [2024-07-15 22:43:20.659891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.969 qpair failed and we were unable to recover it. 00:26:56.969 [2024-07-15 22:43:20.660166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.969 [2024-07-15 22:43:20.660195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.969 qpair failed and we were unable to recover it. 00:26:56.969 [2024-07-15 22:43:20.660407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.969 [2024-07-15 22:43:20.660443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 00:26:56.969 qpair failed and we were unable to recover it. 00:26:56.969 [2024-07-15 22:43:20.660619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.969 [2024-07-15 22:43:20.660649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 00:26:56.969 qpair failed and we were unable to recover it. 00:26:56.969 [2024-07-15 22:43:20.660908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.969 [2024-07-15 22:43:20.660937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 00:26:56.969 qpair failed and we were unable to recover it. 
00:26:56.969 [2024-07-15 22:43:20.661096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.969 [2024-07-15 22:43:20.661125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 00:26:56.969 qpair failed and we were unable to recover it. 00:26:56.969 [2024-07-15 22:43:20.661312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.969 [2024-07-15 22:43:20.661327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 00:26:56.969 qpair failed and we were unable to recover it. 00:26:56.969 [2024-07-15 22:43:20.661577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.969 [2024-07-15 22:43:20.661607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 00:26:56.969 qpair failed and we were unable to recover it. 00:26:56.969 [2024-07-15 22:43:20.661771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.969 [2024-07-15 22:43:20.661800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 00:26:56.969 qpair failed and we were unable to recover it. 00:26:56.969 [2024-07-15 22:43:20.661955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.969 [2024-07-15 22:43:20.661984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 00:26:56.969 qpair failed and we were unable to recover it. 00:26:56.969 [2024-07-15 22:43:20.662126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.969 [2024-07-15 22:43:20.662140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 00:26:56.969 qpair failed and we were unable to recover it. 00:26:56.969 [2024-07-15 22:43:20.662409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.969 [2024-07-15 22:43:20.662423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 00:26:56.969 qpair failed and we were unable to recover it. 00:26:56.969 [2024-07-15 22:43:20.662550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.969 [2024-07-15 22:43:20.662564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 00:26:56.969 qpair failed and we were unable to recover it. 00:26:56.969 [2024-07-15 22:43:20.662881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.970 [2024-07-15 22:43:20.662895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 00:26:56.970 qpair failed and we were unable to recover it. 00:26:56.970 [2024-07-15 22:43:20.663102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.970 [2024-07-15 22:43:20.663115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 00:26:56.970 qpair failed and we were unable to recover it. 
00:26:56.970 [2024-07-15 22:43:20.663302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.970 [2024-07-15 22:43:20.663317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 00:26:56.970 qpair failed and we were unable to recover it. 00:26:56.970 [2024-07-15 22:43:20.663533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.970 [2024-07-15 22:43:20.663563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 00:26:56.970 qpair failed and we were unable to recover it. 00:26:56.970 [2024-07-15 22:43:20.663777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.970 [2024-07-15 22:43:20.663807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 00:26:56.970 qpair failed and we were unable to recover it. 00:26:56.970 [2024-07-15 22:43:20.663961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.970 [2024-07-15 22:43:20.663991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 00:26:56.970 qpair failed and we were unable to recover it. 00:26:56.970 [2024-07-15 22:43:20.664136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.970 [2024-07-15 22:43:20.664152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 00:26:56.970 qpair failed and we were unable to recover it. 00:26:56.970 [2024-07-15 22:43:20.664350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.970 [2024-07-15 22:43:20.664364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 00:26:56.970 qpair failed and we were unable to recover it. 00:26:56.970 [2024-07-15 22:43:20.664486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.970 [2024-07-15 22:43:20.664500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 00:26:56.970 qpair failed and we were unable to recover it. 00:26:56.970 [2024-07-15 22:43:20.664697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.970 [2024-07-15 22:43:20.664711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 00:26:56.970 qpair failed and we were unable to recover it. 00:26:56.970 [2024-07-15 22:43:20.664896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.970 [2024-07-15 22:43:20.664910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 00:26:56.970 qpair failed and we were unable to recover it. 00:26:56.970 [2024-07-15 22:43:20.665166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.970 [2024-07-15 22:43:20.665180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 00:26:56.970 qpair failed and we were unable to recover it. 
00:26:56.970 [2024-07-15 22:43:20.665317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.970 [2024-07-15 22:43:20.665331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 00:26:56.970 qpair failed and we were unable to recover it. 00:26:56.970 [2024-07-15 22:43:20.665472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.970 [2024-07-15 22:43:20.665485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 00:26:56.970 qpair failed and we were unable to recover it. 00:26:56.970 [2024-07-15 22:43:20.665733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.970 [2024-07-15 22:43:20.665747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 00:26:56.970 qpair failed and we were unable to recover it. 00:26:56.970 [2024-07-15 22:43:20.665939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.970 [2024-07-15 22:43:20.665952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 00:26:56.970 qpair failed and we were unable to recover it. 00:26:56.970 [2024-07-15 22:43:20.666139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.970 [2024-07-15 22:43:20.666153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 00:26:56.970 qpair failed and we were unable to recover it. 00:26:56.970 [2024-07-15 22:43:20.666341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.970 [2024-07-15 22:43:20.666355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 00:26:56.970 qpair failed and we were unable to recover it. 00:26:56.970 [2024-07-15 22:43:20.666562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.970 [2024-07-15 22:43:20.666576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 00:26:56.970 qpair failed and we were unable to recover it. 00:26:56.970 [2024-07-15 22:43:20.666825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.970 [2024-07-15 22:43:20.666839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 00:26:56.970 qpair failed and we were unable to recover it. 00:26:56.970 [2024-07-15 22:43:20.667038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.970 [2024-07-15 22:43:20.667051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 00:26:56.970 qpair failed and we were unable to recover it. 00:26:56.970 [2024-07-15 22:43:20.667258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.970 [2024-07-15 22:43:20.667272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 00:26:56.970 qpair failed and we were unable to recover it. 
00:26:56.970 [2024-07-15 22:43:20.667522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.970 [2024-07-15 22:43:20.667535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 00:26:56.970 qpair failed and we were unable to recover it. 00:26:56.970 [2024-07-15 22:43:20.667808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.970 [2024-07-15 22:43:20.667822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 00:26:56.970 qpair failed and we were unable to recover it. 00:26:56.970 [2024-07-15 22:43:20.668080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.970 [2024-07-15 22:43:20.668093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 00:26:56.970 qpair failed and we were unable to recover it. 00:26:56.970 [2024-07-15 22:43:20.668292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.970 [2024-07-15 22:43:20.668306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 00:26:56.970 qpair failed and we were unable to recover it. 00:26:56.970 [2024-07-15 22:43:20.668450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.970 [2024-07-15 22:43:20.668463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 00:26:56.970 qpair failed and we were unable to recover it. 00:26:56.970 [2024-07-15 22:43:20.668665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.970 [2024-07-15 22:43:20.668679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 00:26:56.970 qpair failed and we were unable to recover it. 00:26:56.970 [2024-07-15 22:43:20.668808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.970 [2024-07-15 22:43:20.668821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 00:26:56.970 qpair failed and we were unable to recover it. 00:26:56.970 [2024-07-15 22:43:20.669029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.970 [2024-07-15 22:43:20.669042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 00:26:56.970 qpair failed and we were unable to recover it. 00:26:56.970 [2024-07-15 22:43:20.669267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.970 [2024-07-15 22:43:20.669281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 00:26:56.970 qpair failed and we were unable to recover it. 00:26:56.970 [2024-07-15 22:43:20.669412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.970 [2024-07-15 22:43:20.669426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 00:26:56.970 qpair failed and we were unable to recover it. 
00:26:56.970 [2024-07-15 22:43:20.669619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.970 [2024-07-15 22:43:20.669633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 00:26:56.970 qpair failed and we were unable to recover it. 00:26:56.970 [2024-07-15 22:43:20.669834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.970 [2024-07-15 22:43:20.669848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 00:26:56.970 qpair failed and we were unable to recover it. 00:26:56.970 [2024-07-15 22:43:20.670067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.970 [2024-07-15 22:43:20.670081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 00:26:56.970 qpair failed and we were unable to recover it. 00:26:56.970 [2024-07-15 22:43:20.670272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.970 [2024-07-15 22:43:20.670286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 00:26:56.970 qpair failed and we were unable to recover it. 00:26:56.970 [2024-07-15 22:43:20.670557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.970 [2024-07-15 22:43:20.670571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 00:26:56.970 qpair failed and we were unable to recover it. 00:26:56.970 [2024-07-15 22:43:20.670819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.970 [2024-07-15 22:43:20.670833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 00:26:56.970 qpair failed and we were unable to recover it. 00:26:56.970 [2024-07-15 22:43:20.671111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.970 [2024-07-15 22:43:20.671125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 00:26:56.970 qpair failed and we were unable to recover it. 00:26:56.970 [2024-07-15 22:43:20.671319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.970 [2024-07-15 22:43:20.671333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 00:26:56.970 qpair failed and we were unable to recover it. 00:26:56.970 [2024-07-15 22:43:20.671479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.970 [2024-07-15 22:43:20.671493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 00:26:56.970 qpair failed and we were unable to recover it. 00:26:56.970 [2024-07-15 22:43:20.671621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.971 [2024-07-15 22:43:20.671634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 00:26:56.971 qpair failed and we were unable to recover it. 
00:26:56.971 [2024-07-15 22:43:20.671881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.971 [2024-07-15 22:43:20.671895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 00:26:56.971 qpair failed and we were unable to recover it. 00:26:56.971 [2024-07-15 22:43:20.672087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.971 [2024-07-15 22:43:20.672101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 00:26:56.971 qpair failed and we were unable to recover it. 00:26:56.971 [2024-07-15 22:43:20.672243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.971 [2024-07-15 22:43:20.672258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 00:26:56.971 qpair failed and we were unable to recover it. 00:26:56.971 [2024-07-15 22:43:20.672451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.971 [2024-07-15 22:43:20.672465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 00:26:56.971 qpair failed and we were unable to recover it. 00:26:56.971 [2024-07-15 22:43:20.672736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.971 [2024-07-15 22:43:20.672772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 00:26:56.971 qpair failed and we were unable to recover it. 00:26:56.971 [2024-07-15 22:43:20.672995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.971 [2024-07-15 22:43:20.673024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 00:26:56.971 qpair failed and we were unable to recover it. 00:26:56.971 [2024-07-15 22:43:20.673288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.971 [2024-07-15 22:43:20.673302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 00:26:56.971 qpair failed and we were unable to recover it. 00:26:56.971 [2024-07-15 22:43:20.673551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.971 [2024-07-15 22:43:20.673564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 00:26:56.971 qpair failed and we were unable to recover it. 00:26:56.971 [2024-07-15 22:43:20.673746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.971 [2024-07-15 22:43:20.673760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 00:26:56.971 qpair failed and we were unable to recover it. 00:26:56.971 [2024-07-15 22:43:20.673961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.971 [2024-07-15 22:43:20.673974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 00:26:56.971 qpair failed and we were unable to recover it. 
00:26:56.971 [2024-07-15 22:43:20.674239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.971 [2024-07-15 22:43:20.674253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 00:26:56.971 qpair failed and we were unable to recover it. 00:26:56.971 [2024-07-15 22:43:20.674465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.971 [2024-07-15 22:43:20.674478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 00:26:56.971 qpair failed and we were unable to recover it. 00:26:56.971 [2024-07-15 22:43:20.674748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.971 [2024-07-15 22:43:20.674761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 00:26:56.971 qpair failed and we were unable to recover it. 00:26:56.971 [2024-07-15 22:43:20.674943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.971 [2024-07-15 22:43:20.674957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 00:26:56.971 qpair failed and we were unable to recover it. 00:26:56.971 [2024-07-15 22:43:20.675140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.971 [2024-07-15 22:43:20.675154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 00:26:56.971 qpair failed and we were unable to recover it. 00:26:56.971 [2024-07-15 22:43:20.675351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.971 [2024-07-15 22:43:20.675365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 00:26:56.971 qpair failed and we were unable to recover it. 00:26:56.971 [2024-07-15 22:43:20.675516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.971 [2024-07-15 22:43:20.675529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 00:26:56.971 qpair failed and we were unable to recover it. 00:26:56.971 [2024-07-15 22:43:20.675777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.971 [2024-07-15 22:43:20.675791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 00:26:56.971 qpair failed and we were unable to recover it. 00:26:56.971 [2024-07-15 22:43:20.675980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.971 [2024-07-15 22:43:20.675994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 00:26:56.971 qpair failed and we were unable to recover it. 00:26:56.971 [2024-07-15 22:43:20.676122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.971 [2024-07-15 22:43:20.676136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 00:26:56.971 qpair failed and we were unable to recover it. 
00:26:56.971 [2024-07-15 22:43:20.676264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.971 [2024-07-15 22:43:20.676278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 00:26:56.971 qpair failed and we were unable to recover it. 00:26:56.971 [2024-07-15 22:43:20.676462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.971 [2024-07-15 22:43:20.676476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 00:26:56.971 qpair failed and we were unable to recover it. 00:26:56.971 [2024-07-15 22:43:20.676724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.971 [2024-07-15 22:43:20.676738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 00:26:56.971 qpair failed and we were unable to recover it. 00:26:56.971 [2024-07-15 22:43:20.676933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.971 [2024-07-15 22:43:20.676947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 00:26:56.971 qpair failed and we were unable to recover it. 00:26:56.971 [2024-07-15 22:43:20.677072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.971 [2024-07-15 22:43:20.677086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 00:26:56.971 qpair failed and we were unable to recover it. 00:26:56.971 [2024-07-15 22:43:20.677281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.971 [2024-07-15 22:43:20.677295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 00:26:56.971 qpair failed and we were unable to recover it. 00:26:56.971 [2024-07-15 22:43:20.677496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.971 [2024-07-15 22:43:20.677509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 00:26:56.971 qpair failed and we were unable to recover it. 00:26:56.971 [2024-07-15 22:43:20.677737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.971 [2024-07-15 22:43:20.677751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 00:26:56.971 qpair failed and we were unable to recover it. 00:26:56.971 [2024-07-15 22:43:20.677882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.971 [2024-07-15 22:43:20.677895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 00:26:56.971 qpair failed and we were unable to recover it. 00:26:56.971 [2024-07-15 22:43:20.678093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.971 [2024-07-15 22:43:20.678106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 00:26:56.971 qpair failed and we were unable to recover it. 
00:26:56.971 [2024-07-15 22:43:20.678302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.971 [2024-07-15 22:43:20.678333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 00:26:56.971 qpair failed and we were unable to recover it. 00:26:56.971 [2024-07-15 22:43:20.678508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.971 [2024-07-15 22:43:20.678538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 00:26:56.971 qpair failed and we were unable to recover it. 00:26:56.971 [2024-07-15 22:43:20.678772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.971 [2024-07-15 22:43:20.678801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 00:26:56.971 qpair failed and we were unable to recover it. 00:26:56.971 [2024-07-15 22:43:20.679081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.971 [2024-07-15 22:43:20.679095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 00:26:56.971 qpair failed and we were unable to recover it. 00:26:56.971 [2024-07-15 22:43:20.679343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.971 [2024-07-15 22:43:20.679357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 00:26:56.971 qpair failed and we were unable to recover it. 00:26:56.971 [2024-07-15 22:43:20.679489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.971 [2024-07-15 22:43:20.679502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 00:26:56.971 qpair failed and we were unable to recover it. 00:26:56.971 [2024-07-15 22:43:20.679648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.971 [2024-07-15 22:43:20.679662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 00:26:56.971 qpair failed and we were unable to recover it. 00:26:56.971 [2024-07-15 22:43:20.679794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.971 [2024-07-15 22:43:20.679807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 00:26:56.971 qpair failed and we were unable to recover it. 00:26:56.971 [2024-07-15 22:43:20.680056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.971 [2024-07-15 22:43:20.680069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 00:26:56.971 qpair failed and we were unable to recover it. 00:26:56.971 [2024-07-15 22:43:20.680265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.971 [2024-07-15 22:43:20.680286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 00:26:56.971 qpair failed and we were unable to recover it. 
00:26:56.972 [2024-07-15 22:43:20.680469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.972 [2024-07-15 22:43:20.680483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 00:26:56.972 qpair failed and we were unable to recover it. 00:26:56.972 [2024-07-15 22:43:20.680632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.972 [2024-07-15 22:43:20.680645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 00:26:56.972 qpair failed and we were unable to recover it. 00:26:56.972 [2024-07-15 22:43:20.680824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.972 [2024-07-15 22:43:20.680837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 00:26:56.972 qpair failed and we were unable to recover it. 00:26:56.972 [2024-07-15 22:43:20.681067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.972 [2024-07-15 22:43:20.681081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 00:26:56.972 qpair failed and we were unable to recover it. 00:26:56.972 [2024-07-15 22:43:20.681278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.972 [2024-07-15 22:43:20.681297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 00:26:56.972 qpair failed and we were unable to recover it. 00:26:56.972 [2024-07-15 22:43:20.681429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.972 [2024-07-15 22:43:20.681443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 00:26:56.972 qpair failed and we were unable to recover it. 00:26:56.972 [2024-07-15 22:43:20.681585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.972 [2024-07-15 22:43:20.681598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 00:26:56.972 qpair failed and we were unable to recover it. 00:26:56.972 [2024-07-15 22:43:20.681850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.972 [2024-07-15 22:43:20.681864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 00:26:56.972 qpair failed and we were unable to recover it. 00:26:56.972 [2024-07-15 22:43:20.682009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.972 [2024-07-15 22:43:20.682022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 00:26:56.972 qpair failed and we were unable to recover it. 00:26:56.972 [2024-07-15 22:43:20.682268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.972 [2024-07-15 22:43:20.682282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 00:26:56.972 qpair failed and we were unable to recover it. 
00:26:56.972 [2024-07-15 22:43:20.682463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.972 [2024-07-15 22:43:20.682477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 00:26:56.972 qpair failed and we were unable to recover it. 00:26:56.972 [2024-07-15 22:43:20.682620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.972 [2024-07-15 22:43:20.682633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 00:26:56.972 qpair failed and we were unable to recover it. 00:26:56.972 [2024-07-15 22:43:20.682833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.972 [2024-07-15 22:43:20.682847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 00:26:56.972 qpair failed and we were unable to recover it. 00:26:56.972 [2024-07-15 22:43:20.683093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.972 [2024-07-15 22:43:20.683107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 00:26:56.972 qpair failed and we were unable to recover it. 00:26:56.972 [2024-07-15 22:43:20.683375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.972 [2024-07-15 22:43:20.683389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 00:26:56.972 qpair failed and we were unable to recover it. 00:26:56.972 [2024-07-15 22:43:20.683505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.972 [2024-07-15 22:43:20.683518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 00:26:56.972 qpair failed and we were unable to recover it. 00:26:56.972 [2024-07-15 22:43:20.683738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.972 [2024-07-15 22:43:20.683751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 00:26:56.972 qpair failed and we were unable to recover it. 00:26:56.972 [2024-07-15 22:43:20.684025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.972 [2024-07-15 22:43:20.684039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 00:26:56.972 qpair failed and we were unable to recover it. 00:26:56.972 [2024-07-15 22:43:20.684252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.972 [2024-07-15 22:43:20.684267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 00:26:56.972 qpair failed and we were unable to recover it. 00:26:56.972 [2024-07-15 22:43:20.684411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.972 [2024-07-15 22:43:20.684425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 00:26:56.972 qpair failed and we were unable to recover it. 
00:26:56.972 [2024-07-15 22:43:20.684616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.972 [2024-07-15 22:43:20.684630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 00:26:56.972 qpair failed and we were unable to recover it. 00:26:56.972 [2024-07-15 22:43:20.684828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.972 [2024-07-15 22:43:20.684841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 00:26:56.972 qpair failed and we were unable to recover it. 00:26:56.972 [2024-07-15 22:43:20.685062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.972 [2024-07-15 22:43:20.685075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 00:26:56.972 qpair failed and we were unable to recover it. 00:26:56.972 [2024-07-15 22:43:20.685324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.972 [2024-07-15 22:43:20.685338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 00:26:56.972 qpair failed and we were unable to recover it. 00:26:56.972 [2024-07-15 22:43:20.685539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.972 [2024-07-15 22:43:20.685553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 00:26:56.972 qpair failed and we were unable to recover it. 00:26:56.972 [2024-07-15 22:43:20.685721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.972 [2024-07-15 22:43:20.685734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 00:26:56.972 qpair failed and we were unable to recover it. 00:26:56.972 [2024-07-15 22:43:20.686006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.972 [2024-07-15 22:43:20.686019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 00:26:56.972 qpair failed and we were unable to recover it. 00:26:56.972 [2024-07-15 22:43:20.686220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.972 [2024-07-15 22:43:20.686237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 00:26:56.972 qpair failed and we were unable to recover it. 00:26:56.972 [2024-07-15 22:43:20.686421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.972 [2024-07-15 22:43:20.686434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 00:26:56.972 qpair failed and we were unable to recover it. 00:26:56.972 [2024-07-15 22:43:20.686549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.972 [2024-07-15 22:43:20.686563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 00:26:56.972 qpair failed and we were unable to recover it. 
00:26:56.972 [2024-07-15 22:43:20.686756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.972 [2024-07-15 22:43:20.686770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 00:26:56.972 qpair failed and we were unable to recover it. 00:26:56.972 [2024-07-15 22:43:20.686965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.972 [2024-07-15 22:43:20.686979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 00:26:56.972 qpair failed and we were unable to recover it. 00:26:56.972 [2024-07-15 22:43:20.687173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.972 [2024-07-15 22:43:20.687186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 00:26:56.972 qpair failed and we were unable to recover it. 00:26:56.972 [2024-07-15 22:43:20.687458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.972 [2024-07-15 22:43:20.687471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 00:26:56.972 qpair failed and we were unable to recover it. 00:26:56.972 [2024-07-15 22:43:20.687652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.972 [2024-07-15 22:43:20.687665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 00:26:56.972 qpair failed and we were unable to recover it. 00:26:56.973 [2024-07-15 22:43:20.687795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.973 [2024-07-15 22:43:20.687808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 00:26:56.973 qpair failed and we were unable to recover it. 00:26:56.973 [2024-07-15 22:43:20.687933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.973 [2024-07-15 22:43:20.687946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 00:26:56.973 qpair failed and we were unable to recover it. 00:26:56.973 [2024-07-15 22:43:20.688146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.973 [2024-07-15 22:43:20.688159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 00:26:56.973 qpair failed and we were unable to recover it. 00:26:56.973 [2024-07-15 22:43:20.688410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.973 [2024-07-15 22:43:20.688424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 00:26:56.973 qpair failed and we were unable to recover it. 00:26:56.973 [2024-07-15 22:43:20.688551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.973 [2024-07-15 22:43:20.688564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 00:26:56.973 qpair failed and we were unable to recover it. 
00:26:56.973 [2024-07-15 22:43:20.688809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.973 [2024-07-15 22:43:20.688822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:56.973 qpair failed and we were unable to recover it.
00:26:56.973 [2024-07-15 22:43:20.689022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.973 [2024-07-15 22:43:20.689035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:56.973 qpair failed and we were unable to recover it.
00:26:56.973 [2024-07-15 22:43:20.689322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.973 [2024-07-15 22:43:20.689336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:56.973 qpair failed and we were unable to recover it.
00:26:56.973 [2024-07-15 22:43:20.689534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.973 [2024-07-15 22:43:20.689548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:56.973 qpair failed and we were unable to recover it.
00:26:56.973 [2024-07-15 22:43:20.689769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.973 [2024-07-15 22:43:20.689786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:56.973 qpair failed and we were unable to recover it.
00:26:56.973 [2024-07-15 22:43:20.690046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.973 [2024-07-15 22:43:20.690059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:56.973 qpair failed and we were unable to recover it.
00:26:56.973 [2024-07-15 22:43:20.690141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.973 [2024-07-15 22:43:20.690154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:56.973 qpair failed and we were unable to recover it.
00:26:56.973 [2024-07-15 22:43:20.690367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.973 [2024-07-15 22:43:20.690381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:56.973 qpair failed and we were unable to recover it.
00:26:56.973 [2024-07-15 22:43:20.690579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.973 [2024-07-15 22:43:20.690592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:56.973 qpair failed and we were unable to recover it.
00:26:56.973 [2024-07-15 22:43:20.690790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.973 [2024-07-15 22:43:20.690803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:56.973 qpair failed and we were unable to recover it.
00:26:56.973 [2024-07-15 22:43:20.691045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.973 [2024-07-15 22:43:20.691058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:56.973 qpair failed and we were unable to recover it.
00:26:56.973 [2024-07-15 22:43:20.691196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.973 [2024-07-15 22:43:20.691210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:56.973 qpair failed and we were unable to recover it.
00:26:56.973 [2024-07-15 22:43:20.691363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.973 [2024-07-15 22:43:20.691393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:56.973 qpair failed and we were unable to recover it.
00:26:56.973 [2024-07-15 22:43:20.691623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.973 [2024-07-15 22:43:20.691652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:56.973 qpair failed and we were unable to recover it.
00:26:56.973 [2024-07-15 22:43:20.691933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.973 [2024-07-15 22:43:20.691962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:56.973 qpair failed and we were unable to recover it.
00:26:56.973 [2024-07-15 22:43:20.692140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.973 [2024-07-15 22:43:20.692169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:56.973 qpair failed and we were unable to recover it.
00:26:56.973 [2024-07-15 22:43:20.692419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.973 [2024-07-15 22:43:20.692433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:56.973 qpair failed and we were unable to recover it.
00:26:56.973 [2024-07-15 22:43:20.692619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.973 [2024-07-15 22:43:20.692632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:56.973 qpair failed and we were unable to recover it.
00:26:56.973 [2024-07-15 22:43:20.692934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.973 [2024-07-15 22:43:20.692947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:56.973 qpair failed and we were unable to recover it.
00:26:56.973 [2024-07-15 22:43:20.693129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.973 [2024-07-15 22:43:20.693142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:56.973 qpair failed and we were unable to recover it.
00:26:56.973 [2024-07-15 22:43:20.693401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.973 [2024-07-15 22:43:20.693415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:56.973 qpair failed and we were unable to recover it.
00:26:56.973 [2024-07-15 22:43:20.693560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.973 [2024-07-15 22:43:20.693574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:56.973 qpair failed and we were unable to recover it.
00:26:56.973 [2024-07-15 22:43:20.693765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.973 [2024-07-15 22:43:20.693779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:56.973 qpair failed and we were unable to recover it.
00:26:56.973 [2024-07-15 22:43:20.693906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.973 [2024-07-15 22:43:20.693921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:56.973 qpair failed and we were unable to recover it.
00:26:56.973 [2024-07-15 22:43:20.694059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.973 [2024-07-15 22:43:20.694073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:56.973 qpair failed and we were unable to recover it.
00:26:56.973 [2024-07-15 22:43:20.694199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.973 [2024-07-15 22:43:20.694212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:56.973 qpair failed and we were unable to recover it.
00:26:56.973 [2024-07-15 22:43:20.694420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.973 [2024-07-15 22:43:20.694434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:56.973 qpair failed and we were unable to recover it.
00:26:56.973 [2024-07-15 22:43:20.694625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.973 [2024-07-15 22:43:20.694635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:56.973 qpair failed and we were unable to recover it.
00:26:56.973 [2024-07-15 22:43:20.694771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.973 [2024-07-15 22:43:20.694781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:56.973 qpair failed and we were unable to recover it.
00:26:56.973 [2024-07-15 22:43:20.694967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.973 [2024-07-15 22:43:20.694977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:56.973 qpair failed and we were unable to recover it.
00:26:56.973 [2024-07-15 22:43:20.695155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.973 [2024-07-15 22:43:20.695185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:56.973 qpair failed and we were unable to recover it.
00:26:56.973 [2024-07-15 22:43:20.695568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.973 [2024-07-15 22:43:20.695638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420
00:26:56.973 qpair failed and we were unable to recover it.
00:26:56.973 [2024-07-15 22:43:20.695869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.973 [2024-07-15 22:43:20.695901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:56.973 qpair failed and we were unable to recover it.
00:26:56.973 [2024-07-15 22:43:20.696053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.973 [2024-07-15 22:43:20.696094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:56.973 qpair failed and we were unable to recover it.
00:26:56.973 [2024-07-15 22:43:20.696246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.973 [2024-07-15 22:43:20.696259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:56.973 qpair failed and we were unable to recover it.
00:26:56.973 [2024-07-15 22:43:20.696460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.973 [2024-07-15 22:43:20.696473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:56.974 qpair failed and we were unable to recover it.
00:26:56.974 [2024-07-15 22:43:20.696731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.974 [2024-07-15 22:43:20.696745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:56.974 qpair failed and we were unable to recover it.
00:26:56.974 [2024-07-15 22:43:20.696968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.974 [2024-07-15 22:43:20.696981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:56.974 qpair failed and we were unable to recover it.
00:26:56.974 [2024-07-15 22:43:20.697118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.974 [2024-07-15 22:43:20.697133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:56.974 qpair failed and we were unable to recover it.
00:26:56.974 [2024-07-15 22:43:20.697256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.974 [2024-07-15 22:43:20.697270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:56.974 qpair failed and we were unable to recover it.
00:26:56.974 [2024-07-15 22:43:20.697406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.974 [2024-07-15 22:43:20.697420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:56.974 qpair failed and we were unable to recover it.
00:26:56.974 [2024-07-15 22:43:20.697619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.974 [2024-07-15 22:43:20.697632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:56.974 qpair failed and we were unable to recover it.
00:26:56.974 [2024-07-15 22:43:20.697820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.974 [2024-07-15 22:43:20.697833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:56.974 qpair failed and we were unable to recover it.
00:26:56.974 [2024-07-15 22:43:20.697953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.974 [2024-07-15 22:43:20.697966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:56.974 qpair failed and we were unable to recover it.
00:26:56.974 [2024-07-15 22:43:20.698215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.974 [2024-07-15 22:43:20.698235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:56.974 qpair failed and we were unable to recover it.
00:26:56.974 [2024-07-15 22:43:20.698431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.974 [2024-07-15 22:43:20.698445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:56.974 qpair failed and we were unable to recover it.
00:26:56.974 [2024-07-15 22:43:20.698583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.974 [2024-07-15 22:43:20.698596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:56.974 qpair failed and we were unable to recover it.
00:26:56.974 [2024-07-15 22:43:20.698849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.974 [2024-07-15 22:43:20.698863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:56.974 qpair failed and we were unable to recover it.
00:26:56.974 [2024-07-15 22:43:20.699052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.974 [2024-07-15 22:43:20.699066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:56.974 qpair failed and we were unable to recover it.
00:26:56.974 [2024-07-15 22:43:20.699296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.974 [2024-07-15 22:43:20.699326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:56.974 qpair failed and we were unable to recover it.
00:26:56.974 [2024-07-15 22:43:20.699485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.974 [2024-07-15 22:43:20.699515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:56.974 qpair failed and we were unable to recover it.
00:26:56.974 [2024-07-15 22:43:20.699854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.974 [2024-07-15 22:43:20.699883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:56.974 qpair failed and we were unable to recover it.
00:26:56.974 [2024-07-15 22:43:20.700161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.974 [2024-07-15 22:43:20.700174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:56.974 qpair failed and we were unable to recover it.
00:26:56.974 [2024-07-15 22:43:20.700304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.974 [2024-07-15 22:43:20.700318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:56.974 qpair failed and we were unable to recover it.
00:26:56.974 [2024-07-15 22:43:20.700585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.974 [2024-07-15 22:43:20.700598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:56.974 qpair failed and we were unable to recover it.
00:26:56.974 [2024-07-15 22:43:20.700736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.974 [2024-07-15 22:43:20.700750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:56.974 qpair failed and we were unable to recover it.
00:26:56.974 [2024-07-15 22:43:20.700934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.974 [2024-07-15 22:43:20.700947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:56.974 qpair failed and we were unable to recover it.
00:26:56.974 [2024-07-15 22:43:20.701126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.974 [2024-07-15 22:43:20.701139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:56.974 qpair failed and we were unable to recover it.
00:26:56.974 [2024-07-15 22:43:20.701333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.974 [2024-07-15 22:43:20.701347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:56.974 qpair failed and we were unable to recover it.
00:26:56.974 [2024-07-15 22:43:20.701627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.974 [2024-07-15 22:43:20.701640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:56.974 qpair failed and we were unable to recover it.
00:26:56.974 [2024-07-15 22:43:20.701831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.974 [2024-07-15 22:43:20.701844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:56.974 qpair failed and we were unable to recover it.
00:26:56.974 [2024-07-15 22:43:20.702129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.974 [2024-07-15 22:43:20.702142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:56.974 qpair failed and we were unable to recover it.
00:26:56.974 [2024-07-15 22:43:20.702412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.974 [2024-07-15 22:43:20.702426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:56.974 qpair failed and we were unable to recover it.
00:26:56.974 [2024-07-15 22:43:20.702615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.974 [2024-07-15 22:43:20.702644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:56.974 qpair failed and we were unable to recover it.
00:26:56.974 [2024-07-15 22:43:20.702874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.974 [2024-07-15 22:43:20.702903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:56.974 qpair failed and we were unable to recover it.
00:26:56.974 [2024-07-15 22:43:20.703085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.974 [2024-07-15 22:43:20.703114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:56.974 qpair failed and we were unable to recover it.
00:26:56.974 [2024-07-15 22:43:20.703323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.974 [2024-07-15 22:43:20.703337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:56.974 qpair failed and we were unable to recover it.
00:26:56.974 [2024-07-15 22:43:20.703586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.974 [2024-07-15 22:43:20.703599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:56.974 qpair failed and we were unable to recover it.
00:26:56.974 [2024-07-15 22:43:20.703797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.974 [2024-07-15 22:43:20.703810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:56.974 qpair failed and we were unable to recover it.
00:26:56.974 [2024-07-15 22:43:20.703993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.974 [2024-07-15 22:43:20.704006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:56.974 qpair failed and we were unable to recover it.
00:26:56.974 [2024-07-15 22:43:20.704199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.974 [2024-07-15 22:43:20.704212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:56.974 qpair failed and we were unable to recover it.
00:26:56.974 [2024-07-15 22:43:20.704452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.974 [2024-07-15 22:43:20.704471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420
00:26:56.974 qpair failed and we were unable to recover it.
00:26:56.974 [2024-07-15 22:43:20.704658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.974 [2024-07-15 22:43:20.704673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420
00:26:56.974 qpair failed and we were unable to recover it.
00:26:56.974 [2024-07-15 22:43:20.704858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.974 [2024-07-15 22:43:20.704872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420
00:26:56.974 qpair failed and we were unable to recover it.
00:26:56.974 [2024-07-15 22:43:20.705134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.974 [2024-07-15 22:43:20.705148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420
00:26:56.974 qpair failed and we were unable to recover it.
00:26:56.974 [2024-07-15 22:43:20.705389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.974 [2024-07-15 22:43:20.705404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420
00:26:56.974 qpair failed and we were unable to recover it.
00:26:56.974 [2024-07-15 22:43:20.705624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.975 [2024-07-15 22:43:20.705640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420
00:26:56.975 qpair failed and we were unable to recover it.
00:26:56.975 [2024-07-15 22:43:20.705879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.975 [2024-07-15 22:43:20.705893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420
00:26:56.975 qpair failed and we were unable to recover it.
00:26:56.975 [2024-07-15 22:43:20.706149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.975 [2024-07-15 22:43:20.706163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420
00:26:56.975 qpair failed and we were unable to recover it.
00:26:56.975 [2024-07-15 22:43:20.706288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.975 [2024-07-15 22:43:20.706302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420
00:26:56.975 qpair failed and we were unable to recover it.
00:26:56.975 [2024-07-15 22:43:20.706492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.975 [2024-07-15 22:43:20.706507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420
00:26:56.975 qpair failed and we were unable to recover it.
00:26:56.975 [2024-07-15 22:43:20.706732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.975 [2024-07-15 22:43:20.706762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420
00:26:56.975 qpair failed and we were unable to recover it.
00:26:56.975 [2024-07-15 22:43:20.707001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.975 [2024-07-15 22:43:20.707032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420
00:26:56.975 qpair failed and we were unable to recover it.
00:26:56.975 [2024-07-15 22:43:20.707258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.975 [2024-07-15 22:43:20.707273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420
00:26:56.975 qpair failed and we were unable to recover it.
00:26:56.975 [2024-07-15 22:43:20.707473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.975 [2024-07-15 22:43:20.707487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420
00:26:56.975 qpair failed and we were unable to recover it.
00:26:56.975 [2024-07-15 22:43:20.707636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.975 [2024-07-15 22:43:20.707650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420
00:26:56.975 qpair failed and we were unable to recover it.
00:26:56.975 [2024-07-15 22:43:20.707876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.975 [2024-07-15 22:43:20.707890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420
00:26:56.975 qpair failed and we were unable to recover it.
00:26:56.975 [2024-07-15 22:43:20.708136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.975 [2024-07-15 22:43:20.708150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420
00:26:56.975 qpair failed and we were unable to recover it.
00:26:56.975 [2024-07-15 22:43:20.708344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.975 [2024-07-15 22:43:20.708358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420
00:26:56.975 qpair failed and we were unable to recover it.
00:26:56.975 [2024-07-15 22:43:20.708509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.975 [2024-07-15 22:43:20.708523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420
00:26:56.975 qpair failed and we were unable to recover it.
00:26:56.975 [2024-07-15 22:43:20.708619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.975 [2024-07-15 22:43:20.708635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420
00:26:56.975 qpair failed and we were unable to recover it.
00:26:56.975 [2024-07-15 22:43:20.708886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.975 [2024-07-15 22:43:20.708900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420
00:26:56.975 qpair failed and we were unable to recover it.
00:26:56.975 [2024-07-15 22:43:20.709028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.975 [2024-07-15 22:43:20.709041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420
00:26:56.975 qpair failed and we were unable to recover it.
00:26:56.975 [2024-07-15 22:43:20.709159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.975 [2024-07-15 22:43:20.709173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420
00:26:56.975 qpair failed and we were unable to recover it.
00:26:56.975 [2024-07-15 22:43:20.709367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.975 [2024-07-15 22:43:20.709381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420
00:26:56.975 qpair failed and we were unable to recover it.
00:26:56.975 [2024-07-15 22:43:20.709505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.975 [2024-07-15 22:43:20.709518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420
00:26:56.975 qpair failed and we were unable to recover it.
00:26:56.975 [2024-07-15 22:43:20.709719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.975 [2024-07-15 22:43:20.709733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420
00:26:56.975 qpair failed and we were unable to recover it.
00:26:56.975 [2024-07-15 22:43:20.709865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.975 [2024-07-15 22:43:20.709879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420
00:26:56.975 qpair failed and we were unable to recover it.
00:26:56.975 [2024-07-15 22:43:20.710046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.975 [2024-07-15 22:43:20.710062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420
00:26:56.975 qpair failed and we were unable to recover it.
00:26:56.975 [2024-07-15 22:43:20.710256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.975 [2024-07-15 22:43:20.710271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420
00:26:56.975 qpair failed and we were unable to recover it.
00:26:56.975 [2024-07-15 22:43:20.710455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.975 [2024-07-15 22:43:20.710469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420
00:26:56.975 qpair failed and we were unable to recover it.
00:26:56.975 [2024-07-15 22:43:20.710592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.975 [2024-07-15 22:43:20.710606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420
00:26:56.975 qpair failed and we were unable to recover it.
00:26:56.975 [2024-07-15 22:43:20.710800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.975 [2024-07-15 22:43:20.710814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420
00:26:56.975 qpair failed and we were unable to recover it.
00:26:56.975 [2024-07-15 22:43:20.710998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.975 [2024-07-15 22:43:20.711012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420
00:26:56.975 qpair failed and we were unable to recover it.
00:26:56.975 [2024-07-15 22:43:20.711209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.975 [2024-07-15 22:43:20.711223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420
00:26:56.975 qpair failed and we were unable to recover it.
00:26:56.975 [2024-07-15 22:43:20.711428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.975 [2024-07-15 22:43:20.711442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420
00:26:56.975 qpair failed and we were unable to recover it.
00:26:56.975 [2024-07-15 22:43:20.711669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.975 [2024-07-15 22:43:20.711683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420
00:26:56.975 qpair failed and we were unable to recover it.
00:26:56.975 [2024-07-15 22:43:20.711808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.975 [2024-07-15 22:43:20.711822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420
00:26:56.975 qpair failed and we were unable to recover it.
00:26:56.975 [2024-07-15 22:43:20.712023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.975 [2024-07-15 22:43:20.712037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420
00:26:56.975 qpair failed and we were unable to recover it.
00:26:56.975 [2024-07-15 22:43:20.712185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.975 [2024-07-15 22:43:20.712199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420
00:26:56.975 qpair failed and we were unable to recover it.
00:26:56.975 [2024-07-15 22:43:20.712412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.975 [2024-07-15 22:43:20.712444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420
00:26:56.975 qpair failed and we were unable to recover it.
00:26:56.975 [2024-07-15 22:43:20.712609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.975 [2024-07-15 22:43:20.712640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420
00:26:56.975 qpair failed and we were unable to recover it.
00:26:56.975 [2024-07-15 22:43:20.712888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.975 [2024-07-15 22:43:20.712919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420
00:26:56.975 qpair failed and we were unable to recover it.
00:26:56.975 [2024-07-15 22:43:20.713198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.975 [2024-07-15 22:43:20.713212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420
00:26:56.975 qpair failed and we were unable to recover it.
00:26:56.975 [2024-07-15 22:43:20.713492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.975 [2024-07-15 22:43:20.713506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420
00:26:56.975 qpair failed and we were unable to recover it.
00:26:56.975 [2024-07-15 22:43:20.713722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.975 [2024-07-15 22:43:20.713753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420
00:26:56.975 qpair failed and we were unable to recover it.
00:26:56.975 [2024-07-15 22:43:20.714038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.975 [2024-07-15 22:43:20.714068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420
00:26:56.975 qpair failed and we were unable to recover it.
00:26:56.975 [2024-07-15 22:43:20.714358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.976 [2024-07-15 22:43:20.714389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420
00:26:56.976 qpair failed and we were unable to recover it.
00:26:56.976 [2024-07-15 22:43:20.714556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.976 [2024-07-15 22:43:20.714586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420
00:26:56.976 qpair failed and we were unable to recover it.
00:26:56.976 [2024-07-15 22:43:20.714769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.976 [2024-07-15 22:43:20.714800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420
00:26:56.976 qpair failed and we were unable to recover it.
00:26:56.976 [2024-07-15 22:43:20.715090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.976 [2024-07-15 22:43:20.715103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420
00:26:56.976 qpair failed and we were unable to recover it.
00:26:56.976 [2024-07-15 22:43:20.715306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.976 [2024-07-15 22:43:20.715320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420
00:26:56.976 qpair failed and we were unable to recover it.
00:26:56.976 [2024-07-15 22:43:20.715531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.976 [2024-07-15 22:43:20.715545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420
00:26:56.976 qpair failed and we were unable to recover it.
00:26:56.976 [2024-07-15 22:43:20.715743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.976 [2024-07-15 22:43:20.715756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420
00:26:56.976 qpair failed and we were unable to recover it.
00:26:56.976 [2024-07-15 22:43:20.715949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.976 [2024-07-15 22:43:20.715962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420
00:26:56.976 qpair failed and we were unable to recover it.
00:26:56.976 [2024-07-15 22:43:20.716165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.976 [2024-07-15 22:43:20.716200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420
00:26:56.976 qpair failed and we were unable to recover it.
00:26:56.976 [2024-07-15 22:43:20.716430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.976 [2024-07-15 22:43:20.716461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420
00:26:56.976 qpair failed and we were unable to recover it.
00:26:56.976 [2024-07-15 22:43:20.716627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.976 [2024-07-15 22:43:20.716658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420
00:26:56.976 qpair failed and we were unable to recover it.
00:26:56.976 [2024-07-15 22:43:20.716946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.976 [2024-07-15 22:43:20.716959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420
00:26:56.976 qpair failed and we were unable to recover it.
00:26:56.976 [2024-07-15 22:43:20.717222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.976 [2024-07-15 22:43:20.717246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420
00:26:56.976 qpair failed and we were unable to recover it.
00:26:56.976 [2024-07-15 22:43:20.717379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.976 [2024-07-15 22:43:20.717393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420
00:26:56.976 qpair failed and we were unable to recover it.
00:26:56.976 [2024-07-15 22:43:20.717668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.976 [2024-07-15 22:43:20.717698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420
00:26:56.976 qpair failed and we were unable to recover it.
00:26:56.976 [2024-07-15 22:43:20.717877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.976 [2024-07-15 22:43:20.717908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420
00:26:56.976 qpair failed and we were unable to recover it.
00:26:56.976 [2024-07-15 22:43:20.718128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.976 [2024-07-15 22:43:20.718158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420
00:26:56.976 qpair failed and we were unable to recover it.
00:26:56.976 [2024-07-15 22:43:20.718379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.976 [2024-07-15 22:43:20.718411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420
00:26:56.976 qpair failed and we were unable to recover it.
00:26:56.976 [2024-07-15 22:43:20.718634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.976 [2024-07-15 22:43:20.718665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420
00:26:56.976 qpair failed and we were unable to recover it.
00:26:56.976 [2024-07-15 22:43:20.718834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.976 [2024-07-15 22:43:20.718865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420
00:26:56.976 qpair failed and we were unable to recover it.
00:26:56.976 [2024-07-15 22:43:20.719077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.976 [2024-07-15 22:43:20.719091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420
00:26:56.976 qpair failed and we were unable to recover it.
00:26:56.976 [2024-07-15 22:43:20.719368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.976 [2024-07-15 22:43:20.719383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420
00:26:56.976 qpair failed and we were unable to recover it.
00:26:56.976 [2024-07-15 22:43:20.719661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.976 [2024-07-15 22:43:20.719675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420
00:26:56.976 qpair failed and we were unable to recover it.
00:26:56.976 [2024-07-15 22:43:20.719909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.976 [2024-07-15 22:43:20.719922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420
00:26:56.976 qpair failed and we were unable to recover it.
00:26:56.976 [2024-07-15 22:43:20.720229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.976 [2024-07-15 22:43:20.720243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420
00:26:56.976 qpair failed and we were unable to recover it.
00:26:56.976 [2024-07-15 22:43:20.720520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.976 [2024-07-15 22:43:20.720534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420
00:26:56.976 qpair failed and we were unable to recover it.
00:26:56.976 [2024-07-15 22:43:20.720739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.976 [2024-07-15 22:43:20.720753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420
00:26:56.976 qpair failed and we were unable to recover it.
00:26:56.976 [2024-07-15 22:43:20.720999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.976 [2024-07-15 22:43:20.721013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420
00:26:56.976 qpair failed and we were unable to recover it.
00:26:56.976 [2024-07-15 22:43:20.721229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.976 [2024-07-15 22:43:20.721243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420
00:26:56.976 qpair failed and we were unable to recover it.
00:26:56.976 [2024-07-15 22:43:20.721385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.976 [2024-07-15 22:43:20.721413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420
00:26:56.976 qpair failed and we were unable to recover it.
00:26:56.976 [2024-07-15 22:43:20.721635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.976 [2024-07-15 22:43:20.721664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420
00:26:56.976 qpair failed and we were unable to recover it.
00:26:56.976 [2024-07-15 22:43:20.721844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.976 [2024-07-15 22:43:20.721874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420
00:26:56.976 qpair failed and we were unable to recover it.
00:26:56.976 [2024-07-15 22:43:20.722032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.976 [2024-07-15 22:43:20.722046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420
00:26:56.976 qpair failed and we were unable to recover it.
00:26:56.976 [2024-07-15 22:43:20.722238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.976 [2024-07-15 22:43:20.722276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420
00:26:56.976 qpair failed and we were unable to recover it.
00:26:56.976 [2024-07-15 22:43:20.722526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.976 [2024-07-15 22:43:20.722556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420
00:26:56.976 qpair failed and we were unable to recover it.
00:26:56.977 [2024-07-15 22:43:20.722739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.977 [2024-07-15 22:43:20.722774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420
00:26:56.977 qpair failed and we were unable to recover it.
00:26:56.977 [2024-07-15 22:43:20.723074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.977 [2024-07-15 22:43:20.723088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420
00:26:56.977 qpair failed and we were unable to recover it.
00:26:56.977 [2024-07-15 22:43:20.723238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.977 [2024-07-15 22:43:20.723252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420
00:26:56.977 qpair failed and we were unable to recover it.
00:26:56.977 [2024-07-15 22:43:20.723374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.977 [2024-07-15 22:43:20.723388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420
00:26:56.977 qpair failed and we were unable to recover it.
00:26:56.977 [2024-07-15 22:43:20.723663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.977 [2024-07-15 22:43:20.723676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420
00:26:56.977 qpair failed and we were unable to recover it.
00:26:56.977 [2024-07-15 22:43:20.723829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.977 [2024-07-15 22:43:20.723842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420
00:26:56.977 qpair failed and we were unable to recover it.
00:26:56.977 [2024-07-15 22:43:20.724110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.977 [2024-07-15 22:43:20.724140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420
00:26:56.977 qpair failed and we were unable to recover it.
00:26:56.977 [2024-07-15 22:43:20.724300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.977 [2024-07-15 22:43:20.724331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420
00:26:56.977 qpair failed and we were unable to recover it.
00:26:56.977 [2024-07-15 22:43:20.724561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.977 [2024-07-15 22:43:20.724590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420
00:26:56.977 qpair failed and we were unable to recover it.
00:26:56.977 [2024-07-15 22:43:20.724905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.977 [2024-07-15 22:43:20.724936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420
00:26:56.977 qpair failed and we were unable to recover it.
00:26:56.977 [2024-07-15 22:43:20.725219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.977 [2024-07-15 22:43:20.725273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420
00:26:56.977 qpair failed and we were unable to recover it.
00:26:56.977 [2024-07-15 22:43:20.725558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.977 [2024-07-15 22:43:20.725588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420
00:26:56.977 qpair failed and we were unable to recover it.
00:26:56.977 [2024-07-15 22:43:20.725828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.977 [2024-07-15 22:43:20.725857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420
00:26:56.977 qpair failed and we were unable to recover it.
00:26:56.977 [2024-07-15 22:43:20.726166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.977 [2024-07-15 22:43:20.726196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420
00:26:56.977 qpair failed and we were unable to recover it.
00:26:56.977 [2024-07-15 22:43:20.726455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.977 [2024-07-15 22:43:20.726486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420
00:26:56.977 qpair failed and we were unable to recover it.
00:26:56.977 [2024-07-15 22:43:20.726730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.977 [2024-07-15 22:43:20.726759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420
00:26:56.977 qpair failed and we were unable to recover it.
00:26:56.977 [2024-07-15 22:43:20.727006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.977 [2024-07-15 22:43:20.727036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420
00:26:56.977 qpair failed and we were unable to recover it.
00:26:56.977 [2024-07-15 22:43:20.727314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.977 [2024-07-15 22:43:20.727329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420
00:26:56.977 qpair failed and we were unable to recover it.
00:26:56.977 [2024-07-15 22:43:20.727529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.977 [2024-07-15 22:43:20.727543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420
00:26:56.977 qpair failed and we were unable to recover it.
00:26:56.977 [2024-07-15 22:43:20.727744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.977 [2024-07-15 22:43:20.727758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420
00:26:56.977 qpair failed and we were unable to recover it.
00:26:56.977 [2024-07-15 22:43:20.727945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.977 [2024-07-15 22:43:20.727959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420
00:26:56.977 qpair failed and we were unable to recover it.
00:26:56.977 [2024-07-15 22:43:20.728205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.977 [2024-07-15 22:43:20.728220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420
00:26:56.977 qpair failed and we were unable to recover it.
00:26:56.977 [2024-07-15 22:43:20.728360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.977 [2024-07-15 22:43:20.728375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420
00:26:56.977 qpair failed and we were unable to recover it.
00:26:56.977 [2024-07-15 22:43:20.728622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.977 [2024-07-15 22:43:20.728636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420
00:26:56.977 qpair failed and we were unable to recover it.
00:26:56.977 [2024-07-15 22:43:20.728768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.977 [2024-07-15 22:43:20.728782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420
00:26:56.977 qpair failed and we were unable to recover it.
00:26:56.977 [2024-07-15 22:43:20.728912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.977 [2024-07-15 22:43:20.728925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420
00:26:56.977 qpair failed and we were unable to recover it.
00:26:56.977 [2024-07-15 22:43:20.729103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.977 [2024-07-15 22:43:20.729117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420
00:26:56.977 qpair failed and we were unable to recover it.
00:26:56.977 [2024-07-15 22:43:20.729345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.977 [2024-07-15 22:43:20.729359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420
00:26:56.977 qpair failed and we were unable to recover it.
00:26:56.977 [2024-07-15 22:43:20.729553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.977 [2024-07-15 22:43:20.729566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420
00:26:56.977 qpair failed and we were unable to recover it.
00:26:56.977 [2024-07-15 22:43:20.729748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.977 [2024-07-15 22:43:20.729761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420
00:26:56.977 qpair failed and we were unable to recover it.
00:26:56.977 [2024-07-15 22:43:20.730030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.977 [2024-07-15 22:43:20.730044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420
00:26:56.977 qpair failed and we were unable to recover it.
00:26:56.977 [2024-07-15 22:43:20.730271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.977 [2024-07-15 22:43:20.730286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420
00:26:56.977 qpair failed and we were unable to recover it.
00:26:56.977 [2024-07-15 22:43:20.730483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.977 [2024-07-15 22:43:20.730497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420
00:26:56.977 qpair failed and we were unable to recover it.
00:26:56.977 [2024-07-15 22:43:20.730629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.977 [2024-07-15 22:43:20.730643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420
00:26:56.977 qpair failed and we were unable to recover it.
00:26:56.977 [2024-07-15 22:43:20.730905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.977 [2024-07-15 22:43:20.730919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420
00:26:56.977 qpair failed and we were unable to recover it.
00:26:56.977 [2024-07-15 22:43:20.731121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.977 [2024-07-15 22:43:20.731135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420
00:26:56.977 qpair failed and we were unable to recover it.
00:26:56.977 [2024-07-15 22:43:20.731287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.977 [2024-07-15 22:43:20.731302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420
00:26:56.977 qpair failed and we were unable to recover it.
00:26:56.977 [2024-07-15 22:43:20.731431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.977 [2024-07-15 22:43:20.731444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420
00:26:56.977 qpair failed and we were unable to recover it.
00:26:56.977 [2024-07-15 22:43:20.731644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.977 [2024-07-15 22:43:20.731658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420
00:26:56.977 qpair failed and we were unable to recover it.
00:26:56.977 [2024-07-15 22:43:20.731843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.977 [2024-07-15 22:43:20.731858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420
00:26:56.977 qpair failed and we were unable to recover it.
00:26:56.977 [2024-07-15 22:43:20.732042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.978 [2024-07-15 22:43:20.732055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420
00:26:56.978 qpair failed and we were unable to recover it.
00:26:56.978 [2024-07-15 22:43:20.732292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.978 [2024-07-15 22:43:20.732319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:56.978 qpair failed and we were unable to recover it.
00:26:56.978 [2024-07-15 22:43:20.732501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.978 [2024-07-15 22:43:20.732513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:56.978 qpair failed and we were unable to recover it.
00:26:56.978 [2024-07-15 22:43:20.732764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.978 [2024-07-15 22:43:20.732795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:56.978 qpair failed and we were unable to recover it.
00:26:56.978 [2024-07-15 22:43:20.733012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.978 [2024-07-15 22:43:20.733042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:56.978 qpair failed and we were unable to recover it.
00:26:56.978 [2024-07-15 22:43:20.733240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.978 [2024-07-15 22:43:20.733272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:56.978 qpair failed and we were unable to recover it.
00:26:56.978 [2024-07-15 22:43:20.733558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.978 [2024-07-15 22:43:20.733568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:56.978 qpair failed and we were unable to recover it.
00:26:56.978 [2024-07-15 22:43:20.733675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.978 [2024-07-15 22:43:20.733685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:56.978 qpair failed and we were unable to recover it.
00:26:56.978 [2024-07-15 22:43:20.733823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.978 [2024-07-15 22:43:20.733832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:56.978 qpair failed and we were unable to recover it.
00:26:56.978 [2024-07-15 22:43:20.734044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.978 [2024-07-15 22:43:20.734053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:56.978 qpair failed and we were unable to recover it.
00:26:56.978 [2024-07-15 22:43:20.734266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.978 [2024-07-15 22:43:20.734298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:56.978 qpair failed and we were unable to recover it.
00:26:56.978 [2024-07-15 22:43:20.734587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.978 [2024-07-15 22:43:20.734617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:56.978 qpair failed and we were unable to recover it.
00:26:56.978 [2024-07-15 22:43:20.734746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.978 [2024-07-15 22:43:20.734776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:56.978 qpair failed and we were unable to recover it.
00:26:56.978 [2024-07-15 22:43:20.734927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.978 [2024-07-15 22:43:20.734956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:56.978 qpair failed and we were unable to recover it.
00:26:56.978 [2024-07-15 22:43:20.735196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.978 [2024-07-15 22:43:20.735244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:56.978 qpair failed and we were unable to recover it.
00:26:56.978 [2024-07-15 22:43:20.735432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.978 [2024-07-15 22:43:20.735442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:56.978 qpair failed and we were unable to recover it.
00:26:56.978 [2024-07-15 22:43:20.735634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.978 [2024-07-15 22:43:20.735644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:56.978 qpair failed and we were unable to recover it.
00:26:56.978 [2024-07-15 22:43:20.735780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.978 [2024-07-15 22:43:20.735790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:56.978 qpair failed and we were unable to recover it.
00:26:56.978 [2024-07-15 22:43:20.735899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.978 [2024-07-15 22:43:20.735909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.978 qpair failed and we were unable to recover it. 00:26:56.978 [2024-07-15 22:43:20.736036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.978 [2024-07-15 22:43:20.736046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.978 qpair failed and we were unable to recover it. 00:26:56.978 [2024-07-15 22:43:20.736220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.978 [2024-07-15 22:43:20.736234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.978 qpair failed and we were unable to recover it. 00:26:56.978 [2024-07-15 22:43:20.736489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.978 [2024-07-15 22:43:20.736519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.978 qpair failed and we were unable to recover it. 00:26:56.978 [2024-07-15 22:43:20.736755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.978 [2024-07-15 22:43:20.736785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.978 qpair failed and we were unable to recover it. 00:26:56.978 [2024-07-15 22:43:20.736953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.978 [2024-07-15 22:43:20.736983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.978 qpair failed and we were unable to recover it. 00:26:56.978 [2024-07-15 22:43:20.737200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.978 [2024-07-15 22:43:20.737209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.978 qpair failed and we were unable to recover it. 00:26:56.978 [2024-07-15 22:43:20.737403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.978 [2024-07-15 22:43:20.737413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.978 qpair failed and we were unable to recover it. 00:26:56.978 [2024-07-15 22:43:20.737620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.978 [2024-07-15 22:43:20.737650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.978 qpair failed and we were unable to recover it. 00:26:56.978 [2024-07-15 22:43:20.737962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.978 [2024-07-15 22:43:20.737992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.978 qpair failed and we were unable to recover it. 
00:26:56.978 [2024-07-15 22:43:20.738307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.978 [2024-07-15 22:43:20.738339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.978 qpair failed and we were unable to recover it. 00:26:56.978 [2024-07-15 22:43:20.738504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.978 [2024-07-15 22:43:20.738534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.978 qpair failed and we were unable to recover it. 00:26:56.978 [2024-07-15 22:43:20.738715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.978 [2024-07-15 22:43:20.738745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.978 qpair failed and we were unable to recover it. 00:26:56.978 [2024-07-15 22:43:20.738948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.978 [2024-07-15 22:43:20.738977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.978 qpair failed and we were unable to recover it. 00:26:56.978 [2024-07-15 22:43:20.739261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.978 [2024-07-15 22:43:20.739292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.978 qpair failed and we were unable to recover it. 00:26:56.978 [2024-07-15 22:43:20.739601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.978 [2024-07-15 22:43:20.739630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.978 qpair failed and we were unable to recover it. 00:26:56.978 [2024-07-15 22:43:20.739923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.978 [2024-07-15 22:43:20.739953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.978 qpair failed and we were unable to recover it. 00:26:56.978 [2024-07-15 22:43:20.740176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.978 [2024-07-15 22:43:20.740205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.978 qpair failed and we were unable to recover it. 00:26:56.978 [2024-07-15 22:43:20.740507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.978 [2024-07-15 22:43:20.740537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.978 qpair failed and we were unable to recover it. 00:26:56.978 [2024-07-15 22:43:20.740701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.978 [2024-07-15 22:43:20.740731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.978 qpair failed and we were unable to recover it. 
00:26:56.978 [2024-07-15 22:43:20.740961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.978 [2024-07-15 22:43:20.740991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.978 qpair failed and we were unable to recover it. 00:26:56.978 [2024-07-15 22:43:20.741166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.978 [2024-07-15 22:43:20.741196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.978 qpair failed and we were unable to recover it. 00:26:56.978 [2024-07-15 22:43:20.741501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.978 [2024-07-15 22:43:20.741512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.978 qpair failed and we were unable to recover it. 00:26:56.978 [2024-07-15 22:43:20.741689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.979 [2024-07-15 22:43:20.741699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.979 qpair failed and we were unable to recover it. 00:26:56.979 [2024-07-15 22:43:20.741888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.979 [2024-07-15 22:43:20.741898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.979 qpair failed and we were unable to recover it. 00:26:56.979 [2024-07-15 22:43:20.742070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.979 [2024-07-15 22:43:20.742080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.979 qpair failed and we were unable to recover it. 00:26:56.979 [2024-07-15 22:43:20.742264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.979 [2024-07-15 22:43:20.742290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.979 qpair failed and we were unable to recover it. 00:26:56.979 [2024-07-15 22:43:20.742422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.979 [2024-07-15 22:43:20.742450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.979 qpair failed and we were unable to recover it. 00:26:56.979 [2024-07-15 22:43:20.742629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.979 [2024-07-15 22:43:20.742659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.979 qpair failed and we were unable to recover it. 00:26:56.979 [2024-07-15 22:43:20.742907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.979 [2024-07-15 22:43:20.742937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.979 qpair failed and we were unable to recover it. 
00:26:56.979 [2024-07-15 22:43:20.743154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.979 [2024-07-15 22:43:20.743165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.979 qpair failed and we were unable to recover it. 00:26:56.979 [2024-07-15 22:43:20.743352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.979 [2024-07-15 22:43:20.743362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.979 qpair failed and we were unable to recover it. 00:26:56.979 [2024-07-15 22:43:20.743522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.979 [2024-07-15 22:43:20.743532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.979 qpair failed and we were unable to recover it. 00:26:56.979 [2024-07-15 22:43:20.743712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.979 [2024-07-15 22:43:20.743742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.979 qpair failed and we were unable to recover it. 00:26:56.979 [2024-07-15 22:43:20.743857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.979 [2024-07-15 22:43:20.743887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.979 qpair failed and we were unable to recover it. 00:26:56.979 [2024-07-15 22:43:20.744115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.979 [2024-07-15 22:43:20.744145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.979 qpair failed and we were unable to recover it. 00:26:56.979 [2024-07-15 22:43:20.744454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.979 [2024-07-15 22:43:20.744485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.979 qpair failed and we were unable to recover it. 00:26:56.979 [2024-07-15 22:43:20.744711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.979 [2024-07-15 22:43:20.744741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.979 qpair failed and we were unable to recover it. 00:26:56.979 [2024-07-15 22:43:20.745011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.979 [2024-07-15 22:43:20.745041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.979 qpair failed and we were unable to recover it. 00:26:56.979 [2024-07-15 22:43:20.745242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.979 [2024-07-15 22:43:20.745274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.979 qpair failed and we were unable to recover it. 
00:26:56.979 [2024-07-15 22:43:20.745489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.979 [2024-07-15 22:43:20.745499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.979 qpair failed and we were unable to recover it. 00:26:56.979 [2024-07-15 22:43:20.745786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.979 [2024-07-15 22:43:20.745796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.979 qpair failed and we were unable to recover it. 00:26:56.979 [2024-07-15 22:43:20.745975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.979 [2024-07-15 22:43:20.745985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.979 qpair failed and we were unable to recover it. 00:26:56.979 [2024-07-15 22:43:20.746113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.979 [2024-07-15 22:43:20.746123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.979 qpair failed and we were unable to recover it. 00:26:56.979 [2024-07-15 22:43:20.746301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.979 [2024-07-15 22:43:20.746312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.979 qpair failed and we were unable to recover it. 00:26:56.979 [2024-07-15 22:43:20.746525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.979 [2024-07-15 22:43:20.746535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.979 qpair failed and we were unable to recover it. 00:26:56.979 [2024-07-15 22:43:20.746654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.979 [2024-07-15 22:43:20.746664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.979 qpair failed and we were unable to recover it. 00:26:56.979 [2024-07-15 22:43:20.746787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.979 [2024-07-15 22:43:20.746797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.979 qpair failed and we were unable to recover it. 00:26:56.979 [2024-07-15 22:43:20.746914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.979 [2024-07-15 22:43:20.746924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.979 qpair failed and we were unable to recover it. 00:26:56.979 [2024-07-15 22:43:20.747049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.979 [2024-07-15 22:43:20.747060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.979 qpair failed and we were unable to recover it. 
00:26:56.979 [2024-07-15 22:43:20.747234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.979 [2024-07-15 22:43:20.747244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.979 qpair failed and we were unable to recover it. 00:26:56.979 [2024-07-15 22:43:20.747354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.979 [2024-07-15 22:43:20.747364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.979 qpair failed and we were unable to recover it. 00:26:56.979 [2024-07-15 22:43:20.747520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.979 [2024-07-15 22:43:20.747530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.979 qpair failed and we were unable to recover it. 00:26:56.979 [2024-07-15 22:43:20.747716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.979 [2024-07-15 22:43:20.747726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.979 qpair failed and we were unable to recover it. 00:26:56.979 [2024-07-15 22:43:20.747855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.979 [2024-07-15 22:43:20.747865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.979 qpair failed and we were unable to recover it. 00:26:56.979 [2024-07-15 22:43:20.748037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.979 [2024-07-15 22:43:20.748047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.979 qpair failed and we were unable to recover it. 00:26:56.979 [2024-07-15 22:43:20.748241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.979 [2024-07-15 22:43:20.748272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.979 qpair failed and we were unable to recover it. 00:26:56.979 [2024-07-15 22:43:20.748485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.979 [2024-07-15 22:43:20.748515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.979 qpair failed and we were unable to recover it. 00:26:56.979 [2024-07-15 22:43:20.748747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.979 [2024-07-15 22:43:20.748776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.979 qpair failed and we were unable to recover it. 00:26:56.979 [2024-07-15 22:43:20.749072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.979 [2024-07-15 22:43:20.749102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.979 qpair failed and we were unable to recover it. 
00:26:56.979 [2024-07-15 22:43:20.749247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.979 [2024-07-15 22:43:20.749257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.979 qpair failed and we were unable to recover it. 00:26:56.979 [2024-07-15 22:43:20.749468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.979 [2024-07-15 22:43:20.749478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.979 qpair failed and we were unable to recover it. 00:26:56.979 [2024-07-15 22:43:20.749619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.979 [2024-07-15 22:43:20.749629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.979 qpair failed and we were unable to recover it. 00:26:56.979 [2024-07-15 22:43:20.749904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.979 [2024-07-15 22:43:20.749915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.979 qpair failed and we were unable to recover it. 00:26:56.979 [2024-07-15 22:43:20.750026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.980 [2024-07-15 22:43:20.750036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.980 qpair failed and we were unable to recover it. 00:26:56.980 [2024-07-15 22:43:20.750243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.980 [2024-07-15 22:43:20.750269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.980 qpair failed and we were unable to recover it. 00:26:56.980 [2024-07-15 22:43:20.750494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.980 [2024-07-15 22:43:20.750504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.980 qpair failed and we were unable to recover it. 00:26:56.980 [2024-07-15 22:43:20.750631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.980 [2024-07-15 22:43:20.750641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.980 qpair failed and we were unable to recover it. 00:26:56.980 [2024-07-15 22:43:20.750886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.980 [2024-07-15 22:43:20.750895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.980 qpair failed and we were unable to recover it. 00:26:56.980 [2024-07-15 22:43:20.751135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.980 [2024-07-15 22:43:20.751144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.980 qpair failed and we were unable to recover it. 
00:26:56.980 [2024-07-15 22:43:20.751378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.980 [2024-07-15 22:43:20.751409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.980 qpair failed and we were unable to recover it. 00:26:56.980 [2024-07-15 22:43:20.751667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.980 [2024-07-15 22:43:20.751696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.980 qpair failed and we were unable to recover it. 00:26:56.980 [2024-07-15 22:43:20.751914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.980 [2024-07-15 22:43:20.751944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.980 qpair failed and we were unable to recover it. 00:26:56.980 [2024-07-15 22:43:20.752173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.980 [2024-07-15 22:43:20.752182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.980 qpair failed and we were unable to recover it. 00:26:56.980 [2024-07-15 22:43:20.752368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.980 [2024-07-15 22:43:20.752378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.980 qpair failed and we were unable to recover it. 00:26:56.980 [2024-07-15 22:43:20.752569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.980 [2024-07-15 22:43:20.752579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.980 qpair failed and we were unable to recover it. 00:26:56.980 [2024-07-15 22:43:20.752709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.980 [2024-07-15 22:43:20.752719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.980 qpair failed and we were unable to recover it. 00:26:56.980 [2024-07-15 22:43:20.752852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.980 [2024-07-15 22:43:20.752862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.980 qpair failed and we were unable to recover it. 00:26:56.980 [2024-07-15 22:43:20.753105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.980 [2024-07-15 22:43:20.753135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.980 qpair failed and we were unable to recover it. 00:26:56.980 [2024-07-15 22:43:20.753416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.980 [2024-07-15 22:43:20.753447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.980 qpair failed and we were unable to recover it. 
00:26:56.980 [2024-07-15 22:43:20.753754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.980 [2024-07-15 22:43:20.753784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.980 qpair failed and we were unable to recover it. 00:26:56.980 [2024-07-15 22:43:20.753969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.980 [2024-07-15 22:43:20.753999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.980 qpair failed and we were unable to recover it. 00:26:56.980 [2024-07-15 22:43:20.754310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.980 [2024-07-15 22:43:20.754340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.980 qpair failed and we were unable to recover it. 00:26:56.980 [2024-07-15 22:43:20.754520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.980 [2024-07-15 22:43:20.754550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.980 qpair failed and we were unable to recover it. 00:26:56.980 [2024-07-15 22:43:20.754781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.980 [2024-07-15 22:43:20.754811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.980 qpair failed and we were unable to recover it. 00:26:56.980 [2024-07-15 22:43:20.755068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.980 [2024-07-15 22:43:20.755097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.980 qpair failed and we were unable to recover it. 00:26:56.980 [2024-07-15 22:43:20.755316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.980 [2024-07-15 22:43:20.755346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.980 qpair failed and we were unable to recover it. 00:26:56.980 [2024-07-15 22:43:20.755598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.980 [2024-07-15 22:43:20.755628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.980 qpair failed and we were unable to recover it. 00:26:56.980 [2024-07-15 22:43:20.755858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.980 [2024-07-15 22:43:20.755888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.980 qpair failed and we were unable to recover it. 00:26:56.980 [2024-07-15 22:43:20.756105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.980 [2024-07-15 22:43:20.756135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.980 qpair failed and we were unable to recover it. 
00:26:56.980 [2024-07-15 22:43:20.756313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.980 [2024-07-15 22:43:20.756323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.980 qpair failed and we were unable to recover it. 00:26:56.980 [2024-07-15 22:43:20.756571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.980 [2024-07-15 22:43:20.756601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.980 qpair failed and we were unable to recover it. 00:26:56.980 [2024-07-15 22:43:20.756906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.980 [2024-07-15 22:43:20.756935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.980 qpair failed and we were unable to recover it. 00:26:56.980 [2024-07-15 22:43:20.757175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.980 [2024-07-15 22:43:20.757205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.980 qpair failed and we were unable to recover it. 00:26:56.980 [2024-07-15 22:43:20.757472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.980 [2024-07-15 22:43:20.757504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.980 qpair failed and we were unable to recover it. 00:26:56.980 [2024-07-15 22:43:20.757860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.980 [2024-07-15 22:43:20.757890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.980 qpair failed and we were unable to recover it. 00:26:56.980 [2024-07-15 22:43:20.758147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.980 [2024-07-15 22:43:20.758157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.980 qpair failed and we were unable to recover it. 00:26:56.980 [2024-07-15 22:43:20.758333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.980 [2024-07-15 22:43:20.758343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.980 qpair failed and we were unable to recover it. 00:26:56.980 [2024-07-15 22:43:20.758612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.980 [2024-07-15 22:43:20.758622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.980 qpair failed and we were unable to recover it. 00:26:56.980 [2024-07-15 22:43:20.758811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.980 [2024-07-15 22:43:20.758841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.980 qpair failed and we were unable to recover it. 
00:26:56.980 [2024-07-15 22:43:20.759058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.980 [2024-07-15 22:43:20.759088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.981 qpair failed and we were unable to recover it. 00:26:56.981 [2024-07-15 22:43:20.759375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.981 [2024-07-15 22:43:20.759407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.981 qpair failed and we were unable to recover it. 00:26:56.981 [2024-07-15 22:43:20.759623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.981 [2024-07-15 22:43:20.759633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.981 qpair failed and we were unable to recover it. 00:26:56.981 [2024-07-15 22:43:20.759852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.981 [2024-07-15 22:43:20.759863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.981 qpair failed and we were unable to recover it. 00:26:56.981 [2024-07-15 22:43:20.760038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.981 [2024-07-15 22:43:20.760048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.981 qpair failed and we were unable to recover it. 00:26:56.981 [2024-07-15 22:43:20.760243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.981 [2024-07-15 22:43:20.760274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.981 qpair failed and we were unable to recover it. 00:26:56.981 [2024-07-15 22:43:20.760579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.981 [2024-07-15 22:43:20.760609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.981 qpair failed and we were unable to recover it. 00:26:56.981 [2024-07-15 22:43:20.760843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.981 [2024-07-15 22:43:20.760872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.981 qpair failed and we were unable to recover it. 00:26:56.981 [2024-07-15 22:43:20.761035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.981 [2024-07-15 22:43:20.761065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.981 qpair failed and we were unable to recover it. 00:26:56.981 [2024-07-15 22:43:20.761351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.981 [2024-07-15 22:43:20.761381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.981 qpair failed and we were unable to recover it. 
00:26:56.981 [2024-07-15 22:43:20.761545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.981 [2024-07-15 22:43:20.761554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.981 qpair failed and we were unable to recover it. 00:26:56.981 [2024-07-15 22:43:20.761692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.981 [2024-07-15 22:43:20.761702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.981 qpair failed and we were unable to recover it. 00:26:56.981 [2024-07-15 22:43:20.761831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.981 [2024-07-15 22:43:20.761841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.981 qpair failed and we were unable to recover it. 00:26:56.981 [2024-07-15 22:43:20.762017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.981 [2024-07-15 22:43:20.762059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.981 qpair failed and we were unable to recover it. 00:26:56.981 [2024-07-15 22:43:20.762342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.981 [2024-07-15 22:43:20.762372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.981 qpair failed and we were unable to recover it. 00:26:56.981 [2024-07-15 22:43:20.762603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.981 [2024-07-15 22:43:20.762632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.981 qpair failed and we were unable to recover it. 00:26:56.981 [2024-07-15 22:43:20.762870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.981 [2024-07-15 22:43:20.762900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.981 qpair failed and we were unable to recover it. 00:26:56.981 [2024-07-15 22:43:20.763065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.981 [2024-07-15 22:43:20.763075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.981 qpair failed and we were unable to recover it. 00:26:56.981 [2024-07-15 22:43:20.763353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.981 [2024-07-15 22:43:20.763384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.981 qpair failed and we were unable to recover it. 00:26:56.981 [2024-07-15 22:43:20.763570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.981 [2024-07-15 22:43:20.763600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.981 qpair failed and we were unable to recover it. 
00:26:56.981 [2024-07-15 22:43:20.763818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.981 [2024-07-15 22:43:20.763847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.981 qpair failed and we were unable to recover it. 00:26:56.981 [2024-07-15 22:43:20.764099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.981 [2024-07-15 22:43:20.764109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.981 qpair failed and we were unable to recover it. 00:26:56.981 [2024-07-15 22:43:20.764310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.981 [2024-07-15 22:43:20.764320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.981 qpair failed and we were unable to recover it. 00:26:56.981 [2024-07-15 22:43:20.764439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.981 [2024-07-15 22:43:20.764449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.981 qpair failed and we were unable to recover it. 00:26:56.981 [2024-07-15 22:43:20.764641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.981 [2024-07-15 22:43:20.764651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.981 qpair failed and we were unable to recover it. 00:26:56.981 [2024-07-15 22:43:20.764918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.981 [2024-07-15 22:43:20.764928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.981 qpair failed and we were unable to recover it. 00:26:56.981 [2024-07-15 22:43:20.765124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.981 [2024-07-15 22:43:20.765134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.981 qpair failed and we were unable to recover it. 00:26:56.981 [2024-07-15 22:43:20.765261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.981 [2024-07-15 22:43:20.765271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.981 qpair failed and we were unable to recover it. 00:26:56.981 [2024-07-15 22:43:20.765400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.981 [2024-07-15 22:43:20.765410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.981 qpair failed and we were unable to recover it. 00:26:56.981 [2024-07-15 22:43:20.765536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.981 [2024-07-15 22:43:20.765546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.981 qpair failed and we were unable to recover it. 
00:26:56.981 [2024-07-15 22:43:20.765676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.981 [2024-07-15 22:43:20.765686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.981 qpair failed and we were unable to recover it.
00:26:56.982 [... the same posix.c:1038 / nvme_tcp.c:2383 error triple repeats for every subsequent reconnect attempt from 2024-07-15 22:43:20.765857 through 22:43:20.812213 -- dozens of attempts, all against tqpair=0x7f4288000b90 (10.0.0.2:4420), all failing with errno = 111, none recovered ...]
00:26:56.987 [2024-07-15 22:43:20.812536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.987 [2024-07-15 22:43:20.812566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.987 qpair failed and we were unable to recover it. 00:26:56.987 [2024-07-15 22:43:20.812786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.987 [2024-07-15 22:43:20.812815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.987 qpair failed and we were unable to recover it. 00:26:56.987 [2024-07-15 22:43:20.812981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.987 [2024-07-15 22:43:20.813010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.987 qpair failed and we were unable to recover it. 00:26:56.987 [2024-07-15 22:43:20.813164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.987 [2024-07-15 22:43:20.813198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.987 qpair failed and we were unable to recover it. 00:26:56.987 [2024-07-15 22:43:20.813496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.987 [2024-07-15 22:43:20.813530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 00:26:56.987 qpair failed and we were unable to recover it. 00:26:56.987 [2024-07-15 22:43:20.813790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.987 [2024-07-15 22:43:20.813805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 00:26:56.987 qpair failed and we were unable to recover it. 00:26:56.987 [2024-07-15 22:43:20.814062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.987 [2024-07-15 22:43:20.814076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 00:26:56.987 qpair failed and we were unable to recover it. 00:26:56.987 [2024-07-15 22:43:20.814284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.987 [2024-07-15 22:43:20.814300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 00:26:56.987 qpair failed and we were unable to recover it. 00:26:56.987 [2024-07-15 22:43:20.814560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.987 [2024-07-15 22:43:20.814573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 00:26:56.987 qpair failed and we were unable to recover it. 00:26:56.987 [2024-07-15 22:43:20.814766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.987 [2024-07-15 22:43:20.814780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 00:26:56.987 qpair failed and we were unable to recover it. 
00:26:56.987 [2024-07-15 22:43:20.815029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.987 [2024-07-15 22:43:20.815042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 00:26:56.987 qpair failed and we were unable to recover it. 00:26:56.987 [2024-07-15 22:43:20.815293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.987 [2024-07-15 22:43:20.815307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 00:26:56.987 qpair failed and we were unable to recover it. 00:26:56.987 [2024-07-15 22:43:20.815523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.987 [2024-07-15 22:43:20.815537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 00:26:56.987 qpair failed and we were unable to recover it. 00:26:56.987 [2024-07-15 22:43:20.815753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.987 [2024-07-15 22:43:20.815767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 00:26:56.987 qpair failed and we were unable to recover it. 00:26:56.987 [2024-07-15 22:43:20.816049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.987 [2024-07-15 22:43:20.816062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 00:26:56.987 qpair failed and we were unable to recover it. 00:26:56.987 [2024-07-15 22:43:20.816205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.987 [2024-07-15 22:43:20.816218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 00:26:56.987 qpair failed and we were unable to recover it. 00:26:56.987 [2024-07-15 22:43:20.816369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.987 [2024-07-15 22:43:20.816383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 00:26:56.987 qpair failed and we were unable to recover it. 00:26:56.987 [2024-07-15 22:43:20.816529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.987 [2024-07-15 22:43:20.816543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 00:26:56.987 qpair failed and we were unable to recover it. 00:26:56.987 [2024-07-15 22:43:20.816685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.987 [2024-07-15 22:43:20.816698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 00:26:56.987 qpair failed and we were unable to recover it. 00:26:56.987 [2024-07-15 22:43:20.816890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.987 [2024-07-15 22:43:20.816920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 00:26:56.987 qpair failed and we were unable to recover it. 
00:26:56.987 [2024-07-15 22:43:20.817152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.987 [2024-07-15 22:43:20.817182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 00:26:56.987 qpair failed and we were unable to recover it. 00:26:56.987 [2024-07-15 22:43:20.817371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.987 [2024-07-15 22:43:20.817404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 00:26:56.987 qpair failed and we were unable to recover it. 00:26:56.987 [2024-07-15 22:43:20.817649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.987 [2024-07-15 22:43:20.817663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 00:26:56.987 qpair failed and we were unable to recover it. 00:26:56.987 [2024-07-15 22:43:20.817852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.987 [2024-07-15 22:43:20.817866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 00:26:56.987 qpair failed and we were unable to recover it. 00:26:56.987 [2024-07-15 22:43:20.818052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.987 [2024-07-15 22:43:20.818065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 00:26:56.987 qpair failed and we were unable to recover it. 00:26:56.987 [2024-07-15 22:43:20.818319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.987 [2024-07-15 22:43:20.818350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 00:26:56.987 qpair failed and we were unable to recover it. 00:26:56.987 [2024-07-15 22:43:20.818633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.987 [2024-07-15 22:43:20.818662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 00:26:56.987 qpair failed and we were unable to recover it. 00:26:56.987 [2024-07-15 22:43:20.818823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.987 [2024-07-15 22:43:20.818852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 00:26:56.987 qpair failed and we were unable to recover it. 00:26:56.987 [2024-07-15 22:43:20.819153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.987 [2024-07-15 22:43:20.819182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 00:26:56.987 qpair failed and we were unable to recover it. 00:26:56.987 [2024-07-15 22:43:20.819357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.987 [2024-07-15 22:43:20.819387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 00:26:56.987 qpair failed and we were unable to recover it. 
00:26:56.987 [2024-07-15 22:43:20.819632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.987 [2024-07-15 22:43:20.819667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:56.987 qpair failed and we were unable to recover it. 00:26:56.987 [2024-07-15 22:43:20.819889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.987 [2024-07-15 22:43:20.819901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.987 qpair failed and we were unable to recover it. 00:26:56.987 [2024-07-15 22:43:20.820040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.988 [2024-07-15 22:43:20.820050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.988 qpair failed and we were unable to recover it. 00:26:56.988 [2024-07-15 22:43:20.820230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.988 [2024-07-15 22:43:20.820268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.988 qpair failed and we were unable to recover it. 00:26:56.988 [2024-07-15 22:43:20.820543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.988 [2024-07-15 22:43:20.820573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.988 qpair failed and we were unable to recover it. 00:26:56.988 [2024-07-15 22:43:20.820740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.988 [2024-07-15 22:43:20.820769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.988 qpair failed and we were unable to recover it. 00:26:56.988 [2024-07-15 22:43:20.821023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.988 [2024-07-15 22:43:20.821053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.988 qpair failed and we were unable to recover it. 00:26:56.988 [2024-07-15 22:43:20.821327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.988 [2024-07-15 22:43:20.821338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.988 qpair failed and we were unable to recover it. 00:26:56.988 [2024-07-15 22:43:20.821543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.988 [2024-07-15 22:43:20.821553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.988 qpair failed and we were unable to recover it. 00:26:56.988 [2024-07-15 22:43:20.821822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.988 [2024-07-15 22:43:20.821832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.988 qpair failed and we were unable to recover it. 
00:26:56.988 [2024-07-15 22:43:20.822098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.988 [2024-07-15 22:43:20.822108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.988 qpair failed and we were unable to recover it. 00:26:56.988 [2024-07-15 22:43:20.822271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.988 [2024-07-15 22:43:20.822281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.988 qpair failed and we were unable to recover it. 00:26:56.988 [2024-07-15 22:43:20.822464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.988 [2024-07-15 22:43:20.822493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.988 qpair failed and we were unable to recover it. 00:26:56.988 [2024-07-15 22:43:20.822797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.988 [2024-07-15 22:43:20.822832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.988 qpair failed and we were unable to recover it. 00:26:56.988 [2024-07-15 22:43:20.823051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.988 [2024-07-15 22:43:20.823080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.988 qpair failed and we were unable to recover it. 00:26:56.988 [2024-07-15 22:43:20.823242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.988 [2024-07-15 22:43:20.823272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.988 qpair failed and we were unable to recover it. 00:26:56.988 [2024-07-15 22:43:20.823581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.988 [2024-07-15 22:43:20.823610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.988 qpair failed and we were unable to recover it. 00:26:56.988 [2024-07-15 22:43:20.823919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.988 [2024-07-15 22:43:20.823948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.988 qpair failed and we were unable to recover it. 00:26:56.988 [2024-07-15 22:43:20.824257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.988 [2024-07-15 22:43:20.824295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.988 qpair failed and we were unable to recover it. 00:26:56.988 [2024-07-15 22:43:20.824561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.988 [2024-07-15 22:43:20.824571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.988 qpair failed and we were unable to recover it. 
00:26:56.988 [2024-07-15 22:43:20.824750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.988 [2024-07-15 22:43:20.824760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.988 qpair failed and we were unable to recover it. 00:26:56.988 [2024-07-15 22:43:20.824960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.988 [2024-07-15 22:43:20.824970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.988 qpair failed and we were unable to recover it. 00:26:56.988 [2024-07-15 22:43:20.825127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.988 [2024-07-15 22:43:20.825137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.988 qpair failed and we were unable to recover it. 00:26:56.988 [2024-07-15 22:43:20.825273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.988 [2024-07-15 22:43:20.825283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.988 qpair failed and we were unable to recover it. 00:26:56.988 [2024-07-15 22:43:20.825499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.988 [2024-07-15 22:43:20.825509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.988 qpair failed and we were unable to recover it. 00:26:56.988 [2024-07-15 22:43:20.825738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.988 [2024-07-15 22:43:20.825748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.988 qpair failed and we were unable to recover it. 00:26:56.988 [2024-07-15 22:43:20.825873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.988 [2024-07-15 22:43:20.825882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.988 qpair failed and we were unable to recover it. 00:26:56.988 [2024-07-15 22:43:20.826062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.988 [2024-07-15 22:43:20.826072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.988 qpair failed and we were unable to recover it. 00:26:56.988 [2024-07-15 22:43:20.826314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.988 [2024-07-15 22:43:20.826325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.988 qpair failed and we were unable to recover it. 00:26:56.988 [2024-07-15 22:43:20.826585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.988 [2024-07-15 22:43:20.826595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.988 qpair failed and we were unable to recover it. 
00:26:56.988 [2024-07-15 22:43:20.826810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.988 [2024-07-15 22:43:20.826820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.988 qpair failed and we were unable to recover it. 00:26:56.988 [2024-07-15 22:43:20.827011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.988 [2024-07-15 22:43:20.827021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.988 qpair failed and we were unable to recover it. 00:26:56.988 [2024-07-15 22:43:20.827218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.988 [2024-07-15 22:43:20.827257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.988 qpair failed and we were unable to recover it. 00:26:56.988 [2024-07-15 22:43:20.827426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.988 [2024-07-15 22:43:20.827455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.988 qpair failed and we were unable to recover it. 00:26:56.988 [2024-07-15 22:43:20.827688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.988 [2024-07-15 22:43:20.827717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.988 qpair failed and we were unable to recover it. 00:26:56.988 [2024-07-15 22:43:20.827974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.988 [2024-07-15 22:43:20.828003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.988 qpair failed and we were unable to recover it. 00:26:56.988 [2024-07-15 22:43:20.828242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.988 [2024-07-15 22:43:20.828272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.988 qpair failed and we were unable to recover it. 00:26:56.988 [2024-07-15 22:43:20.828577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.988 [2024-07-15 22:43:20.828606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.988 qpair failed and we were unable to recover it. 00:26:56.988 [2024-07-15 22:43:20.828826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.988 [2024-07-15 22:43:20.828856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.988 qpair failed and we were unable to recover it. 00:26:56.988 [2024-07-15 22:43:20.829110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.988 [2024-07-15 22:43:20.829140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:56.988 qpair failed and we were unable to recover it. 
00:26:56.988 [2024-07-15 22:43:20.829366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.988 [2024-07-15 22:43:20.829408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:56.988 qpair failed and we were unable to recover it. 00:26:56.988 [2024-07-15 22:43:20.829712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.988 [2024-07-15 22:43:20.829744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:56.988 qpair failed and we were unable to recover it. 00:26:56.988 [2024-07-15 22:43:20.829922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.988 [2024-07-15 22:43:20.829952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:56.988 qpair failed and we were unable to recover it. 00:26:56.989 [2024-07-15 22:43:20.830136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.989 [2024-07-15 22:43:20.830166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:56.989 qpair failed and we were unable to recover it. 00:26:56.989 [2024-07-15 22:43:20.830433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.989 [2024-07-15 22:43:20.830448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:56.989 qpair failed and we were unable to recover it. 00:26:56.989 [2024-07-15 22:43:20.830647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.989 [2024-07-15 22:43:20.830661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:56.989 qpair failed and we were unable to recover it. 00:26:56.989 [2024-07-15 22:43:20.830936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.989 [2024-07-15 22:43:20.830950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:56.989 qpair failed and we were unable to recover it. 00:26:56.989 [2024-07-15 22:43:20.831233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.989 [2024-07-15 22:43:20.831247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:56.989 qpair failed and we were unable to recover it. 00:26:56.989 [2024-07-15 22:43:20.831448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.989 [2024-07-15 22:43:20.831462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:56.989 qpair failed and we were unable to recover it. 00:26:56.989 [2024-07-15 22:43:20.831730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.989 [2024-07-15 22:43:20.831744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:56.989 qpair failed and we were unable to recover it. 
00:26:56.989 [2024-07-15 22:43:20.831937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.989 [2024-07-15 22:43:20.831951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:56.989 qpair failed and we were unable to recover it. 00:26:56.989 [2024-07-15 22:43:20.832235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.989 [2024-07-15 22:43:20.832249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:56.989 qpair failed and we were unable to recover it. 00:26:56.989 [2024-07-15 22:43:20.832497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.989 [2024-07-15 22:43:20.832511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:56.989 qpair failed and we were unable to recover it. 00:26:56.989 [2024-07-15 22:43:20.832759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.989 [2024-07-15 22:43:20.832772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:56.989 qpair failed and we were unable to recover it. 00:26:56.989 [2024-07-15 22:43:20.832981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.989 [2024-07-15 22:43:20.832995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:56.989 qpair failed and we were unable to recover it. 00:26:56.989 [2024-07-15 22:43:20.833144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.989 [2024-07-15 22:43:20.833158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:56.989 qpair failed and we were unable to recover it. 00:26:56.989 [2024-07-15 22:43:20.833262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.989 [2024-07-15 22:43:20.833277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:56.989 qpair failed and we were unable to recover it. 00:26:56.989 [2024-07-15 22:43:20.833527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.989 [2024-07-15 22:43:20.833540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:56.989 qpair failed and we were unable to recover it. 00:26:56.989 [2024-07-15 22:43:20.833664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.989 [2024-07-15 22:43:20.833678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:56.989 qpair failed and we were unable to recover it. 00:26:56.989 [2024-07-15 22:43:20.833864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.989 [2024-07-15 22:43:20.833878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:56.989 qpair failed and we were unable to recover it. 
00:26:56.989 [2024-07-15 22:43:20.834012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.989 [2024-07-15 22:43:20.834025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:56.989 qpair failed and we were unable to recover it. 00:26:56.989 [2024-07-15 22:43:20.834189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.989 [2024-07-15 22:43:20.834203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:56.989 qpair failed and we were unable to recover it. 00:26:56.989 [2024-07-15 22:43:20.834518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.989 [2024-07-15 22:43:20.834549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:56.989 qpair failed and we were unable to recover it. 00:26:56.989 [2024-07-15 22:43:20.834881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.989 [2024-07-15 22:43:20.834911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:56.989 qpair failed and we were unable to recover it. 00:26:56.989 [2024-07-15 22:43:20.835076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.989 [2024-07-15 22:43:20.835106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:56.989 qpair failed and we were unable to recover it. 00:26:56.989 [2024-07-15 22:43:20.835342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.989 [2024-07-15 22:43:20.835373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:56.989 qpair failed and we were unable to recover it. 00:26:56.989 [2024-07-15 22:43:20.835561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.989 [2024-07-15 22:43:20.835591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:56.989 qpair failed and we were unable to recover it. 00:26:56.989 [2024-07-15 22:43:20.835758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.989 [2024-07-15 22:43:20.835776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:56.989 qpair failed and we were unable to recover it. 00:26:56.989 [2024-07-15 22:43:20.836035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.989 [2024-07-15 22:43:20.836066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:56.989 qpair failed and we were unable to recover it. 00:26:56.989 [2024-07-15 22:43:20.836324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.989 [2024-07-15 22:43:20.836355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:56.989 qpair failed and we were unable to recover it. 
00:26:56.989 [2024-07-15 22:43:20.836535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.989 [2024-07-15 22:43:20.836566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:56.989 qpair failed and we were unable to recover it. 00:26:56.989 [2024-07-15 22:43:20.836842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.990 [2024-07-15 22:43:20.836872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:56.990 qpair failed and we were unable to recover it. 00:26:56.990 [2024-07-15 22:43:20.837177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.990 [2024-07-15 22:43:20.837207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:56.990 qpair failed and we were unable to recover it. 00:26:56.990 [2024-07-15 22:43:20.837447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.990 [2024-07-15 22:43:20.837478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:56.990 qpair failed and we were unable to recover it. 00:26:56.990 [2024-07-15 22:43:20.837638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.990 [2024-07-15 22:43:20.837652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:56.990 qpair failed and we were unable to recover it. 00:26:56.990 [2024-07-15 22:43:20.837745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.990 [2024-07-15 22:43:20.837759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:56.990 qpair failed and we were unable to recover it. 00:26:56.990 [2024-07-15 22:43:20.838032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.990 [2024-07-15 22:43:20.838046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:56.990 qpair failed and we were unable to recover it. 00:26:56.990 [2024-07-15 22:43:20.838251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.990 [2024-07-15 22:43:20.838266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:56.990 qpair failed and we were unable to recover it. 00:26:56.990 [2024-07-15 22:43:20.838411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.990 [2024-07-15 22:43:20.838424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:56.990 qpair failed and we were unable to recover it. 00:26:56.990 [2024-07-15 22:43:20.838577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.990 [2024-07-15 22:43:20.838591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:56.990 qpair failed and we were unable to recover it. 
00:26:56.990 [2024-07-15 22:43:20.838773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.990 [2024-07-15 22:43:20.838787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:56.990 qpair failed and we were unable to recover it. 00:26:56.990 [2024-07-15 22:43:20.838983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.990 [2024-07-15 22:43:20.838997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:56.990 qpair failed and we were unable to recover it. 00:26:56.990 [2024-07-15 22:43:20.839202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.990 [2024-07-15 22:43:20.839239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:56.990 qpair failed and we were unable to recover it. 00:26:56.990 [2024-07-15 22:43:20.839420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.990 [2024-07-15 22:43:20.839450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:56.990 qpair failed and we were unable to recover it. 00:26:56.990 [2024-07-15 22:43:20.839672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.990 [2024-07-15 22:43:20.839702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:56.990 qpair failed and we were unable to recover it. 00:26:56.990 [2024-07-15 22:43:20.840014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.990 [2024-07-15 22:43:20.840044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:56.990 qpair failed and we were unable to recover it. 00:26:56.990 [2024-07-15 22:43:20.840331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.990 [2024-07-15 22:43:20.840361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:56.990 qpair failed and we were unable to recover it. 00:26:56.990 [2024-07-15 22:43:20.840608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.990 [2024-07-15 22:43:20.840638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:56.990 qpair failed and we were unable to recover it. 00:26:56.990 [2024-07-15 22:43:20.840888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.990 [2024-07-15 22:43:20.840918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:56.990 qpair failed and we were unable to recover it. 00:26:56.990 [2024-07-15 22:43:20.841156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.990 [2024-07-15 22:43:20.841186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:56.990 qpair failed and we were unable to recover it. 
00:26:56.990 [2024-07-15 22:43:20.841323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.990 [2024-07-15 22:43:20.841353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:56.990 qpair failed and we were unable to recover it. 00:26:56.990 [2024-07-15 22:43:20.841565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.990 [2024-07-15 22:43:20.841579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:56.990 qpair failed and we were unable to recover it. 00:26:56.990 [2024-07-15 22:43:20.841674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.990 [2024-07-15 22:43:20.841688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:56.990 qpair failed and we were unable to recover it. 00:26:56.990 [2024-07-15 22:43:20.841889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.990 [2024-07-15 22:43:20.841902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:56.990 qpair failed and we were unable to recover it. 00:26:56.990 [2024-07-15 22:43:20.842122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.990 [2024-07-15 22:43:20.842138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:56.990 qpair failed and we were unable to recover it. 00:26:56.990 [2024-07-15 22:43:20.842321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.990 [2024-07-15 22:43:20.842336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:56.990 qpair failed and we were unable to recover it. 00:26:56.990 [2024-07-15 22:43:20.842538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.990 [2024-07-15 22:43:20.842567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:56.990 qpair failed and we were unable to recover it. 00:26:56.990 [2024-07-15 22:43:20.842799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.990 [2024-07-15 22:43:20.842830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:56.990 qpair failed and we were unable to recover it. 00:26:56.990 [2024-07-15 22:43:20.843164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.990 [2024-07-15 22:43:20.843194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:56.990 qpair failed and we were unable to recover it. 00:26:56.990 [2024-07-15 22:43:20.843375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:56.990 [2024-07-15 22:43:20.843406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:56.990 qpair failed and we were unable to recover it. 
00:26:56.990 [2024-07-15 22:43:20.843647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.990 [2024-07-15 22:43:20.843677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420
00:26:56.990 qpair failed and we were unable to recover it.
00:26:56.990 [2024-07-15 22:43:20.843835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.990 [2024-07-15 22:43:20.843865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420
00:26:56.990 qpair failed and we were unable to recover it.
00:26:56.990 [2024-07-15 22:43:20.844102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.990 [2024-07-15 22:43:20.844132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420
00:26:56.990 qpair failed and we were unable to recover it.
00:26:56.990 [2024-07-15 22:43:20.844356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.990 [2024-07-15 22:43:20.844370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420
00:26:56.990 qpair failed and we were unable to recover it.
00:26:56.990 [2024-07-15 22:43:20.844571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.990 [2024-07-15 22:43:20.844584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420
00:26:56.990 qpair failed and we were unable to recover it.
00:26:56.990 [2024-07-15 22:43:20.844775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.990 [2024-07-15 22:43:20.844789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420
00:26:56.990 qpair failed and we were unable to recover it.
00:26:56.990 [2024-07-15 22:43:20.844932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.990 [2024-07-15 22:43:20.844946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420
00:26:56.990 qpair failed and we were unable to recover it.
00:26:56.990 [2024-07-15 22:43:20.845203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.990 [2024-07-15 22:43:20.845217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420
00:26:56.990 qpair failed and we were unable to recover it.
00:26:56.990 [2024-07-15 22:43:20.845407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.990 [2024-07-15 22:43:20.845421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420
00:26:56.990 qpair failed and we were unable to recover it.
00:26:56.990 [2024-07-15 22:43:20.845616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.990 [2024-07-15 22:43:20.845629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420
00:26:56.990 qpair failed and we were unable to recover it.
00:26:56.990 [2024-07-15 22:43:20.845825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.990 [2024-07-15 22:43:20.845839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420
00:26:56.990 qpair failed and we were unable to recover it.
00:26:56.990 [2024-07-15 22:43:20.846032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.990 [2024-07-15 22:43:20.846046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420
00:26:56.990 qpair failed and we were unable to recover it.
00:26:56.990 [2024-07-15 22:43:20.846249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.990 [2024-07-15 22:43:20.846263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420
00:26:56.991 qpair failed and we were unable to recover it.
00:26:56.991 [2024-07-15 22:43:20.846445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.991 [2024-07-15 22:43:20.846458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420
00:26:56.991 qpair failed and we were unable to recover it.
00:26:56.991 [2024-07-15 22:43:20.846609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.991 [2024-07-15 22:43:20.846622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420
00:26:56.991 qpair failed and we were unable to recover it.
00:26:56.991 [2024-07-15 22:43:20.846767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.991 [2024-07-15 22:43:20.846781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420
00:26:56.991 qpair failed and we were unable to recover it.
00:26:56.991 [2024-07-15 22:43:20.847008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.991 [2024-07-15 22:43:20.847038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420
00:26:56.991 qpair failed and we were unable to recover it.
00:26:56.991 [2024-07-15 22:43:20.847262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.991 [2024-07-15 22:43:20.847294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420
00:26:56.991 qpair failed and we were unable to recover it.
00:26:56.991 [2024-07-15 22:43:20.847532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.991 [2024-07-15 22:43:20.847561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420
00:26:56.991 qpair failed and we were unable to recover it.
00:26:56.991 [2024-07-15 22:43:20.847852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.991 [2024-07-15 22:43:20.847882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420
00:26:56.991 qpair failed and we were unable to recover it.
00:26:56.991 [2024-07-15 22:43:20.848147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.991 [2024-07-15 22:43:20.848177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420
00:26:56.991 qpair failed and we were unable to recover it.
00:26:56.991 [2024-07-15 22:43:20.848410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.991 [2024-07-15 22:43:20.848450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420
00:26:56.991 qpair failed and we were unable to recover it.
00:26:56.991 [2024-07-15 22:43:20.848670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.991 [2024-07-15 22:43:20.848701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420
00:26:56.991 qpair failed and we were unable to recover it.
00:26:56.991 [2024-07-15 22:43:20.848953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.991 [2024-07-15 22:43:20.848984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420
00:26:56.991 qpair failed and we were unable to recover it.
00:26:56.991 [2024-07-15 22:43:20.849219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.991 [2024-07-15 22:43:20.849256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420
00:26:56.991 qpair failed and we were unable to recover it.
00:26:56.991 [2024-07-15 22:43:20.849476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.991 [2024-07-15 22:43:20.849506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420
00:26:56.991 qpair failed and we were unable to recover it.
00:26:56.991 [2024-07-15 22:43:20.849720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.991 [2024-07-15 22:43:20.849734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420
00:26:56.991 qpair failed and we were unable to recover it.
00:26:56.991 [2024-07-15 22:43:20.849935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.991 [2024-07-15 22:43:20.849949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420
00:26:56.991 qpair failed and we were unable to recover it.
00:26:56.991 [2024-07-15 22:43:20.850138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.991 [2024-07-15 22:43:20.850152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420
00:26:56.991 qpair failed and we were unable to recover it.
00:26:56.991 [2024-07-15 22:43:20.850345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.991 [2024-07-15 22:43:20.850360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420
00:26:56.991 qpair failed and we were unable to recover it.
00:26:56.991 [2024-07-15 22:43:20.850607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.991 [2024-07-15 22:43:20.850621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420
00:26:56.991 qpair failed and we were unable to recover it.
00:26:56.991 [2024-07-15 22:43:20.850792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.991 [2024-07-15 22:43:20.850806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420
00:26:56.991 qpair failed and we were unable to recover it.
00:26:56.991 [2024-07-15 22:43:20.850997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.991 [2024-07-15 22:43:20.851010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420
00:26:56.991 qpair failed and we were unable to recover it.
00:26:56.991 [2024-07-15 22:43:20.851231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.991 [2024-07-15 22:43:20.851246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420
00:26:56.991 qpair failed and we were unable to recover it.
00:26:56.991 [2024-07-15 22:43:20.851447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.991 [2024-07-15 22:43:20.851478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420
00:26:56.991 qpair failed and we were unable to recover it.
00:26:56.991 [2024-07-15 22:43:20.851831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.991 [2024-07-15 22:43:20.851899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:56.991 qpair failed and we were unable to recover it.
00:26:56.991 [2024-07-15 22:43:20.852177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.991 [2024-07-15 22:43:20.852211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:56.991 qpair failed and we were unable to recover it.
00:26:56.991 [2024-07-15 22:43:20.852445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.991 [2024-07-15 22:43:20.852476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:56.991 qpair failed and we were unable to recover it.
00:26:56.991 [2024-07-15 22:43:20.852711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.991 [2024-07-15 22:43:20.852741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:56.991 qpair failed and we were unable to recover it.
00:26:56.991 [2024-07-15 22:43:20.853068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.991 [2024-07-15 22:43:20.853098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:56.991 qpair failed and we were unable to recover it.
00:26:56.991 [2024-07-15 22:43:20.853382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.991 [2024-07-15 22:43:20.853425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:56.991 qpair failed and we were unable to recover it.
00:26:56.991 [2024-07-15 22:43:20.853619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.991 [2024-07-15 22:43:20.853633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:56.991 qpair failed and we were unable to recover it.
00:26:56.991 [2024-07-15 22:43:20.853773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.991 [2024-07-15 22:43:20.853787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:56.991 qpair failed and we were unable to recover it.
00:26:56.991 [2024-07-15 22:43:20.854039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.991 [2024-07-15 22:43:20.854068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:56.991 qpair failed and we were unable to recover it.
00:26:56.991 [2024-07-15 22:43:20.854355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.991 [2024-07-15 22:43:20.854386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:56.991 qpair failed and we were unable to recover it.
00:26:56.991 [2024-07-15 22:43:20.854617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.991 [2024-07-15 22:43:20.854646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:56.991 qpair failed and we were unable to recover it.
00:26:56.991 [2024-07-15 22:43:20.854982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.991 [2024-07-15 22:43:20.855012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:56.991 qpair failed and we were unable to recover it.
00:26:56.991 [2024-07-15 22:43:20.855297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.991 [2024-07-15 22:43:20.855328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:56.991 qpair failed and we were unable to recover it.
00:26:56.991 [2024-07-15 22:43:20.855609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.991 [2024-07-15 22:43:20.855647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:56.991 qpair failed and we were unable to recover it.
00:26:56.991 [2024-07-15 22:43:20.855808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.991 [2024-07-15 22:43:20.855822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:56.991 qpair failed and we were unable to recover it.
00:26:56.991 [2024-07-15 22:43:20.856084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.991 [2024-07-15 22:43:20.856114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:56.991 qpair failed and we were unable to recover it.
00:26:56.991 [2024-07-15 22:43:20.856295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.991 [2024-07-15 22:43:20.856327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:56.991 qpair failed and we were unable to recover it.
00:26:56.991 [2024-07-15 22:43:20.856586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.991 [2024-07-15 22:43:20.856615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:56.991 qpair failed and we were unable to recover it.
00:26:56.991 [2024-07-15 22:43:20.856841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.991 [2024-07-15 22:43:20.856870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:56.992 qpair failed and we were unable to recover it.
00:26:56.992 [2024-07-15 22:43:20.857104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.992 [2024-07-15 22:43:20.857133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:56.992 qpair failed and we were unable to recover it.
00:26:56.992 [2024-07-15 22:43:20.857377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.992 [2024-07-15 22:43:20.857408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:56.992 qpair failed and we were unable to recover it.
00:26:56.992 [2024-07-15 22:43:20.857710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.992 [2024-07-15 22:43:20.857740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:56.992 qpair failed and we were unable to recover it.
00:26:56.992 [2024-07-15 22:43:20.857905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.992 [2024-07-15 22:43:20.857935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:56.992 qpair failed and we were unable to recover it.
00:26:56.992 [2024-07-15 22:43:20.858168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.992 [2024-07-15 22:43:20.858208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:56.992 qpair failed and we were unable to recover it.
00:26:56.992 [2024-07-15 22:43:20.858365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.992 [2024-07-15 22:43:20.858379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:56.992 qpair failed and we were unable to recover it.
00:26:56.992 [2024-07-15 22:43:20.858512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.992 [2024-07-15 22:43:20.858526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:56.992 qpair failed and we were unable to recover it.
00:26:56.992 [2024-07-15 22:43:20.858649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.992 [2024-07-15 22:43:20.858663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:56.992 qpair failed and we were unable to recover it.
00:26:56.992 [2024-07-15 22:43:20.858862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.992 [2024-07-15 22:43:20.858876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:56.992 qpair failed and we were unable to recover it.
00:26:56.992 [2024-07-15 22:43:20.859103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.992 [2024-07-15 22:43:20.859133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:56.992 qpair failed and we were unable to recover it.
00:26:56.992 [2024-07-15 22:43:20.859311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.992 [2024-07-15 22:43:20.859341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:56.992 qpair failed and we were unable to recover it.
00:26:56.992 [2024-07-15 22:43:20.859512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.992 [2024-07-15 22:43:20.859542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:56.992 qpair failed and we were unable to recover it.
00:26:56.992 [2024-07-15 22:43:20.859789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.992 [2024-07-15 22:43:20.859803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:56.992 qpair failed and we were unable to recover it.
00:26:56.992 [2024-07-15 22:43:20.859932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.992 [2024-07-15 22:43:20.859946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:56.992 qpair failed and we were unable to recover it.
00:26:56.992 [2024-07-15 22:43:20.860141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.992 [2024-07-15 22:43:20.860155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:56.992 qpair failed and we were unable to recover it.
00:26:56.992 [2024-07-15 22:43:20.860388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.992 [2024-07-15 22:43:20.860419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:56.992 qpair failed and we were unable to recover it.
00:26:56.992 [2024-07-15 22:43:20.860701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.992 [2024-07-15 22:43:20.860730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:56.992 qpair failed and we were unable to recover it.
00:26:56.992 [2024-07-15 22:43:20.860890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.992 [2024-07-15 22:43:20.860919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:56.992 qpair failed and we were unable to recover it.
00:26:56.992 [2024-07-15 22:43:20.861136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.992 [2024-07-15 22:43:20.861165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:56.992 qpair failed and we were unable to recover it.
00:26:56.992 [2024-07-15 22:43:20.861393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.992 [2024-07-15 22:43:20.861423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:56.992 qpair failed and we were unable to recover it.
00:26:56.992 [2024-07-15 22:43:20.861553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.992 [2024-07-15 22:43:20.861566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:56.992 qpair failed and we were unable to recover it.
00:26:56.992 [2024-07-15 22:43:20.861790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.992 [2024-07-15 22:43:20.861858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:56.992 qpair failed and we were unable to recover it.
00:26:56.992 [2024-07-15 22:43:20.862112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.992 [2024-07-15 22:43:20.862145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:56.992 qpair failed and we were unable to recover it.
00:26:56.992 [2024-07-15 22:43:20.862394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.992 [2024-07-15 22:43:20.862426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:56.992 qpair failed and we were unable to recover it.
00:26:56.992 [2024-07-15 22:43:20.862597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.992 [2024-07-15 22:43:20.862628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:56.992 qpair failed and we were unable to recover it.
00:26:56.992 [2024-07-15 22:43:20.862870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.992 [2024-07-15 22:43:20.862901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:56.992 qpair failed and we were unable to recover it.
00:26:56.992 [2024-07-15 22:43:20.863152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.992 [2024-07-15 22:43:20.863182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:56.992 qpair failed and we were unable to recover it.
00:26:56.992 [2024-07-15 22:43:20.863360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.992 [2024-07-15 22:43:20.863370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:56.992 qpair failed and we were unable to recover it.
00:26:56.992 [2024-07-15 22:43:20.863557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.992 [2024-07-15 22:43:20.863586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:56.992 qpair failed and we were unable to recover it.
00:26:56.992 [2024-07-15 22:43:20.863759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.992 [2024-07-15 22:43:20.863788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:56.992 qpair failed and we were unable to recover it.
00:26:56.992 [2024-07-15 22:43:20.863941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.992 [2024-07-15 22:43:20.863971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:56.992 qpair failed and we were unable to recover it.
00:26:56.992 [2024-07-15 22:43:20.864210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.992 [2024-07-15 22:43:20.864246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:56.992 qpair failed and we were unable to recover it.
00:26:56.992 [2024-07-15 22:43:20.864532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.992 [2024-07-15 22:43:20.864542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:56.992 qpair failed and we were unable to recover it.
00:26:56.992 [2024-07-15 22:43:20.864729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.992 [2024-07-15 22:43:20.864739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:56.992 qpair failed and we were unable to recover it.
00:26:56.992 [2024-07-15 22:43:20.864956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.993 [2024-07-15 22:43:20.864969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:56.993 qpair failed and we were unable to recover it.
00:26:56.993 [2024-07-15 22:43:20.865105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.993 [2024-07-15 22:43:20.865115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:56.993 qpair failed and we were unable to recover it.
00:26:56.993 [2024-07-15 22:43:20.865291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.993 [2024-07-15 22:43:20.865301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:56.993 qpair failed and we were unable to recover it.
00:26:56.993 [2024-07-15 22:43:20.865486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.993 [2024-07-15 22:43:20.865515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:56.993 qpair failed and we were unable to recover it.
00:26:56.993 [2024-07-15 22:43:20.865742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.993 [2024-07-15 22:43:20.865772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:56.993 qpair failed and we were unable to recover it.
00:26:56.993 [2024-07-15 22:43:20.866083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.993 [2024-07-15 22:43:20.866113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:56.993 qpair failed and we were unable to recover it.
00:26:56.993 [2024-07-15 22:43:20.866294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:56.993 [2024-07-15 22:43:20.866325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:56.993 qpair failed and we were unable to recover it.
00:26:57.274 [2024-07-15 22:43:20.866475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.274 [2024-07-15 22:43:20.866506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:57.274 qpair failed and we were unable to recover it.
00:26:57.274 [2024-07-15 22:43:20.866765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.274 [2024-07-15 22:43:20.866777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:57.274 qpair failed and we were unable to recover it.
00:26:57.274 [2024-07-15 22:43:20.866994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.274 [2024-07-15 22:43:20.867005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:57.274 qpair failed and we were unable to recover it.
00:26:57.274 [2024-07-15 22:43:20.867184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.274 [2024-07-15 22:43:20.867194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:57.274 qpair failed and we were unable to recover it.
00:26:57.274 [2024-07-15 22:43:20.867404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.274 [2024-07-15 22:43:20.867435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:57.274 qpair failed and we were unable to recover it.
00:26:57.274 [2024-07-15 22:43:20.867621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.274 [2024-07-15 22:43:20.867651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:57.274 qpair failed and we were unable to recover it.
00:26:57.274 [2024-07-15 22:43:20.867908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.274 [2024-07-15 22:43:20.867938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:57.274 qpair failed and we were unable to recover it.
00:26:57.274 [2024-07-15 22:43:20.868213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.274 [2024-07-15 22:43:20.868253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:57.274 qpair failed and we were unable to recover it.
00:26:57.274 [2024-07-15 22:43:20.868388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.274 [2024-07-15 22:43:20.868418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:57.274 qpair failed and we were unable to recover it.
00:26:57.274 [2024-07-15 22:43:20.868695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.274 [2024-07-15 22:43:20.868706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:57.274 qpair failed and we were unable to recover it.
00:26:57.274 [2024-07-15 22:43:20.868946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.274 [2024-07-15 22:43:20.868957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:57.274 qpair failed and we were unable to recover it.
00:26:57.274 [2024-07-15 22:43:20.869658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.274 [2024-07-15 22:43:20.869677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:57.274 qpair failed and we were unable to recover it.
00:26:57.274 [2024-07-15 22:43:20.869824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.274 [2024-07-15 22:43:20.869834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:57.275 qpair failed and we were unable to recover it.
00:26:57.275 [2024-07-15 22:43:20.870044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.275 [2024-07-15 22:43:20.870054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:57.275 qpair failed and we were unable to recover it.
00:26:57.275 [2024-07-15 22:43:20.870242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.275 [2024-07-15 22:43:20.870253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:57.275 qpair failed and we were unable to recover it.
00:26:57.275 [2024-07-15 22:43:20.870386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.275 [2024-07-15 22:43:20.870398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:57.275 qpair failed and we were unable to recover it.
00:26:57.275 [2024-07-15 22:43:20.870600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.275 [2024-07-15 22:43:20.870611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:57.275 qpair failed and we were unable to recover it.
00:26:57.275 [2024-07-15 22:43:20.870789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.275 [2024-07-15 22:43:20.870799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:57.275 qpair failed and we were unable to recover it.
00:26:57.275 [2024-07-15 22:43:20.870940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.275 [2024-07-15 22:43:20.870950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:57.275 qpair failed and we were unable to recover it.
00:26:57.275 [2024-07-15 22:43:20.871142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.275 [2024-07-15 22:43:20.871152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:57.275 qpair failed and we were unable to recover it.
00:26:57.275 [2024-07-15 22:43:20.871400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.275 [2024-07-15 22:43:20.871433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420
00:26:57.275 qpair failed and we were unable to recover it.
00:26:57.275 [2024-07-15 22:43:20.871593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.275 [2024-07-15 22:43:20.871608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420
00:26:57.275 qpair failed and we were unable to recover it.
00:26:57.275 [2024-07-15 22:43:20.871739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.275 [2024-07-15 22:43:20.871754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420
00:26:57.275 qpair failed and we were unable to recover it.
00:26:57.275 [2024-07-15 22:43:20.871896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.275 [2024-07-15 22:43:20.871910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420
00:26:57.275 qpair failed and we were unable to recover it.
00:26:57.275 [2024-07-15 22:43:20.872034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.275 [2024-07-15 22:43:20.872048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420
00:26:57.275 qpair failed and we were unable to recover it.
00:26:57.275 [2024-07-15 22:43:20.872218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.275 [2024-07-15 22:43:20.872238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420
00:26:57.275 qpair failed and we were unable to recover it.
00:26:57.275 [2024-07-15 22:43:20.872447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.275 [2024-07-15 22:43:20.872462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420
00:26:57.275 qpair failed and we were unable to recover it.
00:26:57.275 [2024-07-15 22:43:20.872633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.275 [2024-07-15 22:43:20.872648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420
00:26:57.275 qpair failed and we were unable to recover it.
00:26:57.275 [2024-07-15 22:43:20.872830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.275 [2024-07-15 22:43:20.872844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420
00:26:57.275 qpair failed and we were unable to recover it.
00:26:57.275 [2024-07-15 22:43:20.873050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.275 [2024-07-15 22:43:20.873081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420
00:26:57.275 qpair failed and we were unable to recover it.
00:26:57.275 [2024-07-15 22:43:20.873260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.275 [2024-07-15 22:43:20.873292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420
00:26:57.275 qpair failed and we were unable to recover it.
00:26:57.275 [2024-07-15 22:43:20.873471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.275 [2024-07-15 22:43:20.873500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420
00:26:57.275 qpair failed and we were unable to recover it.
00:26:57.275 [2024-07-15 22:43:20.873749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.275 [2024-07-15 22:43:20.873762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420
00:26:57.275 qpair failed and we were unable to recover it.
00:26:57.275 [2024-07-15 22:43:20.874017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.275 [2024-07-15 22:43:20.874036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420
00:26:57.275 qpair failed and we were unable to recover it.
00:26:57.275 [2024-07-15 22:43:20.874164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.275 [2024-07-15 22:43:20.874178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420
00:26:57.275 qpair failed and we were unable to recover it.
00:26:57.275 [2024-07-15 22:43:20.874326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.275 [2024-07-15 22:43:20.874340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420
00:26:57.275 qpair failed and we were unable to recover it.
00:26:57.275 [2024-07-15 22:43:20.874524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.275 [2024-07-15 22:43:20.874539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420
00:26:57.275 qpair failed and we were unable to recover it.
00:26:57.275 [2024-07-15 22:43:20.874737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.275 [2024-07-15 22:43:20.874751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420
00:26:57.275 qpair failed and we were unable to recover it.
00:26:57.275 [2024-07-15 22:43:20.874965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.276 [2024-07-15 22:43:20.874999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:57.276 qpair failed and we were unable to recover it.
00:26:57.276 [2024-07-15 22:43:20.875168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.276 [2024-07-15 22:43:20.875198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:57.276 qpair failed and we were unable to recover it.
00:26:57.276 [2024-07-15 22:43:20.875392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.276 [2024-07-15 22:43:20.875427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:57.276 qpair failed and we were unable to recover it.
00:26:57.276 [2024-07-15 22:43:20.875590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.276 [2024-07-15 22:43:20.875620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:57.276 qpair failed and we were unable to recover it.
00:26:57.276 [2024-07-15 22:43:20.875776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.276 [2024-07-15 22:43:20.875790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:57.276 qpair failed and we were unable to recover it.
00:26:57.276 [2024-07-15 22:43:20.876019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.276 [2024-07-15 22:43:20.876033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:57.276 qpair failed and we were unable to recover it.
00:26:57.276 [2024-07-15 22:43:20.876257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.276 [2024-07-15 22:43:20.876288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:57.276 qpair failed and we were unable to recover it.
00:26:57.276 [2024-07-15 22:43:20.876526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.276 [2024-07-15 22:43:20.876555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:57.276 qpair failed and we were unable to recover it.
00:26:57.276 [2024-07-15 22:43:20.876731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.276 [2024-07-15 22:43:20.876761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:57.276 qpair failed and we were unable to recover it.
00:26:57.276 [2024-07-15 22:43:20.877027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.276 [2024-07-15 22:43:20.877057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:57.276 qpair failed and we were unable to recover it.
00:26:57.276 [2024-07-15 22:43:20.877785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.276 [2024-07-15 22:43:20.877804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:57.276 qpair failed and we were unable to recover it.
00:26:57.276 [2024-07-15 22:43:20.878005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.276 [2024-07-15 22:43:20.878019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:57.276 qpair failed and we were unable to recover it.
00:26:57.276 [2024-07-15 22:43:20.878266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.276 [2024-07-15 22:43:20.878281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:57.276 qpair failed and we were unable to recover it.
00:26:57.276 [2024-07-15 22:43:20.878532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.276 [2024-07-15 22:43:20.878546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:57.276 qpair failed and we were unable to recover it.
00:26:57.276 [2024-07-15 22:43:20.878681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.276 [2024-07-15 22:43:20.878695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:57.276 qpair failed and we were unable to recover it.
00:26:57.276 [2024-07-15 22:43:20.878948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.276 [2024-07-15 22:43:20.878962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:57.276 qpair failed and we were unable to recover it.
00:26:57.276 [2024-07-15 22:43:20.879175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.276 [2024-07-15 22:43:20.879189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:57.276 qpair failed and we were unable to recover it.
00:26:57.276 [2024-07-15 22:43:20.879374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.276 [2024-07-15 22:43:20.879388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:57.276 qpair failed and we were unable to recover it.
00:26:57.276 [2024-07-15 22:43:20.879586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.276 [2024-07-15 22:43:20.879599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:57.276 qpair failed and we were unable to recover it.
00:26:57.276 [2024-07-15 22:43:20.879779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.276 [2024-07-15 22:43:20.879793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:57.276 qpair failed and we were unable to recover it.
00:26:57.276 [2024-07-15 22:43:20.879932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.276 [2024-07-15 22:43:20.879945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:57.276 qpair failed and we were unable to recover it.
00:26:57.276 [2024-07-15 22:43:20.880134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.276 [2024-07-15 22:43:20.880148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:57.276 qpair failed and we were unable to recover it.
00:26:57.276 [2024-07-15 22:43:20.880348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.276 [2024-07-15 22:43:20.880369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420
00:26:57.276 qpair failed and we were unable to recover it.
00:26:57.276 [2024-07-15 22:43:20.880500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.276 [2024-07-15 22:43:20.880512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:57.276 qpair failed and we were unable to recover it.
00:26:57.276 [2024-07-15 22:43:20.880639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.276 [2024-07-15 22:43:20.880649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:57.276 qpair failed and we were unable to recover it.
00:26:57.276 [2024-07-15 22:43:20.880764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.276 [2024-07-15 22:43:20.880773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:57.276 qpair failed and we were unable to recover it.
00:26:57.276 [2024-07-15 22:43:20.880997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.276 [2024-07-15 22:43:20.881006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:57.276 qpair failed and we were unable to recover it.
00:26:57.277 [2024-07-15 22:43:20.881215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.277 [2024-07-15 22:43:20.881230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:57.277 qpair failed and we were unable to recover it.
00:26:57.277 [2024-07-15 22:43:20.881347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.277 [2024-07-15 22:43:20.881357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:57.277 qpair failed and we were unable to recover it.
00:26:57.277 [2024-07-15 22:43:20.881491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.277 [2024-07-15 22:43:20.881500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:57.277 qpair failed and we were unable to recover it.
00:26:57.277 [2024-07-15 22:43:20.881624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.277 [2024-07-15 22:43:20.881634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:57.277 qpair failed and we were unable to recover it.
00:26:57.277 [2024-07-15 22:43:20.881732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.277 [2024-07-15 22:43:20.881742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:57.277 qpair failed and we were unable to recover it.
00:26:57.277 [2024-07-15 22:43:20.881857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.277 [2024-07-15 22:43:20.881867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:57.277 qpair failed and we were unable to recover it.
00:26:57.277 [2024-07-15 22:43:20.881995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.277 [2024-07-15 22:43:20.882005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:57.277 qpair failed and we were unable to recover it.
00:26:57.277 [2024-07-15 22:43:20.882132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.277 [2024-07-15 22:43:20.882142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:57.277 qpair failed and we were unable to recover it.
00:26:57.277 [2024-07-15 22:43:20.882351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.277 [2024-07-15 22:43:20.882363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:57.277 qpair failed and we were unable to recover it.
00:26:57.277 [2024-07-15 22:43:20.882474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.277 [2024-07-15 22:43:20.882484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:57.277 qpair failed and we were unable to recover it.
00:26:57.277 [2024-07-15 22:43:20.882592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.277 [2024-07-15 22:43:20.882602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:57.277 qpair failed and we were unable to recover it.
00:26:57.277 [2024-07-15 22:43:20.882775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.277 [2024-07-15 22:43:20.882785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:57.277 qpair failed and we were unable to recover it.
00:26:57.277 [2024-07-15 22:43:20.882933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.277 [2024-07-15 22:43:20.882943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:57.277 qpair failed and we were unable to recover it.
00:26:57.277 [2024-07-15 22:43:20.883193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.277 [2024-07-15 22:43:20.883222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:57.277 qpair failed and we were unable to recover it.
00:26:57.277 [2024-07-15 22:43:20.883473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.277 [2024-07-15 22:43:20.883504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:57.277 qpair failed and we were unable to recover it.
00:26:57.277 [2024-07-15 22:43:20.883657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.277 [2024-07-15 22:43:20.883687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:57.277 qpair failed and we were unable to recover it.
00:26:57.277 [2024-07-15 22:43:20.883911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.277 [2024-07-15 22:43:20.883922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:57.277 qpair failed and we were unable to recover it.
00:26:57.277 [2024-07-15 22:43:20.884096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.277 [2024-07-15 22:43:20.884106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:57.277 qpair failed and we were unable to recover it.
00:26:57.277 [2024-07-15 22:43:20.884370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.277 [2024-07-15 22:43:20.884381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:57.277 qpair failed and we were unable to recover it.
00:26:57.277 [2024-07-15 22:43:20.884648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.277 [2024-07-15 22:43:20.884659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:57.277 qpair failed and we were unable to recover it.
00:26:57.277 [2024-07-15 22:43:20.884783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.277 [2024-07-15 22:43:20.884793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:57.277 qpair failed and we were unable to recover it.
00:26:57.277 [2024-07-15 22:43:20.884977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.277 [2024-07-15 22:43:20.884987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:57.277 qpair failed and we were unable to recover it.
00:26:57.277 [2024-07-15 22:43:20.885183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.277 [2024-07-15 22:43:20.885193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:57.277 qpair failed and we were unable to recover it.
00:26:57.277 [2024-07-15 22:43:20.885315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.277 [2024-07-15 22:43:20.885326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:57.277 qpair failed and we were unable to recover it.
00:26:57.277 [2024-07-15 22:43:20.885519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.277 [2024-07-15 22:43:20.885529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:57.277 qpair failed and we were unable to recover it.
00:26:57.277 [2024-07-15 22:43:20.885762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.277 [2024-07-15 22:43:20.885771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:57.277 qpair failed and we were unable to recover it.
00:26:57.277 [2024-07-15 22:43:20.885888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.277 [2024-07-15 22:43:20.885898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:57.277 qpair failed and we were unable to recover it.
00:26:57.277 [2024-07-15 22:43:20.886014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.277 [2024-07-15 22:43:20.886025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:57.277 qpair failed and we were unable to recover it.
00:26:57.277 [2024-07-15 22:43:20.886207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.277 [2024-07-15 22:43:20.886217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:57.277 qpair failed and we were unable to recover it.
00:26:57.277 [2024-07-15 22:43:20.886344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.277 [2024-07-15 22:43:20.886355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:57.277 qpair failed and we were unable to recover it.
00:26:57.277 [2024-07-15 22:43:20.886536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.278 [2024-07-15 22:43:20.886545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:57.278 qpair failed and we were unable to recover it.
00:26:57.278 [2024-07-15 22:43:20.886738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.278 [2024-07-15 22:43:20.886747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:57.278 qpair failed and we were unable to recover it.
00:26:57.278 [2024-07-15 22:43:20.886946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.278 [2024-07-15 22:43:20.886956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:57.278 qpair failed and we were unable to recover it.
00:26:57.278 [2024-07-15 22:43:20.887074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.278 [2024-07-15 22:43:20.887085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:57.278 qpair failed and we were unable to recover it.
00:26:57.278 [2024-07-15 22:43:20.887208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.278 [2024-07-15 22:43:20.887218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:57.278 qpair failed and we were unable to recover it.
00:26:57.278 [2024-07-15 22:43:20.887352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.278 [2024-07-15 22:43:20.887363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:57.278 qpair failed and we were unable to recover it.
00:26:57.278 [2024-07-15 22:43:20.887487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.278 [2024-07-15 22:43:20.887497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:57.278 qpair failed and we were unable to recover it.
00:26:57.278 [2024-07-15 22:43:20.887670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.278 [2024-07-15 22:43:20.887679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:57.278 qpair failed and we were unable to recover it.
00:26:57.278 [2024-07-15 22:43:20.887797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.278 [2024-07-15 22:43:20.887806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:57.278 qpair failed and we were unable to recover it.
00:26:57.278 [2024-07-15 22:43:20.887996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.278 [2024-07-15 22:43:20.888006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:57.278 qpair failed and we were unable to recover it.
00:26:57.278 [2024-07-15 22:43:20.888276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.278 [2024-07-15 22:43:20.888332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:57.278 qpair failed and we were unable to recover it.
00:26:57.278 [2024-07-15 22:43:20.888508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.278 [2024-07-15 22:43:20.888538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:57.278 qpair failed and we were unable to recover it.
00:26:57.278 [2024-07-15 22:43:20.888711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.278 [2024-07-15 22:43:20.888741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:57.278 qpair failed and we were unable to recover it.
00:26:57.278 [2024-07-15 22:43:20.889093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.278 [2024-07-15 22:43:20.889123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:57.278 qpair failed and we were unable to recover it.
00:26:57.278 [2024-07-15 22:43:20.889341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.278 [2024-07-15 22:43:20.889372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:57.278 qpair failed and we were unable to recover it.
00:26:57.278 [2024-07-15 22:43:20.889625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.278 [2024-07-15 22:43:20.889635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:57.278 qpair failed and we were unable to recover it.
00:26:57.278 [2024-07-15 22:43:20.889820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.278 [2024-07-15 22:43:20.889830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:57.278 qpair failed and we were unable to recover it.
00:26:57.278 [2024-07-15 22:43:20.889941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.278 [2024-07-15 22:43:20.889951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:57.278 qpair failed and we were unable to recover it.
00:26:57.278 [2024-07-15 22:43:20.890191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.278 [2024-07-15 22:43:20.890201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:57.278 qpair failed and we were unable to recover it.
00:26:57.278 [2024-07-15 22:43:20.890417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.278 [2024-07-15 22:43:20.890430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:57.278 qpair failed and we were unable to recover it.
00:26:57.278 [2024-07-15 22:43:20.890562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.278 [2024-07-15 22:43:20.890572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.278 qpair failed and we were unable to recover it. 00:26:57.278 [2024-07-15 22:43:20.890764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.278 [2024-07-15 22:43:20.890794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.278 qpair failed and we were unable to recover it. 00:26:57.278 [2024-07-15 22:43:20.890958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.278 [2024-07-15 22:43:20.890987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.278 qpair failed and we were unable to recover it. 00:26:57.278 [2024-07-15 22:43:20.891153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.278 [2024-07-15 22:43:20.891183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.278 qpair failed and we were unable to recover it. 00:26:57.278 [2024-07-15 22:43:20.891412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.278 [2024-07-15 22:43:20.891443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.278 qpair failed and we were unable to recover it. 00:26:57.278 [2024-07-15 22:43:20.891664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.278 [2024-07-15 22:43:20.891693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.278 qpair failed and we were unable to recover it. 00:26:57.278 [2024-07-15 22:43:20.891939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.278 [2024-07-15 22:43:20.891969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.278 qpair failed and we were unable to recover it. 00:26:57.278 [2024-07-15 22:43:20.892194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.279 [2024-07-15 22:43:20.892241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.279 qpair failed and we were unable to recover it. 00:26:57.279 [2024-07-15 22:43:20.892471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.279 [2024-07-15 22:43:20.892501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.279 qpair failed and we were unable to recover it. 00:26:57.279 [2024-07-15 22:43:20.892730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.279 [2024-07-15 22:43:20.892740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.279 qpair failed and we were unable to recover it. 
00:26:57.279 [2024-07-15 22:43:20.892914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.279 [2024-07-15 22:43:20.892924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.279 qpair failed and we were unable to recover it. 00:26:57.279 [2024-07-15 22:43:20.893021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.279 [2024-07-15 22:43:20.893031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.279 qpair failed and we were unable to recover it. 00:26:57.279 [2024-07-15 22:43:20.893231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.279 [2024-07-15 22:43:20.893241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.279 qpair failed and we were unable to recover it. 00:26:57.279 [2024-07-15 22:43:20.893443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.279 [2024-07-15 22:43:20.893453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.279 qpair failed and we were unable to recover it. 00:26:57.279 [2024-07-15 22:43:20.893657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.279 [2024-07-15 22:43:20.893687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.279 qpair failed and we were unable to recover it. 00:26:57.279 [2024-07-15 22:43:20.893905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.279 [2024-07-15 22:43:20.893935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.279 qpair failed and we were unable to recover it. 00:26:57.279 [2024-07-15 22:43:20.894236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.279 [2024-07-15 22:43:20.894278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.279 qpair failed and we were unable to recover it. 00:26:57.279 [2024-07-15 22:43:20.894496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.279 [2024-07-15 22:43:20.894506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.279 qpair failed and we were unable to recover it. 00:26:57.279 [2024-07-15 22:43:20.894706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.279 [2024-07-15 22:43:20.894716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.279 qpair failed and we were unable to recover it. 00:26:57.279 [2024-07-15 22:43:20.894910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.279 [2024-07-15 22:43:20.894920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.279 qpair failed and we were unable to recover it. 
00:26:57.279 [2024-07-15 22:43:20.895097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.279 [2024-07-15 22:43:20.895107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.279 qpair failed and we were unable to recover it. 00:26:57.279 [2024-07-15 22:43:20.895308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.279 [2024-07-15 22:43:20.895318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.279 qpair failed and we were unable to recover it. 00:26:57.279 [2024-07-15 22:43:20.895455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.279 [2024-07-15 22:43:20.895466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.279 qpair failed and we were unable to recover it. 00:26:57.279 [2024-07-15 22:43:20.895600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.279 [2024-07-15 22:43:20.895609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.279 qpair failed and we were unable to recover it. 00:26:57.279 [2024-07-15 22:43:20.895731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.279 [2024-07-15 22:43:20.895741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.279 qpair failed and we were unable to recover it. 00:26:57.279 [2024-07-15 22:43:20.895879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.279 [2024-07-15 22:43:20.895891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.279 qpair failed and we were unable to recover it. 00:26:57.279 [2024-07-15 22:43:20.896075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.279 [2024-07-15 22:43:20.896085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.279 qpair failed and we were unable to recover it. 00:26:57.279 [2024-07-15 22:43:20.896205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.279 [2024-07-15 22:43:20.896215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.279 qpair failed and we were unable to recover it. 00:26:57.279 [2024-07-15 22:43:20.896477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.279 [2024-07-15 22:43:20.896512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.279 qpair failed and we were unable to recover it. 00:26:57.279 [2024-07-15 22:43:20.896674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.279 [2024-07-15 22:43:20.896704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.279 qpair failed and we were unable to recover it. 
00:26:57.279 [2024-07-15 22:43:20.897012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.279 [2024-07-15 22:43:20.897042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.279 qpair failed and we were unable to recover it. 00:26:57.279 [2024-07-15 22:43:20.897302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.279 [2024-07-15 22:43:20.897333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.279 qpair failed and we were unable to recover it. 00:26:57.279 [2024-07-15 22:43:20.897631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.279 [2024-07-15 22:43:20.897660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.279 qpair failed and we were unable to recover it. 00:26:57.279 [2024-07-15 22:43:20.897892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.279 [2024-07-15 22:43:20.897922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.279 qpair failed and we were unable to recover it. 00:26:57.279 [2024-07-15 22:43:20.898142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.279 [2024-07-15 22:43:20.898171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.279 qpair failed and we were unable to recover it. 00:26:57.279 [2024-07-15 22:43:20.898421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.279 [2024-07-15 22:43:20.898452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.279 qpair failed and we were unable to recover it. 00:26:57.280 [2024-07-15 22:43:20.898739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.280 [2024-07-15 22:43:20.898769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.280 qpair failed and we were unable to recover it. 00:26:57.280 [2024-07-15 22:43:20.899022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.280 [2024-07-15 22:43:20.899052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.280 qpair failed and we were unable to recover it. 00:26:57.280 [2024-07-15 22:43:20.899297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.280 [2024-07-15 22:43:20.899328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.280 qpair failed and we were unable to recover it. 00:26:57.280 [2024-07-15 22:43:20.899552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.280 [2024-07-15 22:43:20.899566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.280 qpair failed and we were unable to recover it. 
00:26:57.280 [2024-07-15 22:43:20.899764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.280 [2024-07-15 22:43:20.899778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.280 qpair failed and we were unable to recover it. 00:26:57.280 [2024-07-15 22:43:20.899980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.280 [2024-07-15 22:43:20.899993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.280 qpair failed and we were unable to recover it. 00:26:57.280 [2024-07-15 22:43:20.900191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.280 [2024-07-15 22:43:20.900205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.280 qpair failed and we were unable to recover it. 00:26:57.280 [2024-07-15 22:43:20.900426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.280 [2024-07-15 22:43:20.900440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.280 qpair failed and we were unable to recover it. 00:26:57.280 [2024-07-15 22:43:20.900650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.280 [2024-07-15 22:43:20.900663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.280 qpair failed and we were unable to recover it. 00:26:57.280 [2024-07-15 22:43:20.900861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.280 [2024-07-15 22:43:20.900875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.280 qpair failed and we were unable to recover it. 00:26:57.280 [2024-07-15 22:43:20.901014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.280 [2024-07-15 22:43:20.901027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.280 qpair failed and we were unable to recover it. 00:26:57.280 [2024-07-15 22:43:20.901237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.280 [2024-07-15 22:43:20.901269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.280 qpair failed and we were unable to recover it. 00:26:57.280 [2024-07-15 22:43:20.901498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.280 [2024-07-15 22:43:20.901527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.280 qpair failed and we were unable to recover it. 00:26:57.280 [2024-07-15 22:43:20.901754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.280 [2024-07-15 22:43:20.901784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.280 qpair failed and we were unable to recover it. 
00:26:57.280 [2024-07-15 22:43:20.902089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.280 [2024-07-15 22:43:20.902118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.280 qpair failed and we were unable to recover it. 00:26:57.280 [2024-07-15 22:43:20.902407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.280 [2024-07-15 22:43:20.902437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.280 qpair failed and we were unable to recover it. 00:26:57.280 [2024-07-15 22:43:20.902626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.280 [2024-07-15 22:43:20.902656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.280 qpair failed and we were unable to recover it. 00:26:57.280 [2024-07-15 22:43:20.902892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.280 [2024-07-15 22:43:20.902922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.280 qpair failed and we were unable to recover it. 00:26:57.280 [2024-07-15 22:43:20.903108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.280 [2024-07-15 22:43:20.903137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.280 qpair failed and we were unable to recover it. 00:26:57.280 [2024-07-15 22:43:20.903371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.280 [2024-07-15 22:43:20.903402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.280 qpair failed and we were unable to recover it. 00:26:57.280 [2024-07-15 22:43:20.903629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.280 [2024-07-15 22:43:20.903642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.280 qpair failed and we were unable to recover it. 00:26:57.280 [2024-07-15 22:43:20.903783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.280 [2024-07-15 22:43:20.903797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.280 qpair failed and we were unable to recover it. 00:26:57.280 [2024-07-15 22:43:20.903998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.280 [2024-07-15 22:43:20.904028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.280 qpair failed and we were unable to recover it. 00:26:57.280 [2024-07-15 22:43:20.904338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.280 [2024-07-15 22:43:20.904369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.280 qpair failed and we were unable to recover it. 
00:26:57.280 [2024-07-15 22:43:20.904596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.280 [2024-07-15 22:43:20.904626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.280 qpair failed and we were unable to recover it. 00:26:57.280 [2024-07-15 22:43:20.904940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.280 [2024-07-15 22:43:20.904970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.280 qpair failed and we were unable to recover it. 00:26:57.280 [2024-07-15 22:43:20.905221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.281 [2024-07-15 22:43:20.905260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.281 qpair failed and we were unable to recover it. 00:26:57.281 [2024-07-15 22:43:20.905482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.281 [2024-07-15 22:43:20.905512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.281 qpair failed and we were unable to recover it. 00:26:57.281 [2024-07-15 22:43:20.905814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.281 [2024-07-15 22:43:20.905843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.281 qpair failed and we were unable to recover it. 00:26:57.281 [2024-07-15 22:43:20.906155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.281 [2024-07-15 22:43:20.906189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.281 qpair failed and we were unable to recover it. 00:26:57.281 [2024-07-15 22:43:20.906407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.281 [2024-07-15 22:43:20.906421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.281 qpair failed and we were unable to recover it. 00:26:57.281 [2024-07-15 22:43:20.906650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.281 [2024-07-15 22:43:20.906664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.281 qpair failed and we were unable to recover it. 00:26:57.281 [2024-07-15 22:43:20.906806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.281 [2024-07-15 22:43:20.906820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.281 qpair failed and we were unable to recover it. 00:26:57.281 [2024-07-15 22:43:20.907017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.281 [2024-07-15 22:43:20.907030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.281 qpair failed and we were unable to recover it. 
00:26:57.281 [2024-07-15 22:43:20.907165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.281 [2024-07-15 22:43:20.907179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.281 qpair failed and we were unable to recover it. 00:26:57.281 [2024-07-15 22:43:20.907327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.281 [2024-07-15 22:43:20.907341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.281 qpair failed and we were unable to recover it. 00:26:57.281 [2024-07-15 22:43:20.907591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.281 [2024-07-15 22:43:20.907624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.281 qpair failed and we were unable to recover it. 00:26:57.281 [2024-07-15 22:43:20.907868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.281 [2024-07-15 22:43:20.907898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.281 qpair failed and we were unable to recover it. 00:26:57.281 [2024-07-15 22:43:20.908081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.281 [2024-07-15 22:43:20.908111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.281 qpair failed and we were unable to recover it. 00:26:57.281 [2024-07-15 22:43:20.908343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.281 [2024-07-15 22:43:20.908375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.281 qpair failed and we were unable to recover it. 00:26:57.281 [2024-07-15 22:43:20.908591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.281 [2024-07-15 22:43:20.908605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.281 qpair failed and we were unable to recover it. 00:26:57.281 [2024-07-15 22:43:20.908878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.281 [2024-07-15 22:43:20.908892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.281 qpair failed and we were unable to recover it. 00:26:57.281 [2024-07-15 22:43:20.909020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.281 [2024-07-15 22:43:20.909034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.281 qpair failed and we were unable to recover it. 00:26:57.281 [2024-07-15 22:43:20.909284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.281 [2024-07-15 22:43:20.909315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.281 qpair failed and we were unable to recover it. 
00:26:57.281 [2024-07-15 22:43:20.909622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.281 [2024-07-15 22:43:20.909651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.281 qpair failed and we were unable to recover it. 00:26:57.281 [2024-07-15 22:43:20.909887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.281 [2024-07-15 22:43:20.909917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.281 qpair failed and we were unable to recover it. 00:26:57.281 [2024-07-15 22:43:20.910120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.281 [2024-07-15 22:43:20.910150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.281 qpair failed and we were unable to recover it. 00:26:57.281 [2024-07-15 22:43:20.910382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.281 [2024-07-15 22:43:20.910412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.281 qpair failed and we were unable to recover it. 00:26:57.281 [2024-07-15 22:43:20.910624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.281 [2024-07-15 22:43:20.910638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.281 qpair failed and we were unable to recover it. 00:26:57.281 [2024-07-15 22:43:20.910909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.281 [2024-07-15 22:43:20.910922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.281 qpair failed and we were unable to recover it. 00:26:57.281 [2024-07-15 22:43:20.911116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.281 [2024-07-15 22:43:20.911129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.281 qpair failed and we were unable to recover it. 00:26:57.281 [2024-07-15 22:43:20.911330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.281 [2024-07-15 22:43:20.911345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.281 qpair failed and we were unable to recover it. 00:26:57.281 [2024-07-15 22:43:20.911496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.281 [2024-07-15 22:43:20.911510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.281 qpair failed and we were unable to recover it. 00:26:57.281 [2024-07-15 22:43:20.911724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.281 [2024-07-15 22:43:20.911754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.282 qpair failed and we were unable to recover it. 
00:26:57.282 [2024-07-15 22:43:20.911989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.282 [2024-07-15 22:43:20.912019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.282 qpair failed and we were unable to recover it. 00:26:57.282 [2024-07-15 22:43:20.912261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.282 [2024-07-15 22:43:20.912293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.282 qpair failed and we were unable to recover it. 00:26:57.282 [2024-07-15 22:43:20.912520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.282 [2024-07-15 22:43:20.912534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.282 qpair failed and we were unable to recover it. 00:26:57.282 [2024-07-15 22:43:20.912735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.282 [2024-07-15 22:43:20.912748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.282 qpair failed and we were unable to recover it. 00:26:57.282 [2024-07-15 22:43:20.912941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.282 [2024-07-15 22:43:20.912954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.282 qpair failed and we were unable to recover it. 00:26:57.282 [2024-07-15 22:43:20.913178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.282 [2024-07-15 22:43:20.913207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.282 qpair failed and we were unable to recover it. 00:26:57.282 [2024-07-15 22:43:20.913453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.282 [2024-07-15 22:43:20.913484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.282 qpair failed and we were unable to recover it. 00:26:57.282 [2024-07-15 22:43:20.913704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.282 [2024-07-15 22:43:20.913733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.282 qpair failed and we were unable to recover it. 00:26:57.282 [2024-07-15 22:43:20.913949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.282 [2024-07-15 22:43:20.913978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.282 qpair failed and we were unable to recover it. 00:26:57.282 [2024-07-15 22:43:20.914287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.282 [2024-07-15 22:43:20.914317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.282 qpair failed and we were unable to recover it. 
00:26:57.282 [2024-07-15 22:43:20.914580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.282 [2024-07-15 22:43:20.914610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.282 qpair failed and we were unable to recover it. 00:26:57.282 [2024-07-15 22:43:20.914776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.282 [2024-07-15 22:43:20.914814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.282 qpair failed and we were unable to recover it. 00:26:57.282 [2024-07-15 22:43:20.915041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.282 [2024-07-15 22:43:20.915055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.282 qpair failed and we were unable to recover it. 00:26:57.282 [2024-07-15 22:43:20.915257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.282 [2024-07-15 22:43:20.915271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.282 qpair failed and we were unable to recover it. 00:26:57.282 [2024-07-15 22:43:20.915526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.282 [2024-07-15 22:43:20.915539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.282 qpair failed and we were unable to recover it. 00:26:57.282 [2024-07-15 22:43:20.915734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.282 [2024-07-15 22:43:20.915749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.282 qpair failed and we were unable to recover it. 00:26:57.282 [2024-07-15 22:43:20.916001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.282 [2024-07-15 22:43:20.916031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.282 qpair failed and we were unable to recover it. 00:26:57.282 [2024-07-15 22:43:20.916260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.282 [2024-07-15 22:43:20.916290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.282 qpair failed and we were unable to recover it. 00:26:57.282 [2024-07-15 22:43:20.916471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.282 [2024-07-15 22:43:20.916500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.282 qpair failed and we were unable to recover it. 00:26:57.282 [2024-07-15 22:43:20.916783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.282 [2024-07-15 22:43:20.916813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.282 qpair failed and we were unable to recover it. 
00:26:57.282 [2024-07-15 22:43:20.916998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.282 [2024-07-15 22:43:20.917027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.282 qpair failed and we were unable to recover it. 00:26:57.282 [2024-07-15 22:43:20.917249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.282 [2024-07-15 22:43:20.917280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.282 qpair failed and we were unable to recover it. 00:26:57.282 [2024-07-15 22:43:20.917544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.282 [2024-07-15 22:43:20.917557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.282 qpair failed and we were unable to recover it. 00:26:57.282 [2024-07-15 22:43:20.917805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.282 [2024-07-15 22:43:20.917818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.282 qpair failed and we were unable to recover it. 00:26:57.282 [2024-07-15 22:43:20.918094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.282 [2024-07-15 22:43:20.918108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.282 qpair failed and we were unable to recover it. 00:26:57.282 [2024-07-15 22:43:20.918289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.282 [2024-07-15 22:43:20.918303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.282 qpair failed and we were unable to recover it. 00:26:57.282 [2024-07-15 22:43:20.918550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.282 [2024-07-15 22:43:20.918563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.282 qpair failed and we were unable to recover it. 00:26:57.282 [2024-07-15 22:43:20.918760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.283 [2024-07-15 22:43:20.918774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.283 qpair failed and we were unable to recover it. 00:26:57.283 [2024-07-15 22:43:20.918999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.283 [2024-07-15 22:43:20.919029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.283 qpair failed and we were unable to recover it. 00:26:57.283 [2024-07-15 22:43:20.919346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.283 [2024-07-15 22:43:20.919378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.283 qpair failed and we were unable to recover it. 
00:26:57.283 [2024-07-15 22:43:20.919687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.283 [2024-07-15 22:43:20.919700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.283 qpair failed and we were unable to recover it. 00:26:57.283 [2024-07-15 22:43:20.919819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.283 [2024-07-15 22:43:20.919832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.283 qpair failed and we were unable to recover it. 00:26:57.283 [2024-07-15 22:43:20.920030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.283 [2024-07-15 22:43:20.920068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.283 qpair failed and we were unable to recover it. 00:26:57.283 [2024-07-15 22:43:20.920376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.283 [2024-07-15 22:43:20.920407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.283 qpair failed and we were unable to recover it. 00:26:57.283 [2024-07-15 22:43:20.920652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.283 [2024-07-15 22:43:20.920690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.283 qpair failed and we were unable to recover it. 00:26:57.283 [2024-07-15 22:43:20.920875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.283 [2024-07-15 22:43:20.920888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.283 qpair failed and we were unable to recover it. 00:26:57.283 [2024-07-15 22:43:20.921016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.283 [2024-07-15 22:43:20.921029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.283 qpair failed and we were unable to recover it. 00:26:57.283 [2024-07-15 22:43:20.921282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.283 [2024-07-15 22:43:20.921300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.283 qpair failed and we were unable to recover it. 00:26:57.283 [2024-07-15 22:43:20.921579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.283 [2024-07-15 22:43:20.921609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.283 qpair failed and we were unable to recover it. 00:26:57.283 [2024-07-15 22:43:20.921913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.283 [2024-07-15 22:43:20.921942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.283 qpair failed and we were unable to recover it. 
00:26:57.283 [2024-07-15 22:43:20.922174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.283 [2024-07-15 22:43:20.922203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420
00:26:57.283 qpair failed and we were unable to recover it.
00:26:57.283 [... the identical three-line failure repeats for tqpair=0x7f4290000b90 with successive timestamps from 22:43:20.922545 through 22:43:20.954286: every attempt fails with errno = 111 and the qpair is never recovered ...]
00:26:57.288 [2024-07-15 22:43:20.954574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.288 [2024-07-15 22:43:20.954642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:57.288 qpair failed and we were unable to recover it.
00:26:57.288 [... the same failure sequence continues against a new qpair, tqpair=0x7f4288000b90, with timestamps from 22:43:20.954898 through 22:43:20.976738, each attempt again ending in "qpair failed and we were unable to recover it." ...]
00:26:57.291 [2024-07-15 22:43:20.976988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.291 [2024-07-15 22:43:20.977018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.291 qpair failed and we were unable to recover it. 00:26:57.291 [2024-07-15 22:43:20.977310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.291 [2024-07-15 22:43:20.977341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.291 qpair failed and we were unable to recover it. 00:26:57.291 [2024-07-15 22:43:20.977650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.291 [2024-07-15 22:43:20.977680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.291 qpair failed and we were unable to recover it. 00:26:57.291 [2024-07-15 22:43:20.978013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.291 [2024-07-15 22:43:20.978042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.291 qpair failed and we were unable to recover it. 00:26:57.291 [2024-07-15 22:43:20.978331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.291 [2024-07-15 22:43:20.978361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.291 qpair failed and we were unable to recover it. 00:26:57.291 [2024-07-15 22:43:20.978667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.291 [2024-07-15 22:43:20.978707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.291 qpair failed and we were unable to recover it. 00:26:57.291 [2024-07-15 22:43:20.978928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.291 [2024-07-15 22:43:20.978958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.291 qpair failed and we were unable to recover it. 00:26:57.291 [2024-07-15 22:43:20.979216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.291 [2024-07-15 22:43:20.979254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.291 qpair failed and we were unable to recover it. 00:26:57.291 [2024-07-15 22:43:20.979521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.291 [2024-07-15 22:43:20.979551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.291 qpair failed and we were unable to recover it. 00:26:57.291 [2024-07-15 22:43:20.979832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.291 [2024-07-15 22:43:20.979862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.291 qpair failed and we were unable to recover it. 
00:26:57.291 [2024-07-15 22:43:20.980143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.291 [2024-07-15 22:43:20.980173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.291 qpair failed and we were unable to recover it. 00:26:57.291 [2024-07-15 22:43:20.980498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.291 [2024-07-15 22:43:20.980529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.291 qpair failed and we were unable to recover it. 00:26:57.291 [2024-07-15 22:43:20.980692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.291 [2024-07-15 22:43:20.980721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.291 qpair failed and we were unable to recover it. 00:26:57.291 [2024-07-15 22:43:20.981025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.291 [2024-07-15 22:43:20.981054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.291 qpair failed and we were unable to recover it. 00:26:57.291 [2024-07-15 22:43:20.981361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.291 [2024-07-15 22:43:20.981393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.291 qpair failed and we were unable to recover it. 00:26:57.291 [2024-07-15 22:43:20.981687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.291 [2024-07-15 22:43:20.981716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.291 qpair failed and we were unable to recover it. 00:26:57.291 [2024-07-15 22:43:20.982025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.291 [2024-07-15 22:43:20.982055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.291 qpair failed and we were unable to recover it. 00:26:57.291 [2024-07-15 22:43:20.982364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.291 [2024-07-15 22:43:20.982395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.291 qpair failed and we were unable to recover it. 00:26:57.291 [2024-07-15 22:43:20.982703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.291 [2024-07-15 22:43:20.982732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.291 qpair failed and we were unable to recover it. 00:26:57.291 [2024-07-15 22:43:20.983039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.291 [2024-07-15 22:43:20.983069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.291 qpair failed and we were unable to recover it. 
00:26:57.291 [2024-07-15 22:43:20.983313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.291 [2024-07-15 22:43:20.983344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.291 qpair failed and we were unable to recover it. 00:26:57.291 [2024-07-15 22:43:20.983601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.291 [2024-07-15 22:43:20.983630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.291 qpair failed and we were unable to recover it. 00:26:57.291 [2024-07-15 22:43:20.983892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.291 [2024-07-15 22:43:20.983902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.291 qpair failed and we were unable to recover it. 00:26:57.291 [2024-07-15 22:43:20.984106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.291 [2024-07-15 22:43:20.984115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.291 qpair failed and we were unable to recover it. 00:26:57.291 [2024-07-15 22:43:20.984301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.291 [2024-07-15 22:43:20.984332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.291 qpair failed and we were unable to recover it. 00:26:57.291 [2024-07-15 22:43:20.984643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.291 [2024-07-15 22:43:20.984673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.291 qpair failed and we were unable to recover it. 00:26:57.291 [2024-07-15 22:43:20.984911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.291 [2024-07-15 22:43:20.984940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.291 qpair failed and we were unable to recover it. 00:26:57.291 [2024-07-15 22:43:20.985280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.291 [2024-07-15 22:43:20.985311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.291 qpair failed and we were unable to recover it. 00:26:57.291 [2024-07-15 22:43:20.985557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.291 [2024-07-15 22:43:20.985586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.292 qpair failed and we were unable to recover it. 00:26:57.292 [2024-07-15 22:43:20.985823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.292 [2024-07-15 22:43:20.985852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.292 qpair failed and we were unable to recover it. 
00:26:57.292 [2024-07-15 22:43:20.986160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.292 [2024-07-15 22:43:20.986190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.292 qpair failed and we were unable to recover it. 00:26:57.292 [2024-07-15 22:43:20.986504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.292 [2024-07-15 22:43:20.986534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.292 qpair failed and we were unable to recover it. 00:26:57.292 [2024-07-15 22:43:20.986804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.292 [2024-07-15 22:43:20.986814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.292 qpair failed and we were unable to recover it. 00:26:57.292 [2024-07-15 22:43:20.987096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.292 [2024-07-15 22:43:20.987106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.292 qpair failed and we were unable to recover it. 00:26:57.292 [2024-07-15 22:43:20.987292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.292 [2024-07-15 22:43:20.987302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.292 qpair failed and we were unable to recover it. 00:26:57.292 [2024-07-15 22:43:20.987518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.292 [2024-07-15 22:43:20.987528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.292 qpair failed and we were unable to recover it. 00:26:57.292 [2024-07-15 22:43:20.987817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.292 [2024-07-15 22:43:20.987847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.292 qpair failed and we were unable to recover it. 00:26:57.292 [2024-07-15 22:43:20.988154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.292 [2024-07-15 22:43:20.988184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.292 qpair failed and we were unable to recover it. 00:26:57.292 [2024-07-15 22:43:20.988419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.292 [2024-07-15 22:43:20.988449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.292 qpair failed and we were unable to recover it. 00:26:57.292 [2024-07-15 22:43:20.988774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.292 [2024-07-15 22:43:20.988804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.292 qpair failed and we were unable to recover it. 
00:26:57.292 [2024-07-15 22:43:20.989051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.292 [2024-07-15 22:43:20.989060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.292 qpair failed and we were unable to recover it. 00:26:57.292 [2024-07-15 22:43:20.989324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.292 [2024-07-15 22:43:20.989334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.292 qpair failed and we were unable to recover it. 00:26:57.292 [2024-07-15 22:43:20.989602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.292 [2024-07-15 22:43:20.989612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.292 qpair failed and we were unable to recover it. 00:26:57.292 [2024-07-15 22:43:20.989882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.292 [2024-07-15 22:43:20.989891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.292 qpair failed and we were unable to recover it. 00:26:57.292 [2024-07-15 22:43:20.990158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.292 [2024-07-15 22:43:20.990168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.292 qpair failed and we were unable to recover it. 00:26:57.292 [2024-07-15 22:43:20.990419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.292 [2024-07-15 22:43:20.990434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.292 qpair failed and we were unable to recover it. 00:26:57.292 [2024-07-15 22:43:20.990630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.292 [2024-07-15 22:43:20.990641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.292 qpair failed and we were unable to recover it. 00:26:57.292 [2024-07-15 22:43:20.990883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.292 [2024-07-15 22:43:20.990893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.292 qpair failed and we were unable to recover it. 00:26:57.292 [2024-07-15 22:43:20.991162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.292 [2024-07-15 22:43:20.991191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.292 qpair failed and we were unable to recover it. 00:26:57.292 [2024-07-15 22:43:20.991512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.292 [2024-07-15 22:43:20.991544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.292 qpair failed and we were unable to recover it. 
00:26:57.292 [2024-07-15 22:43:20.991867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.292 [2024-07-15 22:43:20.991896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.292 qpair failed and we were unable to recover it. 00:26:57.292 [2024-07-15 22:43:20.992143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.292 [2024-07-15 22:43:20.992172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.292 qpair failed and we were unable to recover it. 00:26:57.292 [2024-07-15 22:43:20.992544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.292 [2024-07-15 22:43:20.992575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.292 qpair failed and we were unable to recover it. 00:26:57.292 [2024-07-15 22:43:20.992796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.292 [2024-07-15 22:43:20.992826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.292 qpair failed and we were unable to recover it. 00:26:57.292 [2024-07-15 22:43:20.993132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.292 [2024-07-15 22:43:20.993162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.292 qpair failed and we were unable to recover it. 00:26:57.292 [2024-07-15 22:43:20.993423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.292 [2024-07-15 22:43:20.993454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.292 qpair failed and we were unable to recover it. 00:26:57.292 [2024-07-15 22:43:20.993714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.292 [2024-07-15 22:43:20.993744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.292 qpair failed and we were unable to recover it. 00:26:57.292 [2024-07-15 22:43:20.994078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.292 [2024-07-15 22:43:20.994087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.292 qpair failed and we were unable to recover it. 00:26:57.292 [2024-07-15 22:43:20.994281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.292 [2024-07-15 22:43:20.994292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.292 qpair failed and we were unable to recover it. 00:26:57.292 [2024-07-15 22:43:20.994554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.292 [2024-07-15 22:43:20.994564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.292 qpair failed and we were unable to recover it. 
00:26:57.292 [2024-07-15 22:43:20.994761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.292 [2024-07-15 22:43:20.994771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.292 qpair failed and we were unable to recover it. 00:26:57.292 [2024-07-15 22:43:20.994978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.292 [2024-07-15 22:43:20.994988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.293 qpair failed and we were unable to recover it. 00:26:57.293 [2024-07-15 22:43:20.995237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.293 [2024-07-15 22:43:20.995247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.293 qpair failed and we were unable to recover it. 00:26:57.293 [2024-07-15 22:43:20.995491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.293 [2024-07-15 22:43:20.995501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.293 qpair failed and we were unable to recover it. 00:26:57.293 [2024-07-15 22:43:20.995633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.293 [2024-07-15 22:43:20.995644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.293 qpair failed and we were unable to recover it. 00:26:57.293 [2024-07-15 22:43:20.995773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.293 [2024-07-15 22:43:20.995783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.293 qpair failed and we were unable to recover it. 00:26:57.293 [2024-07-15 22:43:20.996029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.293 [2024-07-15 22:43:20.996039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.293 qpair failed and we were unable to recover it. 00:26:57.293 [2024-07-15 22:43:20.996315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.293 [2024-07-15 22:43:20.996336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.293 qpair failed and we were unable to recover it. 00:26:57.293 [2024-07-15 22:43:20.996464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.293 [2024-07-15 22:43:20.996474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.293 qpair failed and we were unable to recover it. 00:26:57.293 [2024-07-15 22:43:20.996749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.293 [2024-07-15 22:43:20.996759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.293 qpair failed and we were unable to recover it. 
00:26:57.293 [2024-07-15 22:43:20.996950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.293 [2024-07-15 22:43:20.996980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.293 qpair failed and we were unable to recover it. 00:26:57.293 [2024-07-15 22:43:20.997245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.293 [2024-07-15 22:43:20.997276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.293 qpair failed and we were unable to recover it. 00:26:57.293 [2024-07-15 22:43:20.997515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.293 [2024-07-15 22:43:20.997545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.293 qpair failed and we were unable to recover it. 00:26:57.293 [2024-07-15 22:43:20.997778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.293 [2024-07-15 22:43:20.997788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.293 qpair failed and we were unable to recover it. 00:26:57.293 [2024-07-15 22:43:20.998028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.293 [2024-07-15 22:43:20.998038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.293 qpair failed and we were unable to recover it. 00:26:57.293 [2024-07-15 22:43:20.998168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.293 [2024-07-15 22:43:20.998178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.293 qpair failed and we were unable to recover it. 00:26:57.293 [2024-07-15 22:43:20.998403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.293 [2024-07-15 22:43:20.998433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.293 qpair failed and we were unable to recover it. 00:26:57.293 [2024-07-15 22:43:20.998679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.293 [2024-07-15 22:43:20.998709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.293 qpair failed and we were unable to recover it. 00:26:57.293 [2024-07-15 22:43:20.998992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.293 [2024-07-15 22:43:20.999022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.293 qpair failed and we were unable to recover it. 00:26:57.293 [2024-07-15 22:43:20.999336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.293 [2024-07-15 22:43:20.999346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.293 qpair failed and we were unable to recover it. 
00:26:57.293 [2024-07-15 22:43:20.999534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.293 [2024-07-15 22:43:20.999543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.293 qpair failed and we were unable to recover it. 00:26:57.293 [2024-07-15 22:43:20.999806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.293 [2024-07-15 22:43:20.999816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.293 qpair failed and we were unable to recover it. 00:26:57.293 [2024-07-15 22:43:21.000072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.293 [2024-07-15 22:43:21.000082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.293 qpair failed and we were unable to recover it. 00:26:57.293 [2024-07-15 22:43:21.000322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.293 [2024-07-15 22:43:21.000332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.293 qpair failed and we were unable to recover it. 00:26:57.293 [2024-07-15 22:43:21.000595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.293 [2024-07-15 22:43:21.000605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.293 qpair failed and we were unable to recover it. 00:26:57.293 [2024-07-15 22:43:21.000873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.293 [2024-07-15 22:43:21.000885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.293 qpair failed and we were unable to recover it. 00:26:57.293 [2024-07-15 22:43:21.001078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.293 [2024-07-15 22:43:21.001088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.293 qpair failed and we were unable to recover it. 00:26:57.293 [2024-07-15 22:43:21.001284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.293 [2024-07-15 22:43:21.001294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.293 qpair failed and we were unable to recover it. 00:26:57.293 [2024-07-15 22:43:21.001472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.293 [2024-07-15 22:43:21.001482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.293 qpair failed and we were unable to recover it. 00:26:57.293 [2024-07-15 22:43:21.001751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.293 [2024-07-15 22:43:21.001761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.293 qpair failed and we were unable to recover it. 
00:26:57.293 [2024-07-15 22:43:21.001940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.293 [2024-07-15 22:43:21.001950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.293 qpair failed and we were unable to recover it. 00:26:57.293 [2024-07-15 22:43:21.002162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.293 [2024-07-15 22:43:21.002191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.293 qpair failed and we were unable to recover it. 00:26:57.293 [2024-07-15 22:43:21.002462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.293 [2024-07-15 22:43:21.002492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.293 qpair failed and we were unable to recover it. 00:26:57.293 [2024-07-15 22:43:21.002823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.293 [2024-07-15 22:43:21.002853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.293 qpair failed and we were unable to recover it. 00:26:57.293 [2024-07-15 22:43:21.003024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.293 [2024-07-15 22:43:21.003053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.293 qpair failed and we were unable to recover it. 00:26:57.293 [2024-07-15 22:43:21.003309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.293 [2024-07-15 22:43:21.003340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.293 qpair failed and we were unable to recover it. 00:26:57.293 [2024-07-15 22:43:21.003571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.293 [2024-07-15 22:43:21.003601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.293 qpair failed and we were unable to recover it. 00:26:57.293 [2024-07-15 22:43:21.003855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.293 [2024-07-15 22:43:21.003884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.293 qpair failed and we were unable to recover it. 00:26:57.293 [2024-07-15 22:43:21.004159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.293 [2024-07-15 22:43:21.004169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.293 qpair failed and we were unable to recover it. 00:26:57.293 [2024-07-15 22:43:21.004310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.293 [2024-07-15 22:43:21.004320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.293 qpair failed and we were unable to recover it. 
00:26:57.293 [2024-07-15 22:43:21.004583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.293 [2024-07-15 22:43:21.004613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.293 qpair failed and we were unable to recover it. 00:26:57.293 [2024-07-15 22:43:21.004913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.293 [2024-07-15 22:43:21.004943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.293 qpair failed and we were unable to recover it. 00:26:57.293 [2024-07-15 22:43:21.005257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.294 [2024-07-15 22:43:21.005289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.294 qpair failed and we were unable to recover it. 00:26:57.294 [2024-07-15 22:43:21.005547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.294 [2024-07-15 22:43:21.005576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.294 qpair failed and we were unable to recover it. 00:26:57.294 [2024-07-15 22:43:21.005843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.294 [2024-07-15 22:43:21.005853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.294 qpair failed and we were unable to recover it. 00:26:57.294 [2024-07-15 22:43:21.006113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.294 [2024-07-15 22:43:21.006123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.294 qpair failed and we were unable to recover it. 00:26:57.294 [2024-07-15 22:43:21.006408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.294 [2024-07-15 22:43:21.006418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.294 qpair failed and we were unable to recover it. 00:26:57.294 [2024-07-15 22:43:21.006664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.294 [2024-07-15 22:43:21.006674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.294 qpair failed and we were unable to recover it. 00:26:57.294 [2024-07-15 22:43:21.006800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.294 [2024-07-15 22:43:21.006810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.294 qpair failed and we were unable to recover it. 00:26:57.294 [2024-07-15 22:43:21.007077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.294 [2024-07-15 22:43:21.007087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.294 qpair failed and we were unable to recover it. 
00:26:57.294 [2024-07-15 22:43:21.007335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.294 [2024-07-15 22:43:21.007365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.294 qpair failed and we were unable to recover it. 00:26:57.294 [2024-07-15 22:43:21.007626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.294 [2024-07-15 22:43:21.007655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.294 qpair failed and we were unable to recover it. 00:26:57.294 [2024-07-15 22:43:21.007958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.294 [2024-07-15 22:43:21.008025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 00:26:57.294 qpair failed and we were unable to recover it. 00:26:57.294 [2024-07-15 22:43:21.008333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.294 [2024-07-15 22:43:21.008369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 00:26:57.294 qpair failed and we were unable to recover it. 00:26:57.294 [2024-07-15 22:43:21.008597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.294 [2024-07-15 22:43:21.008627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 00:26:57.294 qpair failed and we were unable to recover it. 00:26:57.294 [2024-07-15 22:43:21.008892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.294 [2024-07-15 22:43:21.008923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 00:26:57.294 qpair failed and we were unable to recover it. 00:26:57.294 [2024-07-15 22:43:21.009268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.294 [2024-07-15 22:43:21.009300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 00:26:57.294 qpair failed and we were unable to recover it. 00:26:57.294 [2024-07-15 22:43:21.009518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.294 [2024-07-15 22:43:21.009547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 00:26:57.294 qpair failed and we were unable to recover it. 00:26:57.294 [2024-07-15 22:43:21.009781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.294 [2024-07-15 22:43:21.009811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 00:26:57.294 qpair failed and we were unable to recover it. 00:26:57.294 [2024-07-15 22:43:21.010121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.294 [2024-07-15 22:43:21.010151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 00:26:57.294 qpair failed and we were unable to recover it. 
00:26:57.294 [2024-07-15 22:43:21.010458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.294 [2024-07-15 22:43:21.010489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 00:26:57.294 qpair failed and we were unable to recover it. 00:26:57.294 [2024-07-15 22:43:21.010741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.294 [2024-07-15 22:43:21.010770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 00:26:57.294 qpair failed and we were unable to recover it. 00:26:57.294 [2024-07-15 22:43:21.011013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.294 [2024-07-15 22:43:21.011027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 00:26:57.294 qpair failed and we were unable to recover it. 00:26:57.294 [2024-07-15 22:43:21.011311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.294 [2024-07-15 22:43:21.011325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 00:26:57.294 qpair failed and we were unable to recover it. 00:26:57.294 [2024-07-15 22:43:21.011574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.294 [2024-07-15 22:43:21.011588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 00:26:57.294 qpair failed and we were unable to recover it. 00:26:57.294 [2024-07-15 22:43:21.011886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.294 [2024-07-15 22:43:21.011927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 00:26:57.294 qpair failed and we were unable to recover it. 00:26:57.294 [2024-07-15 22:43:21.012261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.294 [2024-07-15 22:43:21.012293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 00:26:57.294 qpair failed and we were unable to recover it. 00:26:57.294 [2024-07-15 22:43:21.012472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.294 [2024-07-15 22:43:21.012502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 00:26:57.294 qpair failed and we were unable to recover it. 00:26:57.294 [2024-07-15 22:43:21.012680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.294 [2024-07-15 22:43:21.012710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 00:26:57.294 qpair failed and we were unable to recover it. 00:26:57.294 [2024-07-15 22:43:21.012938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.294 [2024-07-15 22:43:21.012968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 00:26:57.294 qpair failed and we were unable to recover it. 
00:26:57.294 [2024-07-15 22:43:21.013184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.294 [2024-07-15 22:43:21.013198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:57.294 qpair failed and we were unable to recover it.
[... the same three-line sequence repeats for tqpair=0x7f4280000b90 from 22:43:21.013475 through 22:43:21.016196 ...]
00:26:57.294 [2024-07-15 22:43:21.016585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.294 [2024-07-15 22:43:21.016653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420
00:26:57.294 qpair failed and we were unable to recover it.
00:26:57.294 [2024-07-15 22:43:21.016995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.294 [2024-07-15 22:43:21.017029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:57.295 qpair failed and we were unable to recover it.
[... the same three-line sequence repeats for tqpair=0x7f4288000b90 from 22:43:21.017323 through 22:43:21.019811 ...]
00:26:57.295 [2024-07-15 22:43:21.020199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.295 [2024-07-15 22:43:21.020280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420
00:26:57.295 qpair failed and we were unable to recover it.
00:26:57.295 [2024-07-15 22:43:21.020627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.295 [2024-07-15 22:43:21.020660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:57.295 qpair failed and we were unable to recover it.
00:26:57.295 [2024-07-15 22:43:21.020946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.295 [2024-07-15 22:43:21.020963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420
00:26:57.295 qpair failed and we were unable to recover it.
00:26:57.295 [2024-07-15 22:43:21.021245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.295 [2024-07-15 22:43:21.021256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:57.295 qpair failed and we were unable to recover it.
[... the same three-line sequence repeats for tqpair=0x7f4288000b90 on every subsequent attempt from 22:43:21.021501 through 22:43:21.067369 ...]
00:26:57.299 [2024-07-15 22:43:21.067565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.300 [2024-07-15 22:43:21.067575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:57.300 qpair failed and we were unable to recover it.
00:26:57.300 [2024-07-15 22:43:21.067817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.300 [2024-07-15 22:43:21.067827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.300 qpair failed and we were unable to recover it. 00:26:57.300 [2024-07-15 22:43:21.068091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.300 [2024-07-15 22:43:21.068101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.300 qpair failed and we were unable to recover it. 00:26:57.300 [2024-07-15 22:43:21.068277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.300 [2024-07-15 22:43:21.068287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.300 qpair failed and we were unable to recover it. 00:26:57.300 [2024-07-15 22:43:21.068579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.300 [2024-07-15 22:43:21.068589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.300 qpair failed and we were unable to recover it. 00:26:57.300 [2024-07-15 22:43:21.068735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.300 [2024-07-15 22:43:21.068745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.300 qpair failed and we were unable to recover it. 00:26:57.300 [2024-07-15 22:43:21.068939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.300 [2024-07-15 22:43:21.068949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.300 qpair failed and we were unable to recover it. 00:26:57.300 [2024-07-15 22:43:21.069141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.300 [2024-07-15 22:43:21.069151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.300 qpair failed and we were unable to recover it. 00:26:57.300 [2024-07-15 22:43:21.069284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.300 [2024-07-15 22:43:21.069295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.300 qpair failed and we were unable to recover it. 00:26:57.300 [2024-07-15 22:43:21.069564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.300 [2024-07-15 22:43:21.069574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.300 qpair failed and we were unable to recover it. 00:26:57.300 [2024-07-15 22:43:21.069830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.300 [2024-07-15 22:43:21.069840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.300 qpair failed and we were unable to recover it. 
00:26:57.300 [2024-07-15 22:43:21.070032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.300 [2024-07-15 22:43:21.070042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.300 qpair failed and we were unable to recover it. 00:26:57.300 [2024-07-15 22:43:21.070308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.300 [2024-07-15 22:43:21.070318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.300 qpair failed and we were unable to recover it. 00:26:57.300 [2024-07-15 22:43:21.070569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.300 [2024-07-15 22:43:21.070581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.300 qpair failed and we were unable to recover it. 00:26:57.300 [2024-07-15 22:43:21.070826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.300 [2024-07-15 22:43:21.070836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.300 qpair failed and we were unable to recover it. 00:26:57.300 [2024-07-15 22:43:21.071100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.300 [2024-07-15 22:43:21.071109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.300 qpair failed and we were unable to recover it. 00:26:57.300 [2024-07-15 22:43:21.071364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.300 [2024-07-15 22:43:21.071374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.300 qpair failed and we were unable to recover it. 00:26:57.300 [2024-07-15 22:43:21.071575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.300 [2024-07-15 22:43:21.071585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.300 qpair failed and we were unable to recover it. 00:26:57.300 [2024-07-15 22:43:21.071843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.300 [2024-07-15 22:43:21.071853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.300 qpair failed and we were unable to recover it. 00:26:57.300 [2024-07-15 22:43:21.071976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.300 [2024-07-15 22:43:21.071986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.300 qpair failed and we were unable to recover it. 00:26:57.300 [2024-07-15 22:43:21.072176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.300 [2024-07-15 22:43:21.072186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.300 qpair failed and we were unable to recover it. 
00:26:57.300 [2024-07-15 22:43:21.072365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.300 [2024-07-15 22:43:21.072376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.300 qpair failed and we were unable to recover it. 00:26:57.300 [2024-07-15 22:43:21.072551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.300 [2024-07-15 22:43:21.072561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.300 qpair failed and we were unable to recover it. 00:26:57.300 [2024-07-15 22:43:21.072736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.300 [2024-07-15 22:43:21.072746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.300 qpair failed and we were unable to recover it. 00:26:57.300 [2024-07-15 22:43:21.072959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.300 [2024-07-15 22:43:21.072969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.300 qpair failed and we were unable to recover it. 00:26:57.300 [2024-07-15 22:43:21.073143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.300 [2024-07-15 22:43:21.073153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.300 qpair failed and we were unable to recover it. 00:26:57.300 [2024-07-15 22:43:21.073413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.300 [2024-07-15 22:43:21.073423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.300 qpair failed and we were unable to recover it. 00:26:57.300 [2024-07-15 22:43:21.073559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.300 [2024-07-15 22:43:21.073570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.300 qpair failed and we were unable to recover it. 00:26:57.300 [2024-07-15 22:43:21.073817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.300 [2024-07-15 22:43:21.073827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.300 qpair failed and we were unable to recover it. 00:26:57.300 [2024-07-15 22:43:21.074018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.300 [2024-07-15 22:43:21.074027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.300 qpair failed and we were unable to recover it. 00:26:57.300 [2024-07-15 22:43:21.074342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.300 [2024-07-15 22:43:21.074353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.300 qpair failed and we were unable to recover it. 
00:26:57.300 [2024-07-15 22:43:21.074609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.300 [2024-07-15 22:43:21.074619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.300 qpair failed and we were unable to recover it. 00:26:57.300 [2024-07-15 22:43:21.074749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.300 [2024-07-15 22:43:21.074759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.300 qpair failed and we were unable to recover it. 00:26:57.300 [2024-07-15 22:43:21.074945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.300 [2024-07-15 22:43:21.074955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.300 qpair failed and we were unable to recover it. 00:26:57.300 [2024-07-15 22:43:21.075145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.300 [2024-07-15 22:43:21.075155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.300 qpair failed and we were unable to recover it. 00:26:57.300 [2024-07-15 22:43:21.075419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.300 [2024-07-15 22:43:21.075429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.300 qpair failed and we were unable to recover it. 00:26:57.300 [2024-07-15 22:43:21.075646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.300 [2024-07-15 22:43:21.075656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.300 qpair failed and we were unable to recover it. 00:26:57.300 [2024-07-15 22:43:21.075934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.300 [2024-07-15 22:43:21.075944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.300 qpair failed and we were unable to recover it. 00:26:57.300 [2024-07-15 22:43:21.076157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.300 [2024-07-15 22:43:21.076167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.300 qpair failed and we were unable to recover it. 00:26:57.300 [2024-07-15 22:43:21.076344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.300 [2024-07-15 22:43:21.076355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.300 qpair failed and we were unable to recover it. 00:26:57.300 [2024-07-15 22:43:21.076566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.300 [2024-07-15 22:43:21.076575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.300 qpair failed and we were unable to recover it. 
00:26:57.301 [2024-07-15 22:43:21.076773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.301 [2024-07-15 22:43:21.076783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.301 qpair failed and we were unable to recover it. 00:26:57.301 [2024-07-15 22:43:21.077073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.301 [2024-07-15 22:43:21.077083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.301 qpair failed and we were unable to recover it. 00:26:57.301 [2024-07-15 22:43:21.077370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.301 [2024-07-15 22:43:21.077380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.301 qpair failed and we were unable to recover it. 00:26:57.301 [2024-07-15 22:43:21.077567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.301 [2024-07-15 22:43:21.077577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.301 qpair failed and we were unable to recover it. 00:26:57.301 [2024-07-15 22:43:21.077783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.301 [2024-07-15 22:43:21.077793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.301 qpair failed and we were unable to recover it. 00:26:57.301 [2024-07-15 22:43:21.078044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.301 [2024-07-15 22:43:21.078054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.301 qpair failed and we were unable to recover it. 00:26:57.301 [2024-07-15 22:43:21.078257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.301 [2024-07-15 22:43:21.078287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.301 qpair failed and we were unable to recover it. 00:26:57.301 [2024-07-15 22:43:21.078569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.301 [2024-07-15 22:43:21.078599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.301 qpair failed and we were unable to recover it. 00:26:57.301 [2024-07-15 22:43:21.078898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.301 [2024-07-15 22:43:21.078927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.301 qpair failed and we were unable to recover it. 00:26:57.301 [2024-07-15 22:43:21.079260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.301 [2024-07-15 22:43:21.079292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.301 qpair failed and we were unable to recover it. 
00:26:57.301 [2024-07-15 22:43:21.079573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.301 [2024-07-15 22:43:21.079583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.301 qpair failed and we were unable to recover it. 00:26:57.301 [2024-07-15 22:43:21.079828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.301 [2024-07-15 22:43:21.079838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.301 qpair failed and we were unable to recover it. 00:26:57.301 [2024-07-15 22:43:21.080129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.301 [2024-07-15 22:43:21.080141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.301 qpair failed and we were unable to recover it. 00:26:57.301 [2024-07-15 22:43:21.080329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.301 [2024-07-15 22:43:21.080340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.301 qpair failed and we were unable to recover it. 00:26:57.301 [2024-07-15 22:43:21.080512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.301 [2024-07-15 22:43:21.080522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.301 qpair failed and we were unable to recover it. 00:26:57.301 [2024-07-15 22:43:21.080701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.301 [2024-07-15 22:43:21.080711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.301 qpair failed and we were unable to recover it. 00:26:57.301 [2024-07-15 22:43:21.080844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.301 [2024-07-15 22:43:21.080854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.301 qpair failed and we were unable to recover it. 00:26:57.301 [2024-07-15 22:43:21.081174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.301 [2024-07-15 22:43:21.081203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.301 qpair failed and we were unable to recover it. 00:26:57.301 [2024-07-15 22:43:21.081516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.301 [2024-07-15 22:43:21.081547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.301 qpair failed and we were unable to recover it. 00:26:57.301 [2024-07-15 22:43:21.081779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.301 [2024-07-15 22:43:21.081809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.301 qpair failed and we were unable to recover it. 
00:26:57.301 [2024-07-15 22:43:21.082139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.301 [2024-07-15 22:43:21.082149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.301 qpair failed and we were unable to recover it. 00:26:57.301 [2024-07-15 22:43:21.082333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.301 [2024-07-15 22:43:21.082343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.301 qpair failed and we were unable to recover it. 00:26:57.301 [2024-07-15 22:43:21.082611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.301 [2024-07-15 22:43:21.082641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.301 qpair failed and we were unable to recover it. 00:26:57.301 [2024-07-15 22:43:21.082858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.301 [2024-07-15 22:43:21.082887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.301 qpair failed and we were unable to recover it. 00:26:57.301 [2024-07-15 22:43:21.083192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.301 [2024-07-15 22:43:21.083221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.301 qpair failed and we were unable to recover it. 00:26:57.301 [2024-07-15 22:43:21.083538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.301 [2024-07-15 22:43:21.083568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.301 qpair failed and we were unable to recover it. 00:26:57.301 [2024-07-15 22:43:21.083885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.301 [2024-07-15 22:43:21.083895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.301 qpair failed and we were unable to recover it. 00:26:57.301 [2024-07-15 22:43:21.084181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.301 [2024-07-15 22:43:21.084191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.301 qpair failed and we were unable to recover it. 00:26:57.301 [2024-07-15 22:43:21.084375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.301 [2024-07-15 22:43:21.084386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.301 qpair failed and we were unable to recover it. 00:26:57.301 [2024-07-15 22:43:21.084642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.301 [2024-07-15 22:43:21.084652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.301 qpair failed and we were unable to recover it. 
00:26:57.301 [2024-07-15 22:43:21.084920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.301 [2024-07-15 22:43:21.084929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.301 qpair failed and we were unable to recover it. 00:26:57.301 [2024-07-15 22:43:21.085200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.301 [2024-07-15 22:43:21.085210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.301 qpair failed and we were unable to recover it. 00:26:57.301 [2024-07-15 22:43:21.085484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.301 [2024-07-15 22:43:21.085494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.301 qpair failed and we were unable to recover it. 00:26:57.301 [2024-07-15 22:43:21.085784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.301 [2024-07-15 22:43:21.085794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.301 qpair failed and we were unable to recover it. 00:26:57.301 [2024-07-15 22:43:21.085924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.302 [2024-07-15 22:43:21.085934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.302 qpair failed and we were unable to recover it. 00:26:57.302 [2024-07-15 22:43:21.086198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.302 [2024-07-15 22:43:21.086208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.302 qpair failed and we were unable to recover it. 00:26:57.302 [2024-07-15 22:43:21.086498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.302 [2024-07-15 22:43:21.086508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.302 qpair failed and we were unable to recover it. 00:26:57.302 [2024-07-15 22:43:21.086769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.302 [2024-07-15 22:43:21.086780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.302 qpair failed and we were unable to recover it. 00:26:57.302 [2024-07-15 22:43:21.086891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.302 [2024-07-15 22:43:21.086901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.302 qpair failed and we were unable to recover it. 00:26:57.302 [2024-07-15 22:43:21.087107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.302 [2024-07-15 22:43:21.087136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.302 qpair failed and we were unable to recover it. 
00:26:57.302 [2024-07-15 22:43:21.087420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.302 [2024-07-15 22:43:21.087451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.302 qpair failed and we were unable to recover it. 00:26:57.302 [2024-07-15 22:43:21.087753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.302 [2024-07-15 22:43:21.087783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.302 qpair failed and we were unable to recover it. 00:26:57.302 [2024-07-15 22:43:21.088095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.302 [2024-07-15 22:43:21.088125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.302 qpair failed and we were unable to recover it. 00:26:57.302 [2024-07-15 22:43:21.088436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.302 [2024-07-15 22:43:21.088446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.302 qpair failed and we were unable to recover it. 00:26:57.302 [2024-07-15 22:43:21.088602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.302 [2024-07-15 22:43:21.088613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.302 qpair failed and we were unable to recover it. 00:26:57.302 [2024-07-15 22:43:21.088872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.302 [2024-07-15 22:43:21.088882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.302 qpair failed and we were unable to recover it. 00:26:57.302 [2024-07-15 22:43:21.089058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.302 [2024-07-15 22:43:21.089068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.302 qpair failed and we were unable to recover it. 00:26:57.302 [2024-07-15 22:43:21.089259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.302 [2024-07-15 22:43:21.089270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.302 qpair failed and we were unable to recover it. 00:26:57.302 [2024-07-15 22:43:21.089540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.302 [2024-07-15 22:43:21.089569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.302 qpair failed and we were unable to recover it. 00:26:57.302 [2024-07-15 22:43:21.089818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.302 [2024-07-15 22:43:21.089848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.302 qpair failed and we were unable to recover it. 
00:26:57.302 [2024-07-15 22:43:21.090100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.302 [2024-07-15 22:43:21.090110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.302 qpair failed and we were unable to recover it. 00:26:57.302 [2024-07-15 22:43:21.090287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.302 [2024-07-15 22:43:21.090297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.302 qpair failed and we were unable to recover it. 00:26:57.302 [2024-07-15 22:43:21.090543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.302 [2024-07-15 22:43:21.090557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.302 qpair failed and we were unable to recover it. 00:26:57.302 [2024-07-15 22:43:21.090825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.302 [2024-07-15 22:43:21.090835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.302 qpair failed and we were unable to recover it. 00:26:57.302 [2024-07-15 22:43:21.091097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.302 [2024-07-15 22:43:21.091106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.302 qpair failed and we were unable to recover it. 00:26:57.302 [2024-07-15 22:43:21.091354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.302 [2024-07-15 22:43:21.091364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.302 qpair failed and we were unable to recover it. 00:26:57.302 [2024-07-15 22:43:21.091613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.302 [2024-07-15 22:43:21.091623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.302 qpair failed and we were unable to recover it. 00:26:57.302 [2024-07-15 22:43:21.091907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.302 [2024-07-15 22:43:21.091917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.302 qpair failed and we were unable to recover it. 00:26:57.302 [2024-07-15 22:43:21.092123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.302 [2024-07-15 22:43:21.092133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.302 qpair failed and we were unable to recover it. 00:26:57.302 [2024-07-15 22:43:21.092321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.302 [2024-07-15 22:43:21.092332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.302 qpair failed and we were unable to recover it. 
00:26:57.302 [2024-07-15 22:43:21.092610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.302 [2024-07-15 22:43:21.092620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.302 qpair failed and we were unable to recover it. 00:26:57.302 [2024-07-15 22:43:21.092913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.302 [2024-07-15 22:43:21.092923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.302 qpair failed and we were unable to recover it. 00:26:57.302 [2024-07-15 22:43:21.093126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.302 [2024-07-15 22:43:21.093136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.302 qpair failed and we were unable to recover it. 00:26:57.302 [2024-07-15 22:43:21.093378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.302 [2024-07-15 22:43:21.093388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.302 qpair failed and we were unable to recover it. 00:26:57.302 [2024-07-15 22:43:21.093644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.302 [2024-07-15 22:43:21.093653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.302 qpair failed and we were unable to recover it. 00:26:57.302 [2024-07-15 22:43:21.093916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.302 [2024-07-15 22:43:21.093926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.302 qpair failed and we were unable to recover it. 00:26:57.302 [2024-07-15 22:43:21.094106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.302 [2024-07-15 22:43:21.094116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.302 qpair failed and we were unable to recover it. 00:26:57.302 [2024-07-15 22:43:21.094398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.302 [2024-07-15 22:43:21.094408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.302 qpair failed and we were unable to recover it. 00:26:57.302 [2024-07-15 22:43:21.094603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.302 [2024-07-15 22:43:21.094613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.302 qpair failed and we were unable to recover it. 00:26:57.302 [2024-07-15 22:43:21.094856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.302 [2024-07-15 22:43:21.094866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.302 qpair failed and we were unable to recover it. 
00:26:57.302 [2024-07-15 22:43:21.095038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.302 [2024-07-15 22:43:21.095048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.302 qpair failed and we were unable to recover it. 00:26:57.302 [2024-07-15 22:43:21.095274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.302 [2024-07-15 22:43:21.095284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.302 qpair failed and we were unable to recover it. 00:26:57.302 [2024-07-15 22:43:21.095531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.302 [2024-07-15 22:43:21.095540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.302 qpair failed and we were unable to recover it. 00:26:57.302 [2024-07-15 22:43:21.095804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.302 [2024-07-15 22:43:21.095815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.302 qpair failed and we were unable to recover it. 00:26:57.302 [2024-07-15 22:43:21.096064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.302 [2024-07-15 22:43:21.096074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.303 qpair failed and we were unable to recover it. 00:26:57.303 [2024-07-15 22:43:21.096288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.303 [2024-07-15 22:43:21.096298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.303 qpair failed and we were unable to recover it. 00:26:57.303 [2024-07-15 22:43:21.096560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.303 [2024-07-15 22:43:21.096571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.303 qpair failed and we were unable to recover it. 00:26:57.303 [2024-07-15 22:43:21.096791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.303 [2024-07-15 22:43:21.096802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.303 qpair failed and we were unable to recover it. 00:26:57.303 [2024-07-15 22:43:21.097056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.303 [2024-07-15 22:43:21.097066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.303 qpair failed and we were unable to recover it. 00:26:57.303 [2024-07-15 22:43:21.097301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.303 [2024-07-15 22:43:21.097325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:57.303 qpair failed and we were unable to recover it. 
00:26:57.303 [2024-07-15 22:43:21.097591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.303 [2024-07-15 22:43:21.097605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:57.303 qpair failed and we were unable to recover it. 00:26:57.303 [2024-07-15 22:43:21.097861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.303 [2024-07-15 22:43:21.097875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:57.303 qpair failed and we were unable to recover it. 00:26:57.303 [2024-07-15 22:43:21.098013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.303 [2024-07-15 22:43:21.098027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:57.303 qpair failed and we were unable to recover it. 00:26:57.303 [2024-07-15 22:43:21.098183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.303 [2024-07-15 22:43:21.098197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:57.303 qpair failed and we were unable to recover it. 00:26:57.303 [2024-07-15 22:43:21.098417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.303 [2024-07-15 22:43:21.098432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:57.303 qpair failed and we were unable to recover it. 00:26:57.303 [2024-07-15 22:43:21.098707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.303 [2024-07-15 22:43:21.098720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:57.303 qpair failed and we were unable to recover it. 00:26:57.303 [2024-07-15 22:43:21.098918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.303 [2024-07-15 22:43:21.098932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:57.303 qpair failed and we were unable to recover it. 00:26:57.303 [2024-07-15 22:43:21.099182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.303 [2024-07-15 22:43:21.099195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:57.303 qpair failed and we were unable to recover it. 00:26:57.303 [2024-07-15 22:43:21.099418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.303 [2024-07-15 22:43:21.099432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:57.303 qpair failed and we were unable to recover it. 00:26:57.303 [2024-07-15 22:43:21.099577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.303 [2024-07-15 22:43:21.099591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:57.303 qpair failed and we were unable to recover it. 
00:26:57.303 [2024-07-15 22:43:21.099862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.303 [2024-07-15 22:43:21.099876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:57.303 qpair failed and we were unable to recover it. 00:26:57.303 [2024-07-15 22:43:21.100093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.303 [2024-07-15 22:43:21.100107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:57.303 qpair failed and we were unable to recover it. 00:26:57.303 [2024-07-15 22:43:21.100336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.303 [2024-07-15 22:43:21.100350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:57.303 qpair failed and we were unable to recover it. 00:26:57.303 [2024-07-15 22:43:21.100625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.303 [2024-07-15 22:43:21.100639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:57.303 qpair failed and we were unable to recover it. 00:26:57.303 [2024-07-15 22:43:21.100829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.303 [2024-07-15 22:43:21.100843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:57.303 qpair failed and we were unable to recover it. 00:26:57.303 [2024-07-15 22:43:21.101039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.303 [2024-07-15 22:43:21.101053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:57.303 qpair failed and we were unable to recover it. 00:26:57.303 [2024-07-15 22:43:21.101173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.303 [2024-07-15 22:43:21.101188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:57.303 qpair failed and we were unable to recover it. 00:26:57.303 [2024-07-15 22:43:21.101436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.303 [2024-07-15 22:43:21.101450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:57.303 qpair failed and we were unable to recover it. 00:26:57.303 [2024-07-15 22:43:21.101601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.303 [2024-07-15 22:43:21.101615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:57.303 qpair failed and we were unable to recover it. 00:26:57.303 [2024-07-15 22:43:21.101798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.303 [2024-07-15 22:43:21.101811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:57.303 qpair failed and we were unable to recover it. 
00:26:57.303 [2024-07-15 22:43:21.102033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.303 [2024-07-15 22:43:21.102063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:57.303 qpair failed and we were unable to recover it. 00:26:57.303 [2024-07-15 22:43:21.102359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.303 [2024-07-15 22:43:21.102390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:57.303 qpair failed and we were unable to recover it. 00:26:57.303 [2024-07-15 22:43:21.102629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.303 [2024-07-15 22:43:21.102658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:57.303 qpair failed and we were unable to recover it. 00:26:57.303 [2024-07-15 22:43:21.102917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.303 [2024-07-15 22:43:21.102948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:57.303 qpair failed and we were unable to recover it. 00:26:57.303 [2024-07-15 22:43:21.103172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.303 [2024-07-15 22:43:21.103185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:57.303 qpair failed and we were unable to recover it. 00:26:57.303 [2024-07-15 22:43:21.103321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.303 [2024-07-15 22:43:21.103335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:57.303 qpair failed and we were unable to recover it. 00:26:57.303 [2024-07-15 22:43:21.103543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.303 [2024-07-15 22:43:21.103556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.303 qpair failed and we were unable to recover it. 00:26:57.303 [2024-07-15 22:43:21.103687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.303 [2024-07-15 22:43:21.103697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.303 qpair failed and we were unable to recover it. 00:26:57.303 [2024-07-15 22:43:21.103837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.303 [2024-07-15 22:43:21.103847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.303 qpair failed and we were unable to recover it. 00:26:57.303 [2024-07-15 22:43:21.104029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.303 [2024-07-15 22:43:21.104038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.303 qpair failed and we were unable to recover it. 
00:26:57.303 [2024-07-15 22:43:21.104306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.303 [2024-07-15 22:43:21.104338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.303 qpair failed and we were unable to recover it. 00:26:57.303 [2024-07-15 22:43:21.104623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.303 [2024-07-15 22:43:21.104653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.303 qpair failed and we were unable to recover it. 00:26:57.303 [2024-07-15 22:43:21.104876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.303 [2024-07-15 22:43:21.104905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.303 qpair failed and we were unable to recover it. 00:26:57.303 [2024-07-15 22:43:21.105149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.303 [2024-07-15 22:43:21.105179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.303 qpair failed and we were unable to recover it. 00:26:57.303 [2024-07-15 22:43:21.105471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.303 [2024-07-15 22:43:21.105482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.303 qpair failed and we were unable to recover it. 00:26:57.303 [2024-07-15 22:43:21.105668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.304 [2024-07-15 22:43:21.105677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.304 qpair failed and we were unable to recover it. 00:26:57.304 [2024-07-15 22:43:21.105851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.304 [2024-07-15 22:43:21.105861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.304 qpair failed and we were unable to recover it. 00:26:57.304 [2024-07-15 22:43:21.106049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.304 [2024-07-15 22:43:21.106079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.304 qpair failed and we were unable to recover it. 00:26:57.304 [2024-07-15 22:43:21.106326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.304 [2024-07-15 22:43:21.106356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.304 qpair failed and we were unable to recover it. 00:26:57.304 [2024-07-15 22:43:21.106641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.304 [2024-07-15 22:43:21.106671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.304 qpair failed and we were unable to recover it. 
00:26:57.304 [2024-07-15 22:43:21.106882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.304 [2024-07-15 22:43:21.106912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.304 qpair failed and we were unable to recover it. 00:26:57.304 [2024-07-15 22:43:21.107210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.304 [2024-07-15 22:43:21.107220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.304 qpair failed and we were unable to recover it. 00:26:57.304 [2024-07-15 22:43:21.107415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.304 [2024-07-15 22:43:21.107425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.304 qpair failed and we were unable to recover it. 00:26:57.304 [2024-07-15 22:43:21.107664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.304 [2024-07-15 22:43:21.107674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.304 qpair failed and we were unable to recover it. 00:26:57.304 [2024-07-15 22:43:21.107847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.304 [2024-07-15 22:43:21.107857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.304 qpair failed and we were unable to recover it. 00:26:57.304 [2024-07-15 22:43:21.108055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.304 [2024-07-15 22:43:21.108065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.304 qpair failed and we were unable to recover it. 00:26:57.304 [2024-07-15 22:43:21.108337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.304 [2024-07-15 22:43:21.108368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.304 qpair failed and we were unable to recover it. 00:26:57.304 [2024-07-15 22:43:21.108650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.304 [2024-07-15 22:43:21.108679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.304 qpair failed and we were unable to recover it. 00:26:57.304 [2024-07-15 22:43:21.109009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.304 [2024-07-15 22:43:21.109039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.304 qpair failed and we were unable to recover it. 00:26:57.304 [2024-07-15 22:43:21.109332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.304 [2024-07-15 22:43:21.109372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.304 qpair failed and we were unable to recover it. 
00:26:57.304 [2024-07-15 22:43:21.109555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.304 [2024-07-15 22:43:21.109565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.304 qpair failed and we were unable to recover it. 00:26:57.304 [2024-07-15 22:43:21.109858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.304 [2024-07-15 22:43:21.109867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.304 qpair failed and we were unable to recover it. 00:26:57.304 [2024-07-15 22:43:21.110055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.304 [2024-07-15 22:43:21.110066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.304 qpair failed and we were unable to recover it. 00:26:57.304 [2024-07-15 22:43:21.110322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.304 [2024-07-15 22:43:21.110332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.304 qpair failed and we were unable to recover it. 00:26:57.304 [2024-07-15 22:43:21.110572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.304 [2024-07-15 22:43:21.110582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.304 qpair failed and we were unable to recover it. 00:26:57.304 [2024-07-15 22:43:21.110848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.304 [2024-07-15 22:43:21.110858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.304 qpair failed and we were unable to recover it. 00:26:57.304 [2024-07-15 22:43:21.111117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.304 [2024-07-15 22:43:21.111127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.304 qpair failed and we were unable to recover it. 00:26:57.304 [2024-07-15 22:43:21.111302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.304 [2024-07-15 22:43:21.111313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.304 qpair failed and we were unable to recover it. 00:26:57.304 [2024-07-15 22:43:21.111555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.304 [2024-07-15 22:43:21.111566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.304 qpair failed and we were unable to recover it. 00:26:57.304 [2024-07-15 22:43:21.111781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.304 [2024-07-15 22:43:21.111791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.304 qpair failed and we were unable to recover it. 
00:26:57.304 [2024-07-15 22:43:21.112088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.304 [2024-07-15 22:43:21.112098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.304 qpair failed and we were unable to recover it. 00:26:57.304 [2024-07-15 22:43:21.112375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.304 [2024-07-15 22:43:21.112385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.304 qpair failed and we were unable to recover it. 00:26:57.304 [2024-07-15 22:43:21.112640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.304 [2024-07-15 22:43:21.112650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.304 qpair failed and we were unable to recover it. 00:26:57.304 [2024-07-15 22:43:21.112909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.304 [2024-07-15 22:43:21.112918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.304 qpair failed and we were unable to recover it. 00:26:57.304 [2024-07-15 22:43:21.113161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.304 [2024-07-15 22:43:21.113171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.304 qpair failed and we were unable to recover it. 00:26:57.304 [2024-07-15 22:43:21.113364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.304 [2024-07-15 22:43:21.113374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.304 qpair failed and we were unable to recover it. 00:26:57.304 [2024-07-15 22:43:21.113640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.304 [2024-07-15 22:43:21.113652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.304 qpair failed and we were unable to recover it. 00:26:57.304 [2024-07-15 22:43:21.113893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.304 [2024-07-15 22:43:21.113903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.304 qpair failed and we were unable to recover it. 00:26:57.304 [2024-07-15 22:43:21.114153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.304 [2024-07-15 22:43:21.114163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.304 qpair failed and we were unable to recover it. 00:26:57.304 [2024-07-15 22:43:21.114400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.304 [2024-07-15 22:43:21.114410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.304 qpair failed and we were unable to recover it. 
00:26:57.304 [2024-07-15 22:43:21.114653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.304 [2024-07-15 22:43:21.114663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.304 qpair failed and we were unable to recover it. 00:26:57.304 [2024-07-15 22:43:21.114913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.304 [2024-07-15 22:43:21.114923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.304 qpair failed and we were unable to recover it. 00:26:57.304 [2024-07-15 22:43:21.115134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.304 [2024-07-15 22:43:21.115143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.304 qpair failed and we were unable to recover it. 00:26:57.304 [2024-07-15 22:43:21.115406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.304 [2024-07-15 22:43:21.115417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.304 qpair failed and we were unable to recover it. 00:26:57.304 [2024-07-15 22:43:21.115590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.304 [2024-07-15 22:43:21.115600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.304 qpair failed and we were unable to recover it. 00:26:57.304 [2024-07-15 22:43:21.115808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.304 [2024-07-15 22:43:21.115818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.304 qpair failed and we were unable to recover it. 00:26:57.305 [2024-07-15 22:43:21.116007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.305 [2024-07-15 22:43:21.116017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.305 qpair failed and we were unable to recover it. 00:26:57.305 [2024-07-15 22:43:21.116209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.305 [2024-07-15 22:43:21.116219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.305 qpair failed and we were unable to recover it. 00:26:57.305 [2024-07-15 22:43:21.116483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.305 [2024-07-15 22:43:21.116494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.305 qpair failed and we were unable to recover it. 00:26:57.305 [2024-07-15 22:43:21.116781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.305 [2024-07-15 22:43:21.116791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.305 qpair failed and we were unable to recover it. 
00:26:57.305 [2024-07-15 22:43:21.116969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.305 [2024-07-15 22:43:21.116979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.305 qpair failed and we were unable to recover it. 00:26:57.305 [2024-07-15 22:43:21.117173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.305 [2024-07-15 22:43:21.117183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.305 qpair failed and we were unable to recover it. 00:26:57.305 [2024-07-15 22:43:21.117438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.305 [2024-07-15 22:43:21.117449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.305 qpair failed and we were unable to recover it. 00:26:57.305 [2024-07-15 22:43:21.117621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.305 [2024-07-15 22:43:21.117631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.305 qpair failed and we were unable to recover it. 00:26:57.305 [2024-07-15 22:43:21.117872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.305 [2024-07-15 22:43:21.117882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.305 qpair failed and we were unable to recover it. 00:26:57.305 [2024-07-15 22:43:21.118140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.305 [2024-07-15 22:43:21.118150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.305 qpair failed and we were unable to recover it. 00:26:57.305 [2024-07-15 22:43:21.118331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.305 [2024-07-15 22:43:21.118342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.305 qpair failed and we were unable to recover it. 00:26:57.305 [2024-07-15 22:43:21.118536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.305 [2024-07-15 22:43:21.118546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.305 qpair failed and we were unable to recover it. 00:26:57.305 [2024-07-15 22:43:21.118819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.305 [2024-07-15 22:43:21.118829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.305 qpair failed and we were unable to recover it. 00:26:57.305 [2024-07-15 22:43:21.119043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.305 [2024-07-15 22:43:21.119053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.305 qpair failed and we were unable to recover it. 
00:26:57.305 [2024-07-15 22:43:21.119292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.305 [2024-07-15 22:43:21.119302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.305 qpair failed and we were unable to recover it. 00:26:57.305 [2024-07-15 22:43:21.119510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.305 [2024-07-15 22:43:21.119520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.305 qpair failed and we were unable to recover it. 00:26:57.305 [2024-07-15 22:43:21.119786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.305 [2024-07-15 22:43:21.119796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.305 qpair failed and we were unable to recover it. 00:26:57.305 [2024-07-15 22:43:21.120008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.305 [2024-07-15 22:43:21.120018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.305 qpair failed and we were unable to recover it. 00:26:57.305 [2024-07-15 22:43:21.120221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.305 [2024-07-15 22:43:21.120235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.305 qpair failed and we were unable to recover it. 00:26:57.305 [2024-07-15 22:43:21.120430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.305 [2024-07-15 22:43:21.120440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.305 qpair failed and we were unable to recover it. 00:26:57.305 [2024-07-15 22:43:21.120625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.305 [2024-07-15 22:43:21.120635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.305 qpair failed and we were unable to recover it. 00:26:57.305 [2024-07-15 22:43:21.120900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.305 [2024-07-15 22:43:21.120910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.305 qpair failed and we were unable to recover it. 00:26:57.305 [2024-07-15 22:43:21.121192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.305 [2024-07-15 22:43:21.121203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.305 qpair failed and we were unable to recover it. 00:26:57.305 [2024-07-15 22:43:21.121442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.305 [2024-07-15 22:43:21.121452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.305 qpair failed and we were unable to recover it. 
00:26:57.305 [2024-07-15 22:43:21.121696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.305 [2024-07-15 22:43:21.121706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.305 qpair failed and we were unable to recover it. 00:26:57.305 [2024-07-15 22:43:21.121976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.305 [2024-07-15 22:43:21.121986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.305 qpair failed and we were unable to recover it. 00:26:57.305 [2024-07-15 22:43:21.122234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.305 [2024-07-15 22:43:21.122244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.305 qpair failed and we were unable to recover it. 00:26:57.305 [2024-07-15 22:43:21.122420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.305 [2024-07-15 22:43:21.122430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.305 qpair failed and we were unable to recover it. 00:26:57.305 [2024-07-15 22:43:21.122670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.305 [2024-07-15 22:43:21.122680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.305 qpair failed and we were unable to recover it. 00:26:57.305 [2024-07-15 22:43:21.122922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.305 [2024-07-15 22:43:21.122931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.305 qpair failed and we were unable to recover it. 00:26:57.305 [2024-07-15 22:43:21.123143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.305 [2024-07-15 22:43:21.123155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.305 qpair failed and we were unable to recover it. 00:26:57.305 [2024-07-15 22:43:21.123365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.305 [2024-07-15 22:43:21.123375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.305 qpair failed and we were unable to recover it. 00:26:57.305 [2024-07-15 22:43:21.123618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.305 [2024-07-15 22:43:21.123628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.305 qpair failed and we were unable to recover it. 00:26:57.305 [2024-07-15 22:43:21.123877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.305 [2024-07-15 22:43:21.123887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.305 qpair failed and we were unable to recover it. 
00:26:57.306 [2024-07-15 22:43:21.124068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.306 [2024-07-15 22:43:21.124078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.306 qpair failed and we were unable to recover it. 00:26:57.306 [2024-07-15 22:43:21.124298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.306 [2024-07-15 22:43:21.124308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.306 qpair failed and we were unable to recover it. 00:26:57.306 [2024-07-15 22:43:21.124576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.306 [2024-07-15 22:43:21.124586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.306 qpair failed and we were unable to recover it. 00:26:57.306 [2024-07-15 22:43:21.124778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.306 [2024-07-15 22:43:21.124788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.306 qpair failed and we were unable to recover it. 00:26:57.306 [2024-07-15 22:43:21.125030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.306 [2024-07-15 22:43:21.125040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.306 qpair failed and we were unable to recover it. 00:26:57.306 [2024-07-15 22:43:21.125316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.306 [2024-07-15 22:43:21.125326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.306 qpair failed and we were unable to recover it. 00:26:57.306 [2024-07-15 22:43:21.125541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.306 [2024-07-15 22:43:21.125551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.306 qpair failed and we were unable to recover it. 00:26:57.306 [2024-07-15 22:43:21.125727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.306 [2024-07-15 22:43:21.125737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.306 qpair failed and we were unable to recover it. 00:26:57.306 [2024-07-15 22:43:21.125877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.306 [2024-07-15 22:43:21.125887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.306 qpair failed and we were unable to recover it. 00:26:57.306 [2024-07-15 22:43:21.126164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.306 [2024-07-15 22:43:21.126175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.306 qpair failed and we were unable to recover it. 
00:26:57.306 [2024-07-15 22:43:21.126448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.306 [2024-07-15 22:43:21.126458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.306 qpair failed and we were unable to recover it. 00:26:57.306 [2024-07-15 22:43:21.126724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.306 [2024-07-15 22:43:21.126734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.306 qpair failed and we were unable to recover it. 00:26:57.306 [2024-07-15 22:43:21.126974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.306 [2024-07-15 22:43:21.126983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.306 qpair failed and we were unable to recover it. 00:26:57.306 [2024-07-15 22:43:21.127243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.306 [2024-07-15 22:43:21.127254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.306 qpair failed and we were unable to recover it. 00:26:57.306 [2024-07-15 22:43:21.127518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.306 [2024-07-15 22:43:21.127528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.306 qpair failed and we were unable to recover it. 00:26:57.306 [2024-07-15 22:43:21.127716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.306 [2024-07-15 22:43:21.127726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.306 qpair failed and we were unable to recover it. 00:26:57.306 [2024-07-15 22:43:21.128014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.306 [2024-07-15 22:43:21.128024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.306 qpair failed and we were unable to recover it. 00:26:57.306 [2024-07-15 22:43:21.128279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.306 [2024-07-15 22:43:21.128289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.306 qpair failed and we were unable to recover it. 00:26:57.306 [2024-07-15 22:43:21.128553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.306 [2024-07-15 22:43:21.128563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.306 qpair failed and we were unable to recover it. 00:26:57.306 [2024-07-15 22:43:21.128837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.306 [2024-07-15 22:43:21.128847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.306 qpair failed and we were unable to recover it. 
00:26:57.306 [2024-07-15 22:43:21.129039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.306 [2024-07-15 22:43:21.129049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.306 qpair failed and we were unable to recover it. 00:26:57.306 [2024-07-15 22:43:21.129268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.306 [2024-07-15 22:43:21.129278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.306 qpair failed and we were unable to recover it. 00:26:57.306 [2024-07-15 22:43:21.129467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.306 [2024-07-15 22:43:21.129477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.306 qpair failed and we were unable to recover it. 00:26:57.306 [2024-07-15 22:43:21.129742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.306 [2024-07-15 22:43:21.129753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.306 qpair failed and we were unable to recover it. 00:26:57.306 [2024-07-15 22:43:21.130020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.306 [2024-07-15 22:43:21.130030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.306 qpair failed and we were unable to recover it. 00:26:57.306 [2024-07-15 22:43:21.130209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.306 [2024-07-15 22:43:21.130219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.306 qpair failed and we were unable to recover it. 00:26:57.306 [2024-07-15 22:43:21.130451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.306 [2024-07-15 22:43:21.130462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.306 qpair failed and we were unable to recover it. 00:26:57.306 [2024-07-15 22:43:21.130594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.306 [2024-07-15 22:43:21.130604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.306 qpair failed and we were unable to recover it. 00:26:57.306 [2024-07-15 22:43:21.130886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.306 [2024-07-15 22:43:21.130896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.306 qpair failed and we were unable to recover it. 00:26:57.306 [2024-07-15 22:43:21.131153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.306 [2024-07-15 22:43:21.131163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.306 qpair failed and we were unable to recover it. 
00:26:57.306 [2024-07-15 22:43:21.131339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.306 [2024-07-15 22:43:21.131350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.306 qpair failed and we were unable to recover it. 00:26:57.306 [2024-07-15 22:43:21.131619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.306 [2024-07-15 22:43:21.131629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.306 qpair failed and we were unable to recover it. 00:26:57.306 [2024-07-15 22:43:21.131924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.306 [2024-07-15 22:43:21.131933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.306 qpair failed and we were unable to recover it. 00:26:57.306 [2024-07-15 22:43:21.132189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.306 [2024-07-15 22:43:21.132199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.306 qpair failed and we were unable to recover it. 00:26:57.306 [2024-07-15 22:43:21.132440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.306 [2024-07-15 22:43:21.132450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.306 qpair failed and we were unable to recover it. 00:26:57.306 [2024-07-15 22:43:21.132717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.306 [2024-07-15 22:43:21.132727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.306 qpair failed and we were unable to recover it. 00:26:57.306 [2024-07-15 22:43:21.132924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.306 [2024-07-15 22:43:21.132937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.306 qpair failed and we were unable to recover it. 00:26:57.306 [2024-07-15 22:43:21.133215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.306 [2024-07-15 22:43:21.133228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.306 qpair failed and we were unable to recover it. 00:26:57.306 [2024-07-15 22:43:21.133435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.306 [2024-07-15 22:43:21.133445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.306 qpair failed and we were unable to recover it. 00:26:57.306 [2024-07-15 22:43:21.133555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.306 [2024-07-15 22:43:21.133566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.306 qpair failed and we were unable to recover it. 
00:26:57.306 [2024-07-15 22:43:21.133815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.306 [2024-07-15 22:43:21.133825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.307 qpair failed and we were unable to recover it. 00:26:57.307 [2024-07-15 22:43:21.134065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.307 [2024-07-15 22:43:21.134074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.307 qpair failed and we were unable to recover it. 00:26:57.307 [2024-07-15 22:43:21.134364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.307 [2024-07-15 22:43:21.134374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.307 qpair failed and we were unable to recover it. 00:26:57.307 [2024-07-15 22:43:21.134546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.307 [2024-07-15 22:43:21.134557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.307 qpair failed and we were unable to recover it. 00:26:57.307 [2024-07-15 22:43:21.134741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.307 [2024-07-15 22:43:21.134752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.307 qpair failed and we were unable to recover it. 00:26:57.307 [2024-07-15 22:43:21.135041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.307 [2024-07-15 22:43:21.135051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.307 qpair failed and we were unable to recover it. 00:26:57.307 [2024-07-15 22:43:21.135260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.307 [2024-07-15 22:43:21.135270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.307 qpair failed and we were unable to recover it. 00:26:57.307 [2024-07-15 22:43:21.135529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.307 [2024-07-15 22:43:21.135539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.307 qpair failed and we were unable to recover it. 00:26:57.307 [2024-07-15 22:43:21.135826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.307 [2024-07-15 22:43:21.135836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.307 qpair failed and we were unable to recover it. 00:26:57.307 [2024-07-15 22:43:21.136031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.307 [2024-07-15 22:43:21.136040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.307 qpair failed and we were unable to recover it. 
00:26:57.307 [2024-07-15 22:43:21.136219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.307 [2024-07-15 22:43:21.136233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.307 qpair failed and we were unable to recover it. 00:26:57.307 [2024-07-15 22:43:21.136547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.307 [2024-07-15 22:43:21.136557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.307 qpair failed and we were unable to recover it. 00:26:57.307 [2024-07-15 22:43:21.136756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.307 [2024-07-15 22:43:21.136766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.307 qpair failed and we were unable to recover it. 00:26:57.307 [2024-07-15 22:43:21.136952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.307 [2024-07-15 22:43:21.136962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.307 qpair failed and we were unable to recover it. 00:26:57.307 [2024-07-15 22:43:21.137157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.307 [2024-07-15 22:43:21.137167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.307 qpair failed and we were unable to recover it. 00:26:57.307 [2024-07-15 22:43:21.137306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.307 [2024-07-15 22:43:21.137317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.307 qpair failed and we were unable to recover it. 00:26:57.307 [2024-07-15 22:43:21.137423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.307 [2024-07-15 22:43:21.137432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.307 qpair failed and we were unable to recover it. 00:26:57.307 [2024-07-15 22:43:21.137676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.307 [2024-07-15 22:43:21.137686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.307 qpair failed and we were unable to recover it. 00:26:57.307 [2024-07-15 22:43:21.137864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.307 [2024-07-15 22:43:21.137874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.307 qpair failed and we were unable to recover it. 00:26:57.307 [2024-07-15 22:43:21.138135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.307 [2024-07-15 22:43:21.138145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.307 qpair failed and we were unable to recover it. 
00:26:57.307 [2024-07-15 22:43:21.138422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.307 [2024-07-15 22:43:21.138433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.307 qpair failed and we were unable to recover it. 00:26:57.307 [2024-07-15 22:43:21.138649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.307 [2024-07-15 22:43:21.138658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.307 qpair failed and we were unable to recover it. 00:26:57.307 [2024-07-15 22:43:21.138834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.307 [2024-07-15 22:43:21.138844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.307 qpair failed and we were unable to recover it. 00:26:57.307 [2024-07-15 22:43:21.138979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.307 [2024-07-15 22:43:21.138989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.307 qpair failed and we were unable to recover it. 00:26:57.307 [2024-07-15 22:43:21.139237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.307 [2024-07-15 22:43:21.139248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.307 qpair failed and we were unable to recover it. 00:26:57.307 [2024-07-15 22:43:21.139506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.307 [2024-07-15 22:43:21.139516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.307 qpair failed and we were unable to recover it. 00:26:57.307 [2024-07-15 22:43:21.139689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.307 [2024-07-15 22:43:21.139699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.307 qpair failed and we were unable to recover it. 00:26:57.307 [2024-07-15 22:43:21.139912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.307 [2024-07-15 22:43:21.139941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.307 qpair failed and we were unable to recover it. 00:26:57.307 [2024-07-15 22:43:21.140241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.307 [2024-07-15 22:43:21.140272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.307 qpair failed and we were unable to recover it. 00:26:57.307 [2024-07-15 22:43:21.140503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.307 [2024-07-15 22:43:21.140532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.307 qpair failed and we were unable to recover it. 
00:26:57.307 [2024-07-15 22:43:21.140702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.307 [2024-07-15 22:43:21.140732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:57.307 qpair failed and we were unable to recover it.
00:26:57.307 [2024-07-15 22:43:21.140900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.307 [2024-07-15 22:43:21.140929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:57.307 qpair failed and we were unable to recover it.
[... the same three-line failure (posix.c:1038:posix_sock_create connect() errno = 111, nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock sock connection error, "qpair failed and we were unable to recover it.") repeats back-to-back for tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 from 22:43:21.141 through 22:43:21.192 ...]
00:26:57.312 [2024-07-15 22:43:21.193125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.312 [2024-07-15 22:43:21.193193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420
00:26:57.312 qpair failed and we were unable to recover it.
[... the identical failure then repeats for tqpair=0x7f4290000b90 from 22:43:21.193 through 22:43:21.198 ...]
00:26:57.313 [2024-07-15 22:43:21.198057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.313 [2024-07-15 22:43:21.198071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420
00:26:57.313 qpair failed and we were unable to recover it.
00:26:57.313 [2024-07-15 22:43:21.198324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.313 [2024-07-15 22:43:21.198355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.313 qpair failed and we were unable to recover it. 00:26:57.313 [2024-07-15 22:43:21.198682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.313 [2024-07-15 22:43:21.198711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.313 qpair failed and we were unable to recover it. 00:26:57.313 [2024-07-15 22:43:21.199001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.313 [2024-07-15 22:43:21.199031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.313 qpair failed and we were unable to recover it. 00:26:57.313 [2024-07-15 22:43:21.199363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.313 [2024-07-15 22:43:21.199393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.313 qpair failed and we were unable to recover it. 00:26:57.313 [2024-07-15 22:43:21.199674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.313 [2024-07-15 22:43:21.199687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.313 qpair failed and we were unable to recover it. 00:26:57.313 [2024-07-15 22:43:21.199898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.313 [2024-07-15 22:43:21.199911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.313 qpair failed and we were unable to recover it. 00:26:57.313 [2024-07-15 22:43:21.200207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.313 [2024-07-15 22:43:21.200220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.313 qpair failed and we were unable to recover it. 00:26:57.313 [2024-07-15 22:43:21.200523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.313 [2024-07-15 22:43:21.200537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.313 qpair failed and we were unable to recover it. 00:26:57.313 [2024-07-15 22:43:21.200811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.313 [2024-07-15 22:43:21.200825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.313 qpair failed and we were unable to recover it. 00:26:57.313 [2024-07-15 22:43:21.201023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.313 [2024-07-15 22:43:21.201037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.313 qpair failed and we were unable to recover it. 
00:26:57.313 [2024-07-15 22:43:21.201318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.313 [2024-07-15 22:43:21.201332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.313 qpair failed and we were unable to recover it. 00:26:57.313 [2024-07-15 22:43:21.201548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.313 [2024-07-15 22:43:21.201561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.313 qpair failed and we were unable to recover it. 00:26:57.313 [2024-07-15 22:43:21.201748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.313 [2024-07-15 22:43:21.201761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.313 qpair failed and we were unable to recover it. 00:26:57.313 [2024-07-15 22:43:21.201990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.313 [2024-07-15 22:43:21.202019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.313 qpair failed and we were unable to recover it. 00:26:57.313 [2024-07-15 22:43:21.202192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.313 [2024-07-15 22:43:21.202222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.313 qpair failed and we were unable to recover it. 00:26:57.313 [2024-07-15 22:43:21.202549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.313 [2024-07-15 22:43:21.202579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.313 qpair failed and we were unable to recover it. 00:26:57.313 [2024-07-15 22:43:21.202874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.313 [2024-07-15 22:43:21.202903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.313 qpair failed and we were unable to recover it. 00:26:57.313 [2024-07-15 22:43:21.203146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.313 [2024-07-15 22:43:21.203176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.313 qpair failed and we were unable to recover it. 00:26:57.313 [2024-07-15 22:43:21.203489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.313 [2024-07-15 22:43:21.203519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.313 qpair failed and we were unable to recover it. 00:26:57.313 [2024-07-15 22:43:21.203829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.313 [2024-07-15 22:43:21.203859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.313 qpair failed and we were unable to recover it. 
00:26:57.313 [2024-07-15 22:43:21.204109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.313 [2024-07-15 22:43:21.204138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.313 qpair failed and we were unable to recover it. 00:26:57.313 [2024-07-15 22:43:21.204451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.313 [2024-07-15 22:43:21.204465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.313 qpair failed and we were unable to recover it. 00:26:57.313 [2024-07-15 22:43:21.204696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.313 [2024-07-15 22:43:21.204710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.313 qpair failed and we were unable to recover it. 00:26:57.313 [2024-07-15 22:43:21.204937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.313 [2024-07-15 22:43:21.204951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.313 qpair failed and we were unable to recover it. 00:26:57.313 [2024-07-15 22:43:21.205229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.313 [2024-07-15 22:43:21.205242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.313 qpair failed and we were unable to recover it. 00:26:57.313 [2024-07-15 22:43:21.205425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.313 [2024-07-15 22:43:21.205439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.313 qpair failed and we were unable to recover it. 00:26:57.313 [2024-07-15 22:43:21.205740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.313 [2024-07-15 22:43:21.205753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.313 qpair failed and we were unable to recover it. 00:26:57.313 [2024-07-15 22:43:21.205964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.313 [2024-07-15 22:43:21.205977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.313 qpair failed and we were unable to recover it. 00:26:57.313 [2024-07-15 22:43:21.206180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.313 [2024-07-15 22:43:21.206193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.313 qpair failed and we were unable to recover it. 00:26:57.313 [2024-07-15 22:43:21.206421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.313 [2024-07-15 22:43:21.206451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.313 qpair failed and we were unable to recover it. 
00:26:57.313 [2024-07-15 22:43:21.206637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.313 [2024-07-15 22:43:21.206667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.313 qpair failed and we were unable to recover it. 00:26:57.313 [2024-07-15 22:43:21.206963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.313 [2024-07-15 22:43:21.206993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.313 qpair failed and we were unable to recover it. 00:26:57.313 [2024-07-15 22:43:21.207307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.313 [2024-07-15 22:43:21.207337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.313 qpair failed and we were unable to recover it. 00:26:57.313 [2024-07-15 22:43:21.207568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.313 [2024-07-15 22:43:21.207598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.313 qpair failed and we were unable to recover it. 00:26:57.313 [2024-07-15 22:43:21.207858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.313 [2024-07-15 22:43:21.207887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.313 qpair failed and we were unable to recover it. 00:26:57.313 [2024-07-15 22:43:21.208122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.313 [2024-07-15 22:43:21.208151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.313 qpair failed and we were unable to recover it. 00:26:57.313 [2024-07-15 22:43:21.208480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.313 [2024-07-15 22:43:21.208516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.313 qpair failed and we were unable to recover it. 00:26:57.313 [2024-07-15 22:43:21.208802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.313 [2024-07-15 22:43:21.208831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.313 qpair failed and we were unable to recover it. 00:26:57.313 [2024-07-15 22:43:21.209136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.313 [2024-07-15 22:43:21.209165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.313 qpair failed and we were unable to recover it. 00:26:57.313 [2024-07-15 22:43:21.209474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.313 [2024-07-15 22:43:21.209504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.314 qpair failed and we were unable to recover it. 
00:26:57.314 [2024-07-15 22:43:21.209808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.314 [2024-07-15 22:43:21.209838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.314 qpair failed and we were unable to recover it. 00:26:57.314 [2024-07-15 22:43:21.210098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.314 [2024-07-15 22:43:21.210127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.314 qpair failed and we were unable to recover it. 00:26:57.314 [2024-07-15 22:43:21.210412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.314 [2024-07-15 22:43:21.210442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.314 qpair failed and we were unable to recover it. 00:26:57.314 [2024-07-15 22:43:21.210727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.314 [2024-07-15 22:43:21.210756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.314 qpair failed and we were unable to recover it. 00:26:57.314 [2024-07-15 22:43:21.211089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.314 [2024-07-15 22:43:21.211118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.314 qpair failed and we were unable to recover it. 00:26:57.314 [2024-07-15 22:43:21.211405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.314 [2024-07-15 22:43:21.211436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.314 qpair failed and we were unable to recover it. 00:26:57.314 [2024-07-15 22:43:21.211765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.314 [2024-07-15 22:43:21.211794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.314 qpair failed and we were unable to recover it. 00:26:57.314 [2024-07-15 22:43:21.212087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.314 [2024-07-15 22:43:21.212116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.314 qpair failed and we were unable to recover it. 00:26:57.314 [2024-07-15 22:43:21.212340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.314 [2024-07-15 22:43:21.212371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.314 qpair failed and we were unable to recover it. 00:26:57.314 [2024-07-15 22:43:21.212701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.314 [2024-07-15 22:43:21.212715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.314 qpair failed and we were unable to recover it. 
00:26:57.314 [2024-07-15 22:43:21.212909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.314 [2024-07-15 22:43:21.212923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.314 qpair failed and we were unable to recover it. 00:26:57.314 [2024-07-15 22:43:21.213105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.314 [2024-07-15 22:43:21.213118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.314 qpair failed and we were unable to recover it. 00:26:57.314 [2024-07-15 22:43:21.213376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.314 [2024-07-15 22:43:21.213390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.314 qpair failed and we were unable to recover it. 00:26:57.314 [2024-07-15 22:43:21.213588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.314 [2024-07-15 22:43:21.213602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.314 qpair failed and we were unable to recover it. 00:26:57.314 [2024-07-15 22:43:21.213806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.314 [2024-07-15 22:43:21.213820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.314 qpair failed and we were unable to recover it. 00:26:57.314 [2024-07-15 22:43:21.214109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.314 [2024-07-15 22:43:21.214138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.314 qpair failed and we were unable to recover it. 00:26:57.314 [2024-07-15 22:43:21.214398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.314 [2024-07-15 22:43:21.214430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.314 qpair failed and we were unable to recover it. 00:26:57.314 [2024-07-15 22:43:21.214735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.314 [2024-07-15 22:43:21.214748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.314 qpair failed and we were unable to recover it. 00:26:57.314 [2024-07-15 22:43:21.215062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.314 [2024-07-15 22:43:21.215075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.314 qpair failed and we were unable to recover it. 00:26:57.314 [2024-07-15 22:43:21.215218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.314 [2024-07-15 22:43:21.215236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.314 qpair failed and we were unable to recover it. 
00:26:57.314 [2024-07-15 22:43:21.215513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.314 [2024-07-15 22:43:21.215526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.314 qpair failed and we were unable to recover it. 00:26:57.314 [2024-07-15 22:43:21.215710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.314 [2024-07-15 22:43:21.215724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.314 qpair failed and we were unable to recover it. 00:26:57.314 [2024-07-15 22:43:21.216021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.314 [2024-07-15 22:43:21.216034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.314 qpair failed and we were unable to recover it. 00:26:57.314 [2024-07-15 22:43:21.216272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.314 [2024-07-15 22:43:21.216306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 00:26:57.314 qpair failed and we were unable to recover it. 00:26:57.314 [2024-07-15 22:43:21.216472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.314 [2024-07-15 22:43:21.216499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.314 qpair failed and we were unable to recover it. 00:26:57.314 [2024-07-15 22:43:21.216782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.314 [2024-07-15 22:43:21.216793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.314 qpair failed and we were unable to recover it. 00:26:57.314 [2024-07-15 22:43:21.217056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.314 [2024-07-15 22:43:21.217067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.314 qpair failed and we were unable to recover it. 00:26:57.314 [2024-07-15 22:43:21.217316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.314 [2024-07-15 22:43:21.217327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.314 qpair failed and we were unable to recover it. 00:26:57.314 [2024-07-15 22:43:21.217578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.314 [2024-07-15 22:43:21.217588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.314 qpair failed and we were unable to recover it. 00:26:57.314 [2024-07-15 22:43:21.217760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.314 [2024-07-15 22:43:21.217770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.314 qpair failed and we were unable to recover it. 
00:26:57.314 [2024-07-15 22:43:21.218029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.314 [2024-07-15 22:43:21.218059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.314 qpair failed and we were unable to recover it. 00:26:57.314 [2024-07-15 22:43:21.218328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.314 [2024-07-15 22:43:21.218358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.314 qpair failed and we were unable to recover it. 00:26:57.314 [2024-07-15 22:43:21.218660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.314 [2024-07-15 22:43:21.218691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.314 qpair failed and we were unable to recover it. 00:26:57.314 [2024-07-15 22:43:21.218983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.314 [2024-07-15 22:43:21.219013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.314 qpair failed and we were unable to recover it. 00:26:57.314 [2024-07-15 22:43:21.219315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.315 [2024-07-15 22:43:21.219325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.315 qpair failed and we were unable to recover it. 00:26:57.315 [2024-07-15 22:43:21.219567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.315 [2024-07-15 22:43:21.219578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.315 qpair failed and we were unable to recover it. 00:26:57.315 [2024-07-15 22:43:21.219828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.315 [2024-07-15 22:43:21.219842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.315 qpair failed and we were unable to recover it. 00:26:57.315 [2024-07-15 22:43:21.220108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.315 [2024-07-15 22:43:21.220118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.315 qpair failed and we were unable to recover it. 00:26:57.315 [2024-07-15 22:43:21.220372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.315 [2024-07-15 22:43:21.220382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.315 qpair failed and we were unable to recover it. 00:26:57.315 [2024-07-15 22:43:21.220612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.315 [2024-07-15 22:43:21.220622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.315 qpair failed and we were unable to recover it. 
00:26:57.315 [2024-07-15 22:43:21.220870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.315 [2024-07-15 22:43:21.220880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.315 qpair failed and we were unable to recover it. 00:26:57.315 [2024-07-15 22:43:21.221014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.315 [2024-07-15 22:43:21.221024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.315 qpair failed and we were unable to recover it. 00:26:57.315 [2024-07-15 22:43:21.221212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.315 [2024-07-15 22:43:21.221266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.315 qpair failed and we were unable to recover it. 00:26:57.315 [2024-07-15 22:43:21.221505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.315 [2024-07-15 22:43:21.221535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.315 qpair failed and we were unable to recover it. 00:26:57.315 [2024-07-15 22:43:21.221859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.315 [2024-07-15 22:43:21.221888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.315 qpair failed and we were unable to recover it. 00:26:57.315 [2024-07-15 22:43:21.222116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.315 [2024-07-15 22:43:21.222145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.315 qpair failed and we were unable to recover it. 00:26:57.315 [2024-07-15 22:43:21.222318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.315 [2024-07-15 22:43:21.222349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.315 qpair failed and we were unable to recover it. 00:26:57.592 [2024-07-15 22:43:21.222679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.592 [2024-07-15 22:43:21.222710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.592 qpair failed and we were unable to recover it. 00:26:57.592 [2024-07-15 22:43:21.222950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.592 [2024-07-15 22:43:21.222981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.592 qpair failed and we were unable to recover it. 00:26:57.592 [2024-07-15 22:43:21.223214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.592 [2024-07-15 22:43:21.223252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.592 qpair failed and we were unable to recover it. 
00:26:57.592 [2024-07-15 22:43:21.223507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.592 [2024-07-15 22:43:21.223538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.592 qpair failed and we were unable to recover it. 00:26:57.592 [2024-07-15 22:43:21.223827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.592 [2024-07-15 22:43:21.223856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.592 qpair failed and we were unable to recover it. 00:26:57.592 [2024-07-15 22:43:21.224093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.592 [2024-07-15 22:43:21.224123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.592 qpair failed and we were unable to recover it. 00:26:57.592 [2024-07-15 22:43:21.224381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.592 [2024-07-15 22:43:21.224411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.592 qpair failed and we were unable to recover it. 00:26:57.592 [2024-07-15 22:43:21.224551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.592 [2024-07-15 22:43:21.224561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.592 qpair failed and we were unable to recover it. 00:26:57.592 [2024-07-15 22:43:21.224795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.592 [2024-07-15 22:43:21.224825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.592 qpair failed and we were unable to recover it. 00:26:57.592 [2024-07-15 22:43:21.225051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.592 [2024-07-15 22:43:21.225080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.592 qpair failed and we were unable to recover it. 00:26:57.592 [2024-07-15 22:43:21.225362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.592 [2024-07-15 22:43:21.225372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.592 qpair failed and we were unable to recover it. 00:26:57.592 [2024-07-15 22:43:21.225583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.592 [2024-07-15 22:43:21.225612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.592 qpair failed and we were unable to recover it. 00:26:57.592 [2024-07-15 22:43:21.225924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.592 [2024-07-15 22:43:21.225954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.592 qpair failed and we were unable to recover it. 
00:26:57.592 [2024-07-15 22:43:21.226279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.592 [2024-07-15 22:43:21.226311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.592 qpair failed and we were unable to recover it. 00:26:57.592 [2024-07-15 22:43:21.226539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.592 [2024-07-15 22:43:21.226581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.592 qpair failed and we were unable to recover it. 00:26:57.592 [2024-07-15 22:43:21.226869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.592 [2024-07-15 22:43:21.226879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.592 qpair failed and we were unable to recover it. 00:26:57.592 [2024-07-15 22:43:21.227138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.592 [2024-07-15 22:43:21.227148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.592 qpair failed and we were unable to recover it. 00:26:57.592 [2024-07-15 22:43:21.227339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.592 [2024-07-15 22:43:21.227350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.592 qpair failed and we were unable to recover it. 00:26:57.592 [2024-07-15 22:43:21.227537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.592 [2024-07-15 22:43:21.227547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.592 qpair failed and we were unable to recover it. 00:26:57.592 [2024-07-15 22:43:21.227737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.593 [2024-07-15 22:43:21.227747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.593 qpair failed and we were unable to recover it. 00:26:57.593 [2024-07-15 22:43:21.228027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.593 [2024-07-15 22:43:21.228057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.593 qpair failed and we were unable to recover it. 00:26:57.593 [2024-07-15 22:43:21.228340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.593 [2024-07-15 22:43:21.228369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.593 qpair failed and we were unable to recover it. 00:26:57.593 [2024-07-15 22:43:21.228659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.593 [2024-07-15 22:43:21.228669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.593 qpair failed and we were unable to recover it. 
00:26:57.593 [2024-07-15 22:43:21.228910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.593 [2024-07-15 22:43:21.228920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.593 qpair failed and we were unable to recover it. 00:26:57.593 [2024-07-15 22:43:21.229160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.593 [2024-07-15 22:43:21.229170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.593 qpair failed and we were unable to recover it. 00:26:57.593 [2024-07-15 22:43:21.229358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.593 [2024-07-15 22:43:21.229368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.593 qpair failed and we were unable to recover it. 00:26:57.593 [2024-07-15 22:43:21.229581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.593 [2024-07-15 22:43:21.229610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.593 qpair failed and we were unable to recover it. 00:26:57.593 [2024-07-15 22:43:21.229838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.593 [2024-07-15 22:43:21.229867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.593 qpair failed and we were unable to recover it. 00:26:57.593 [2024-07-15 22:43:21.230156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.593 [2024-07-15 22:43:21.230186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.593 qpair failed and we were unable to recover it. 00:26:57.593 [2024-07-15 22:43:21.230514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.593 [2024-07-15 22:43:21.230551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.593 qpair failed and we were unable to recover it. 00:26:57.593 [2024-07-15 22:43:21.230863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.593 [2024-07-15 22:43:21.230892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.593 qpair failed and we were unable to recover it. 00:26:57.593 [2024-07-15 22:43:21.231195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.593 [2024-07-15 22:43:21.231233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.593 qpair failed and we were unable to recover it. 00:26:57.593 [2024-07-15 22:43:21.231551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.593 [2024-07-15 22:43:21.231581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.593 qpair failed and we were unable to recover it. 
00:26:57.593 [2024-07-15 22:43:21.231870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.593 [2024-07-15 22:43:21.231900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.593 qpair failed and we were unable to recover it. 00:26:57.593 [2024-07-15 22:43:21.232204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.593 [2024-07-15 22:43:21.232251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.593 qpair failed and we were unable to recover it. 00:26:57.593 [2024-07-15 22:43:21.232513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.593 [2024-07-15 22:43:21.232542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.593 qpair failed and we were unable to recover it. 00:26:57.593 [2024-07-15 22:43:21.232852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.593 [2024-07-15 22:43:21.232881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.593 qpair failed and we were unable to recover it. 00:26:57.593 [2024-07-15 22:43:21.233139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.593 [2024-07-15 22:43:21.233168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.593 qpair failed and we were unable to recover it. 00:26:57.593 [2024-07-15 22:43:21.233481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.593 [2024-07-15 22:43:21.233512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.593 qpair failed and we were unable to recover it. 00:26:57.593 [2024-07-15 22:43:21.233730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.593 [2024-07-15 22:43:21.233760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.593 qpair failed and we were unable to recover it. 00:26:57.593 [2024-07-15 22:43:21.234055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.593 [2024-07-15 22:43:21.234084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.593 qpair failed and we were unable to recover it. 00:26:57.593 [2024-07-15 22:43:21.234335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.593 [2024-07-15 22:43:21.234367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.593 qpair failed and we were unable to recover it. 00:26:57.593 [2024-07-15 22:43:21.234603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.593 [2024-07-15 22:43:21.234612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.593 qpair failed and we were unable to recover it. 
00:26:57.593 [2024-07-15 22:43:21.234909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.593 [2024-07-15 22:43:21.234919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.593 qpair failed and we were unable to recover it. 00:26:57.593 [2024-07-15 22:43:21.235196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.593 [2024-07-15 22:43:21.235206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.593 qpair failed and we were unable to recover it. 00:26:57.593 [2024-07-15 22:43:21.235411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.593 [2024-07-15 22:43:21.235421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.593 qpair failed and we were unable to recover it. 00:26:57.593 [2024-07-15 22:43:21.235600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.593 [2024-07-15 22:43:21.235611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.593 qpair failed and we were unable to recover it. 00:26:57.593 [2024-07-15 22:43:21.235856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.593 [2024-07-15 22:43:21.235885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.593 qpair failed and we were unable to recover it. 00:26:57.593 [2024-07-15 22:43:21.236176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.593 [2024-07-15 22:43:21.236205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.593 qpair failed and we were unable to recover it. 00:26:57.593 [2024-07-15 22:43:21.236511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.593 [2024-07-15 22:43:21.236542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.593 qpair failed and we were unable to recover it. 00:26:57.593 [2024-07-15 22:43:21.236769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.593 [2024-07-15 22:43:21.236798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.593 qpair failed and we were unable to recover it. 00:26:57.593 [2024-07-15 22:43:21.237082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.593 [2024-07-15 22:43:21.237111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.593 qpair failed and we were unable to recover it. 00:26:57.593 [2024-07-15 22:43:21.237427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.593 [2024-07-15 22:43:21.237458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.593 qpair failed and we were unable to recover it. 
00:26:57.593 [2024-07-15 22:43:21.237757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.593 [2024-07-15 22:43:21.237787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.593 qpair failed and we were unable to recover it. 00:26:57.593 [2024-07-15 22:43:21.238025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.593 [2024-07-15 22:43:21.238054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.593 qpair failed and we were unable to recover it. 00:26:57.593 [2024-07-15 22:43:21.238288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.593 [2024-07-15 22:43:21.238332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.593 qpair failed and we were unable to recover it. 00:26:57.593 [2024-07-15 22:43:21.238602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.593 [2024-07-15 22:43:21.238612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.593 qpair failed and we were unable to recover it. 00:26:57.593 [2024-07-15 22:43:21.238802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.593 [2024-07-15 22:43:21.238812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.593 qpair failed and we were unable to recover it. 00:26:57.593 [2024-07-15 22:43:21.239075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.593 [2024-07-15 22:43:21.239085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.593 qpair failed and we were unable to recover it. 00:26:57.593 [2024-07-15 22:43:21.239282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.594 [2024-07-15 22:43:21.239313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.594 qpair failed and we were unable to recover it. 00:26:57.594 [2024-07-15 22:43:21.239598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.594 [2024-07-15 22:43:21.239627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.594 qpair failed and we were unable to recover it. 00:26:57.594 [2024-07-15 22:43:21.239934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.594 [2024-07-15 22:43:21.239963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.594 qpair failed and we were unable to recover it. 00:26:57.594 [2024-07-15 22:43:21.240274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.594 [2024-07-15 22:43:21.240304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.594 qpair failed and we were unable to recover it. 
00:26:57.594 [2024-07-15 22:43:21.240612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.594 [2024-07-15 22:43:21.240642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.594 qpair failed and we were unable to recover it. 00:26:57.594 [2024-07-15 22:43:21.240948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.594 [2024-07-15 22:43:21.240978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.594 qpair failed and we were unable to recover it. 00:26:57.594 [2024-07-15 22:43:21.241259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.594 [2024-07-15 22:43:21.241290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.594 qpair failed and we were unable to recover it. 00:26:57.594 [2024-07-15 22:43:21.241622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.594 [2024-07-15 22:43:21.241651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.594 qpair failed and we were unable to recover it. 00:26:57.594 [2024-07-15 22:43:21.241869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.594 [2024-07-15 22:43:21.241898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.594 qpair failed and we were unable to recover it. 00:26:57.594 [2024-07-15 22:43:21.242220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.594 [2024-07-15 22:43:21.242261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.594 qpair failed and we were unable to recover it. 00:26:57.594 [2024-07-15 22:43:21.242563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.594 [2024-07-15 22:43:21.242592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.594 qpair failed and we were unable to recover it. 00:26:57.594 [2024-07-15 22:43:21.242920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.594 [2024-07-15 22:43:21.242950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.594 qpair failed and we were unable to recover it. 00:26:57.594 [2024-07-15 22:43:21.243165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.594 [2024-07-15 22:43:21.243195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.594 qpair failed and we were unable to recover it. 00:26:57.594 [2024-07-15 22:43:21.243524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.594 [2024-07-15 22:43:21.243592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.594 qpair failed and we were unable to recover it. 
00:26:57.594 [2024-07-15 22:43:21.243859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.594 [2024-07-15 22:43:21.243892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.594 qpair failed and we were unable to recover it. 00:26:57.594 [2024-07-15 22:43:21.244199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.594 [2024-07-15 22:43:21.244241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.594 qpair failed and we were unable to recover it. 00:26:57.594 [2024-07-15 22:43:21.244529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.594 [2024-07-15 22:43:21.244560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.594 qpair failed and we were unable to recover it. 00:26:57.594 [2024-07-15 22:43:21.244877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.594 [2024-07-15 22:43:21.244908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.594 qpair failed and we were unable to recover it. 00:26:57.594 [2024-07-15 22:43:21.245149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.594 [2024-07-15 22:43:21.245178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.594 qpair failed and we were unable to recover it. 00:26:57.594 [2024-07-15 22:43:21.245490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.594 [2024-07-15 22:43:21.245504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.594 qpair failed and we were unable to recover it. 00:26:57.594 [2024-07-15 22:43:21.245755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.594 [2024-07-15 22:43:21.245773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.594 qpair failed and we were unable to recover it. 00:26:57.594 [2024-07-15 22:43:21.246054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.594 [2024-07-15 22:43:21.246067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.594 qpair failed and we were unable to recover it. 00:26:57.594 [2024-07-15 22:43:21.246216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.594 [2024-07-15 22:43:21.246234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.594 qpair failed and we were unable to recover it. 00:26:57.594 [2024-07-15 22:43:21.246495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.594 [2024-07-15 22:43:21.246524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.594 qpair failed and we were unable to recover it. 
00:26:57.594 [2024-07-15 22:43:21.246868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.594 [2024-07-15 22:43:21.246899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.594 qpair failed and we were unable to recover it. 00:26:57.594 [2024-07-15 22:43:21.247208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.594 [2024-07-15 22:43:21.247247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.594 qpair failed and we were unable to recover it. 00:26:57.594 [2024-07-15 22:43:21.247546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.594 [2024-07-15 22:43:21.247576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.594 qpair failed and we were unable to recover it. 00:26:57.594 [2024-07-15 22:43:21.247888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.594 [2024-07-15 22:43:21.247918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.594 qpair failed and we were unable to recover it. 00:26:57.594 [2024-07-15 22:43:21.248221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.594 [2024-07-15 22:43:21.248261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.594 qpair failed and we were unable to recover it. 00:26:57.594 [2024-07-15 22:43:21.248495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.594 [2024-07-15 22:43:21.248524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.594 qpair failed and we were unable to recover it. 00:26:57.594 [2024-07-15 22:43:21.248755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.594 [2024-07-15 22:43:21.248785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.594 qpair failed and we were unable to recover it. 00:26:57.594 [2024-07-15 22:43:21.249029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.594 [2024-07-15 22:43:21.249058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.594 qpair failed and we were unable to recover it. 00:26:57.594 [2024-07-15 22:43:21.249382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.594 [2024-07-15 22:43:21.249422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.594 qpair failed and we were unable to recover it. 00:26:57.594 [2024-07-15 22:43:21.249698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.594 [2024-07-15 22:43:21.249711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.594 qpair failed and we were unable to recover it. 
00:26:57.594 [2024-07-15 22:43:21.249979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.594 [2024-07-15 22:43:21.249992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.594 qpair failed and we were unable to recover it. 00:26:57.594 [2024-07-15 22:43:21.250241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.594 [2024-07-15 22:43:21.250255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.594 qpair failed and we were unable to recover it. 00:26:57.594 [2024-07-15 22:43:21.250462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.594 [2024-07-15 22:43:21.250476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.594 qpair failed and we were unable to recover it. 00:26:57.594 [2024-07-15 22:43:21.250760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.594 [2024-07-15 22:43:21.250776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.594 qpair failed and we were unable to recover it. 00:26:57.594 [2024-07-15 22:43:21.251036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.594 [2024-07-15 22:43:21.251050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.594 qpair failed and we were unable to recover it. 00:26:57.594 [2024-07-15 22:43:21.251330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.594 [2024-07-15 22:43:21.251344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.594 qpair failed and we were unable to recover it. 00:26:57.594 [2024-07-15 22:43:21.251494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.594 [2024-07-15 22:43:21.251508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.595 qpair failed and we were unable to recover it. 00:26:57.595 [2024-07-15 22:43:21.251760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.595 [2024-07-15 22:43:21.251790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.595 qpair failed and we were unable to recover it. 00:26:57.595 [2024-07-15 22:43:21.252028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.595 [2024-07-15 22:43:21.252058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.595 qpair failed and we were unable to recover it. 00:26:57.595 [2024-07-15 22:43:21.252340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.595 [2024-07-15 22:43:21.252370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.595 qpair failed and we were unable to recover it. 
00:26:57.595 [2024-07-15 22:43:21.252651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.595 [2024-07-15 22:43:21.252680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.595 qpair failed and we were unable to recover it. 00:26:57.595 [2024-07-15 22:43:21.253009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.595 [2024-07-15 22:43:21.253038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.595 qpair failed and we were unable to recover it. 00:26:57.595 [2024-07-15 22:43:21.253326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.595 [2024-07-15 22:43:21.253357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.595 qpair failed and we were unable to recover it. 00:26:57.595 [2024-07-15 22:43:21.253591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.595 [2024-07-15 22:43:21.253621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.595 qpair failed and we were unable to recover it. 00:26:57.595 [2024-07-15 22:43:21.253881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.595 [2024-07-15 22:43:21.253910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.595 qpair failed and we were unable to recover it. 00:26:57.595 [2024-07-15 22:43:21.254161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.595 [2024-07-15 22:43:21.254191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.595 qpair failed and we were unable to recover it. 00:26:57.595 [2024-07-15 22:43:21.254448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.595 [2024-07-15 22:43:21.254462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.595 qpair failed and we were unable to recover it. 00:26:57.595 [2024-07-15 22:43:21.254664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.595 [2024-07-15 22:43:21.254678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.595 qpair failed and we were unable to recover it. 00:26:57.595 [2024-07-15 22:43:21.254904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.595 [2024-07-15 22:43:21.254918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.595 qpair failed and we were unable to recover it. 00:26:57.595 [2024-07-15 22:43:21.255121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.595 [2024-07-15 22:43:21.255150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.595 qpair failed and we were unable to recover it. 
00:26:57.595 [2024-07-15 22:43:21.255321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.595 [2024-07-15 22:43:21.255353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.595 qpair failed and we were unable to recover it. 00:26:57.595 [2024-07-15 22:43:21.255589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.595 [2024-07-15 22:43:21.255618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.595 qpair failed and we were unable to recover it. 00:26:57.595 [2024-07-15 22:43:21.255837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.595 [2024-07-15 22:43:21.255850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.595 qpair failed and we were unable to recover it. 00:26:57.595 [2024-07-15 22:43:21.256060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.595 [2024-07-15 22:43:21.256074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.595 qpair failed and we were unable to recover it. 00:26:57.595 [2024-07-15 22:43:21.256336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.595 [2024-07-15 22:43:21.256366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.595 qpair failed and we were unable to recover it. 00:26:57.595 [2024-07-15 22:43:21.256672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.595 [2024-07-15 22:43:21.256701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.595 qpair failed and we were unable to recover it. 00:26:57.595 [2024-07-15 22:43:21.256943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.595 [2024-07-15 22:43:21.256972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.595 qpair failed and we were unable to recover it. 00:26:57.595 [2024-07-15 22:43:21.257153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.595 [2024-07-15 22:43:21.257183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.595 qpair failed and we were unable to recover it. 00:26:57.595 [2024-07-15 22:43:21.257364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.595 [2024-07-15 22:43:21.257378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.595 qpair failed and we were unable to recover it. 00:26:57.595 [2024-07-15 22:43:21.257576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.595 [2024-07-15 22:43:21.257605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.595 qpair failed and we were unable to recover it. 
00:26:57.595 [2024-07-15 22:43:21.257849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.595 [2024-07-15 22:43:21.257879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.595 qpair failed and we were unable to recover it. 00:26:57.595 [2024-07-15 22:43:21.258189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.595 [2024-07-15 22:43:21.258218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.595 qpair failed and we were unable to recover it. 00:26:57.595 [2024-07-15 22:43:21.258467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.595 [2024-07-15 22:43:21.258481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.595 qpair failed and we were unable to recover it. 00:26:57.595 [2024-07-15 22:43:21.258774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.595 [2024-07-15 22:43:21.258787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.595 qpair failed and we were unable to recover it. 00:26:57.595 [2024-07-15 22:43:21.259081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.595 [2024-07-15 22:43:21.259123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.595 qpair failed and we were unable to recover it. 00:26:57.595 [2024-07-15 22:43:21.259455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.595 [2024-07-15 22:43:21.259487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.595 qpair failed and we were unable to recover it. 00:26:57.595 [2024-07-15 22:43:21.259772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.595 [2024-07-15 22:43:21.259802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.595 qpair failed and we were unable to recover it. 00:26:57.595 [2024-07-15 22:43:21.260135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.595 [2024-07-15 22:43:21.260164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.595 qpair failed and we were unable to recover it. 00:26:57.595 [2024-07-15 22:43:21.260496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.595 [2024-07-15 22:43:21.260527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.595 qpair failed and we were unable to recover it. 00:26:57.595 [2024-07-15 22:43:21.260763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.595 [2024-07-15 22:43:21.260792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.595 qpair failed and we were unable to recover it. 
00:26:57.595 [2024-07-15 22:43:21.261100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.595 [2024-07-15 22:43:21.261130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.595 qpair failed and we were unable to recover it. 00:26:57.595 [2024-07-15 22:43:21.261377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.595 [2024-07-15 22:43:21.261415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.595 qpair failed and we were unable to recover it. 00:26:57.595 [2024-07-15 22:43:21.261690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.595 [2024-07-15 22:43:21.261703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.595 qpair failed and we were unable to recover it. 00:26:57.595 [2024-07-15 22:43:21.261969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.595 [2024-07-15 22:43:21.261991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.595 qpair failed and we were unable to recover it. 00:26:57.595 [2024-07-15 22:43:21.262274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.595 [2024-07-15 22:43:21.262289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.595 qpair failed and we were unable to recover it. 00:26:57.595 [2024-07-15 22:43:21.262567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.595 [2024-07-15 22:43:21.262582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.595 qpair failed and we were unable to recover it. 00:26:57.595 [2024-07-15 22:43:21.262855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.595 [2024-07-15 22:43:21.262869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.596 qpair failed and we were unable to recover it. 00:26:57.596 [2024-07-15 22:43:21.263084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.596 [2024-07-15 22:43:21.263098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.596 qpair failed and we were unable to recover it. 00:26:57.596 [2024-07-15 22:43:21.263318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.596 [2024-07-15 22:43:21.263333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.596 qpair failed and we were unable to recover it. 00:26:57.596 [2024-07-15 22:43:21.263533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.596 [2024-07-15 22:43:21.263562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.596 qpair failed and we were unable to recover it. 
00:26:57.596 [2024-07-15 22:43:21.263862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.596 [2024-07-15 22:43:21.263891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.596 qpair failed and we were unable to recover it. 00:26:57.596 [2024-07-15 22:43:21.264127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.596 [2024-07-15 22:43:21.264157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.596 qpair failed and we were unable to recover it. 00:26:57.596 [2024-07-15 22:43:21.264462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.596 [2024-07-15 22:43:21.264492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.596 qpair failed and we were unable to recover it. 00:26:57.596 [2024-07-15 22:43:21.264715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.596 [2024-07-15 22:43:21.264729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.596 qpair failed and we were unable to recover it. 00:26:57.596 [2024-07-15 22:43:21.265001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.596 [2024-07-15 22:43:21.265015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.596 qpair failed and we were unable to recover it. 00:26:57.596 [2024-07-15 22:43:21.265304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.596 [2024-07-15 22:43:21.265318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.596 qpair failed and we were unable to recover it. 00:26:57.596 [2024-07-15 22:43:21.265518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.596 [2024-07-15 22:43:21.265532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.596 qpair failed and we were unable to recover it. 00:26:57.596 [2024-07-15 22:43:21.265668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.596 [2024-07-15 22:43:21.265682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.596 qpair failed and we were unable to recover it. 00:26:57.596 [2024-07-15 22:43:21.265977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.596 [2024-07-15 22:43:21.265991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.596 qpair failed and we were unable to recover it. 00:26:57.596 [2024-07-15 22:43:21.266305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.596 [2024-07-15 22:43:21.266336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.596 qpair failed and we were unable to recover it. 
00:26:57.596 [2024-07-15 22:43:21.266654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.596 [2024-07-15 22:43:21.266683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.596 qpair failed and we were unable to recover it. 00:26:57.596 [2024-07-15 22:43:21.266924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.596 [2024-07-15 22:43:21.266955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.596 qpair failed and we were unable to recover it. 00:26:57.596 [2024-07-15 22:43:21.267284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.596 [2024-07-15 22:43:21.267315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.596 qpair failed and we were unable to recover it. 00:26:57.596 [2024-07-15 22:43:21.267564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.596 [2024-07-15 22:43:21.267594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.596 qpair failed and we were unable to recover it. 00:26:57.596 [2024-07-15 22:43:21.267904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.596 [2024-07-15 22:43:21.267934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.596 qpair failed and we were unable to recover it. 00:26:57.596 [2024-07-15 22:43:21.268172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.596 [2024-07-15 22:43:21.268202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.596 qpair failed and we were unable to recover it. 00:26:57.596 [2024-07-15 22:43:21.268537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.596 [2024-07-15 22:43:21.268551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.596 qpair failed and we were unable to recover it. 00:26:57.596 [2024-07-15 22:43:21.268801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.596 [2024-07-15 22:43:21.268815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.596 qpair failed and we were unable to recover it. 00:26:57.596 [2024-07-15 22:43:21.269008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.596 [2024-07-15 22:43:21.269022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.596 qpair failed and we were unable to recover it. 00:26:57.596 [2024-07-15 22:43:21.269319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.596 [2024-07-15 22:43:21.269334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.596 qpair failed and we were unable to recover it. 
00:26:57.596 [2024-07-15 22:43:21.269539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.596 [2024-07-15 22:43:21.269552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.596 qpair failed and we were unable to recover it. 00:26:57.596 [2024-07-15 22:43:21.269803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.596 [2024-07-15 22:43:21.269818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.596 qpair failed and we were unable to recover it. 00:26:57.596 [2024-07-15 22:43:21.270121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.596 [2024-07-15 22:43:21.270151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.596 qpair failed and we were unable to recover it. 00:26:57.596 [2024-07-15 22:43:21.270482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.596 [2024-07-15 22:43:21.270514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.596 qpair failed and we were unable to recover it. 00:26:57.596 [2024-07-15 22:43:21.270737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.596 [2024-07-15 22:43:21.270767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.596 qpair failed and we were unable to recover it. 00:26:57.596 [2024-07-15 22:43:21.271140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.596 [2024-07-15 22:43:21.271169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.596 qpair failed and we were unable to recover it. 00:26:57.596 [2024-07-15 22:43:21.271502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.596 [2024-07-15 22:43:21.271534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.596 qpair failed and we were unable to recover it. 00:26:57.596 [2024-07-15 22:43:21.271733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.596 [2024-07-15 22:43:21.271762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.596 qpair failed and we were unable to recover it. 00:26:57.596 [2024-07-15 22:43:21.272116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.596 [2024-07-15 22:43:21.272145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.596 qpair failed and we were unable to recover it. 00:26:57.597 [2024-07-15 22:43:21.272493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.597 [2024-07-15 22:43:21.272524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.597 qpair failed and we were unable to recover it. 
00:26:57.597 [2024-07-15 22:43:21.272831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.597 [2024-07-15 22:43:21.272861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.597 qpair failed and we were unable to recover it. 00:26:57.597 [2024-07-15 22:43:21.273079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.597 [2024-07-15 22:43:21.273108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.597 qpair failed and we were unable to recover it. 00:26:57.597 [2024-07-15 22:43:21.273425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.597 [2024-07-15 22:43:21.273460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.597 qpair failed and we were unable to recover it. 00:26:57.597 [2024-07-15 22:43:21.273662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.597 [2024-07-15 22:43:21.273679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.597 qpair failed and we were unable to recover it. 00:26:57.597 [2024-07-15 22:43:21.273833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.597 [2024-07-15 22:43:21.273847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.597 qpair failed and we were unable to recover it. 00:26:57.597 [2024-07-15 22:43:21.274047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.597 [2024-07-15 22:43:21.274077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.597 qpair failed and we were unable to recover it. 00:26:57.597 [2024-07-15 22:43:21.274363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.597 [2024-07-15 22:43:21.274395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.597 qpair failed and we were unable to recover it. 00:26:57.597 [2024-07-15 22:43:21.274575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.597 [2024-07-15 22:43:21.274605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.597 qpair failed and we were unable to recover it. 00:26:57.597 [2024-07-15 22:43:21.274909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.597 [2024-07-15 22:43:21.274940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.597 qpair failed and we were unable to recover it. 00:26:57.597 [2024-07-15 22:43:21.275256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.597 [2024-07-15 22:43:21.275287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.597 qpair failed and we were unable to recover it. 
00:26:57.597 [2024-07-15 22:43:21.275591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.597 [2024-07-15 22:43:21.275620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.597 qpair failed and we were unable to recover it. 00:26:57.597 [2024-07-15 22:43:21.275909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.597 [2024-07-15 22:43:21.275938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.597 qpair failed and we were unable to recover it. 00:26:57.597 [2024-07-15 22:43:21.276117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.597 [2024-07-15 22:43:21.276147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.597 qpair failed and we were unable to recover it. 00:26:57.597 [2024-07-15 22:43:21.276359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.597 [2024-07-15 22:43:21.276374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.597 qpair failed and we were unable to recover it. 00:26:57.597 [2024-07-15 22:43:21.276522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.597 [2024-07-15 22:43:21.276536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.597 qpair failed and we were unable to recover it. 00:26:57.597 [2024-07-15 22:43:21.276789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.597 [2024-07-15 22:43:21.276819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.597 qpair failed and we were unable to recover it. 00:26:57.597 [2024-07-15 22:43:21.277160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.597 [2024-07-15 22:43:21.277189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.597 qpair failed and we were unable to recover it. 00:26:57.597 [2024-07-15 22:43:21.277514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.597 [2024-07-15 22:43:21.277528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.597 qpair failed and we were unable to recover it. 00:26:57.597 [2024-07-15 22:43:21.277738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.597 [2024-07-15 22:43:21.277752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.597 qpair failed and we were unable to recover it. 00:26:57.597 [2024-07-15 22:43:21.278004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.597 [2024-07-15 22:43:21.278018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.597 qpair failed and we were unable to recover it. 
00:26:57.597 [2024-07-15 22:43:21.278213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.597 [2024-07-15 22:43:21.278232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.597 qpair failed and we were unable to recover it. 00:26:57.597 [2024-07-15 22:43:21.278482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.597 [2024-07-15 22:43:21.278495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.597 qpair failed and we were unable to recover it. 00:26:57.597 [2024-07-15 22:43:21.278788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.597 [2024-07-15 22:43:21.278801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.597 qpair failed and we were unable to recover it. 00:26:57.597 [2024-07-15 22:43:21.279047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.597 [2024-07-15 22:43:21.279061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.597 qpair failed and we were unable to recover it. 00:26:57.597 [2024-07-15 22:43:21.279361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.597 [2024-07-15 22:43:21.279375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.597 qpair failed and we were unable to recover it. 00:26:57.597 [2024-07-15 22:43:21.279573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.597 [2024-07-15 22:43:21.279587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.597 qpair failed and we were unable to recover it. 00:26:57.597 [2024-07-15 22:43:21.279849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.597 [2024-07-15 22:43:21.279863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.597 qpair failed and we were unable to recover it. 00:26:57.597 [2024-07-15 22:43:21.280089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.597 [2024-07-15 22:43:21.280103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.597 qpair failed and we were unable to recover it. 00:26:57.597 [2024-07-15 22:43:21.280257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.597 [2024-07-15 22:43:21.280271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.597 qpair failed and we were unable to recover it. 00:26:57.597 [2024-07-15 22:43:21.280529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.597 [2024-07-15 22:43:21.280543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.597 qpair failed and we were unable to recover it. 
00:26:57.597 [2024-07-15 22:43:21.280844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.597 [2024-07-15 22:43:21.280858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.597 qpair failed and we were unable to recover it. 00:26:57.597 [2024-07-15 22:43:21.281131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.597 [2024-07-15 22:43:21.281145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.597 qpair failed and we were unable to recover it. 00:26:57.597 [2024-07-15 22:43:21.281348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.597 [2024-07-15 22:43:21.281362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.597 qpair failed and we were unable to recover it. 00:26:57.597 [2024-07-15 22:43:21.281557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.597 [2024-07-15 22:43:21.281570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.597 qpair failed and we were unable to recover it. 00:26:57.597 [2024-07-15 22:43:21.281843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.597 [2024-07-15 22:43:21.281856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.597 qpair failed and we were unable to recover it. 00:26:57.597 [2024-07-15 22:43:21.282066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.597 [2024-07-15 22:43:21.282080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.597 qpair failed and we were unable to recover it. 00:26:57.597 [2024-07-15 22:43:21.282263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.597 [2024-07-15 22:43:21.282277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.597 qpair failed and we were unable to recover it. 00:26:57.597 [2024-07-15 22:43:21.282555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.597 [2024-07-15 22:43:21.282569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.597 qpair failed and we were unable to recover it. 00:26:57.597 [2024-07-15 22:43:21.282822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.597 [2024-07-15 22:43:21.282836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.597 qpair failed and we were unable to recover it. 00:26:57.597 [2024-07-15 22:43:21.283060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.598 [2024-07-15 22:43:21.283075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.598 qpair failed and we were unable to recover it. 
00:26:57.598 [2024-07-15 22:43:21.283266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.598 [2024-07-15 22:43:21.283280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.598 qpair failed and we were unable to recover it. 00:26:57.598 [2024-07-15 22:43:21.283467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.598 [2024-07-15 22:43:21.283480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.598 qpair failed and we were unable to recover it. 00:26:57.598 [2024-07-15 22:43:21.283705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.598 [2024-07-15 22:43:21.283718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.598 qpair failed and we were unable to recover it. 00:26:57.598 [2024-07-15 22:43:21.284001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.598 [2024-07-15 22:43:21.284018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.598 qpair failed and we were unable to recover it. 00:26:57.598 [2024-07-15 22:43:21.284277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.598 [2024-07-15 22:43:21.284292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.598 qpair failed and we were unable to recover it. 00:26:57.598 [2024-07-15 22:43:21.284566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.598 [2024-07-15 22:43:21.284579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.598 qpair failed and we were unable to recover it. 00:26:57.598 [2024-07-15 22:43:21.284803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.598 [2024-07-15 22:43:21.284816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.598 qpair failed and we were unable to recover it. 00:26:57.598 [2024-07-15 22:43:21.285086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.598 [2024-07-15 22:43:21.285100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.598 qpair failed and we were unable to recover it. 00:26:57.598 [2024-07-15 22:43:21.285304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.598 [2024-07-15 22:43:21.285318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.598 qpair failed and we were unable to recover it. 00:26:57.598 [2024-07-15 22:43:21.285469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.598 [2024-07-15 22:43:21.285483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.598 qpair failed and we were unable to recover it. 
00:26:57.598 [2024-07-15 22:43:21.285683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.598 [2024-07-15 22:43:21.285697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.598 qpair failed and we were unable to recover it. 00:26:57.598 [2024-07-15 22:43:21.285820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.598 [2024-07-15 22:43:21.285833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.598 qpair failed and we were unable to recover it. 00:26:57.598 [2024-07-15 22:43:21.286046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.598 [2024-07-15 22:43:21.286060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.598 qpair failed and we were unable to recover it. 00:26:57.598 [2024-07-15 22:43:21.286255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.598 [2024-07-15 22:43:21.286269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.598 qpair failed and we were unable to recover it. 00:26:57.598 [2024-07-15 22:43:21.286491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.598 [2024-07-15 22:43:21.286504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.598 qpair failed and we were unable to recover it. 00:26:57.598 [2024-07-15 22:43:21.286715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.598 [2024-07-15 22:43:21.286729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.598 qpair failed and we were unable to recover it. 00:26:57.598 [2024-07-15 22:43:21.286947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.598 [2024-07-15 22:43:21.286961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.598 qpair failed and we were unable to recover it. 00:26:57.598 [2024-07-15 22:43:21.287236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.598 [2024-07-15 22:43:21.287250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.598 qpair failed and we were unable to recover it. 00:26:57.598 [2024-07-15 22:43:21.287498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.598 [2024-07-15 22:43:21.287512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.598 qpair failed and we were unable to recover it. 00:26:57.598 [2024-07-15 22:43:21.287646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.598 [2024-07-15 22:43:21.287660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.598 qpair failed and we were unable to recover it. 
00:26:57.598 [2024-07-15 22:43:21.287870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.598 [2024-07-15 22:43:21.287884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420
00:26:57.598 qpair failed and we were unable to recover it.
00:26:57.598 [2024-07-15 22:43:21.288093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.598 [2024-07-15 22:43:21.288107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420
00:26:57.598 qpair failed and we were unable to recover it.
00:26:57.598 [2024-07-15 22:43:21.288251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.598 [2024-07-15 22:43:21.288265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420
00:26:57.598 qpair failed and we were unable to recover it.
00:26:57.598 [2024-07-15 22:43:21.288462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.598 [2024-07-15 22:43:21.288476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420
00:26:57.598 qpair failed and we were unable to recover it.
00:26:57.598 [2024-07-15 22:43:21.288690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.598 [2024-07-15 22:43:21.288704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420
00:26:57.598 qpair failed and we were unable to recover it.
00:26:57.598 [2024-07-15 22:43:21.288831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.598 [2024-07-15 22:43:21.288844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420
00:26:57.598 qpair failed and we were unable to recover it.
00:26:57.598 [2024-07-15 22:43:21.289100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.598 [2024-07-15 22:43:21.289129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420
00:26:57.598 qpair failed and we were unable to recover it.
00:26:57.598 [2024-07-15 22:43:21.289311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.598 [2024-07-15 22:43:21.289342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420
00:26:57.598 qpair failed and we were unable to recover it.
00:26:57.598 [2024-07-15 22:43:21.289523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.598 [2024-07-15 22:43:21.289553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420
00:26:57.598 qpair failed and we were unable to recover it.
00:26:57.598 [2024-07-15 22:43:21.289730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.598 [2024-07-15 22:43:21.289744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420
00:26:57.598 qpair failed and we were unable to recover it.
00:26:57.598 [2024-07-15 22:43:21.289931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.598 [2024-07-15 22:43:21.289945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420
00:26:57.598 qpair failed and we were unable to recover it.
00:26:57.598 [2024-07-15 22:43:21.290133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.598 [2024-07-15 22:43:21.290147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420
00:26:57.598 qpair failed and we were unable to recover it.
00:26:57.598 [2024-07-15 22:43:21.290339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.598 [2024-07-15 22:43:21.290353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420
00:26:57.598 qpair failed and we were unable to recover it.
00:26:57.598 [2024-07-15 22:43:21.290551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.598 [2024-07-15 22:43:21.290565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420
00:26:57.598 qpair failed and we were unable to recover it.
00:26:57.598 [2024-07-15 22:43:21.290693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.598 [2024-07-15 22:43:21.290707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420
00:26:57.598 qpair failed and we were unable to recover it.
00:26:57.598 [2024-07-15 22:43:21.290934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.598 [2024-07-15 22:43:21.290947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420
00:26:57.598 qpair failed and we were unable to recover it.
00:26:57.598 [2024-07-15 22:43:21.291074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.598 [2024-07-15 22:43:21.291088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420
00:26:57.598 qpair failed and we were unable to recover it.
00:26:57.598 [2024-07-15 22:43:21.291275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.598 [2024-07-15 22:43:21.291289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420
00:26:57.598 qpair failed and we were unable to recover it.
00:26:57.598 [2024-07-15 22:43:21.291491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.598 [2024-07-15 22:43:21.291504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420
00:26:57.598 qpair failed and we were unable to recover it.
00:26:57.598 [2024-07-15 22:43:21.291645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.598 [2024-07-15 22:43:21.291659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420
00:26:57.599 qpair failed and we were unable to recover it.
00:26:57.599 [2024-07-15 22:43:21.291793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.599 [2024-07-15 22:43:21.291807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420
00:26:57.599 qpair failed and we were unable to recover it.
00:26:57.599 [2024-07-15 22:43:21.292004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.599 [2024-07-15 22:43:21.292017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420
00:26:57.599 qpair failed and we were unable to recover it.
00:26:57.599 [2024-07-15 22:43:21.292158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.599 [2024-07-15 22:43:21.292171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420
00:26:57.599 qpair failed and we were unable to recover it.
00:26:57.599 [2024-07-15 22:43:21.292367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.599 [2024-07-15 22:43:21.292384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420
00:26:57.599 qpair failed and we were unable to recover it.
00:26:57.599 [2024-07-15 22:43:21.292523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.599 [2024-07-15 22:43:21.292537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420
00:26:57.599 qpair failed and we were unable to recover it.
00:26:57.599 [2024-07-15 22:43:21.292721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.599 [2024-07-15 22:43:21.292735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420
00:26:57.599 qpair failed and we were unable to recover it.
00:26:57.599 [2024-07-15 22:43:21.292995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.599 [2024-07-15 22:43:21.293025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420
00:26:57.599 qpair failed and we were unable to recover it.
00:26:57.599 [2024-07-15 22:43:21.293194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.599 [2024-07-15 22:43:21.293232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420
00:26:57.599 qpair failed and we were unable to recover it.
00:26:57.599 [2024-07-15 22:43:21.293414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.599 [2024-07-15 22:43:21.293444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420
00:26:57.599 qpair failed and we were unable to recover it.
00:26:57.599 [2024-07-15 22:43:21.293627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.599 [2024-07-15 22:43:21.293657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420
00:26:57.599 qpair failed and we were unable to recover it.
00:26:57.599 [2024-07-15 22:43:21.293813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.599 [2024-07-15 22:43:21.293827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420
00:26:57.599 qpair failed and we were unable to recover it.
00:26:57.599 [2024-07-15 22:43:21.294085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.599 [2024-07-15 22:43:21.294099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420
00:26:57.599 qpair failed and we were unable to recover it.
00:26:57.599 [2024-07-15 22:43:21.294247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.599 [2024-07-15 22:43:21.294261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420
00:26:57.599 qpair failed and we were unable to recover it.
00:26:57.599 [2024-07-15 22:43:21.294536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.599 [2024-07-15 22:43:21.294566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420
00:26:57.599 qpair failed and we were unable to recover it.
00:26:57.599 [2024-07-15 22:43:21.294737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.599 [2024-07-15 22:43:21.294767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420
00:26:57.599 qpair failed and we were unable to recover it.
00:26:57.599 [2024-07-15 22:43:21.295000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.599 [2024-07-15 22:43:21.295030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420
00:26:57.599 qpair failed and we were unable to recover it.
00:26:57.599 [2024-07-15 22:43:21.295193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.599 [2024-07-15 22:43:21.295223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420
00:26:57.599 qpair failed and we were unable to recover it.
00:26:57.599 [2024-07-15 22:43:21.295420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.599 [2024-07-15 22:43:21.295450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420
00:26:57.599 qpair failed and we were unable to recover it.
00:26:57.599 [2024-07-15 22:43:21.295697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.599 [2024-07-15 22:43:21.295711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420
00:26:57.599 qpair failed and we were unable to recover it.
00:26:57.599 [2024-07-15 22:43:21.295959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.599 [2024-07-15 22:43:21.295973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420
00:26:57.599 qpair failed and we were unable to recover it.
00:26:57.599 [2024-07-15 22:43:21.296120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.599 [2024-07-15 22:43:21.296133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420
00:26:57.599 qpair failed and we were unable to recover it.
00:26:57.599 [2024-07-15 22:43:21.296265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.599 [2024-07-15 22:43:21.296279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420
00:26:57.599 qpair failed and we were unable to recover it.
00:26:57.599 [2024-07-15 22:43:21.296474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.599 [2024-07-15 22:43:21.296487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420
00:26:57.599 qpair failed and we were unable to recover it.
00:26:57.599 [2024-07-15 22:43:21.296714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.599 [2024-07-15 22:43:21.296728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420
00:26:57.599 qpair failed and we were unable to recover it.
00:26:57.599 [2024-07-15 22:43:21.296868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.599 [2024-07-15 22:43:21.296883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420
00:26:57.599 qpair failed and we were unable to recover it.
00:26:57.599 [2024-07-15 22:43:21.297064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.599 [2024-07-15 22:43:21.297078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420
00:26:57.599 qpair failed and we were unable to recover it.
00:26:57.599 [2024-07-15 22:43:21.297267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.599 [2024-07-15 22:43:21.297281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420
00:26:57.599 qpair failed and we were unable to recover it.
00:26:57.599 [2024-07-15 22:43:21.297479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.599 [2024-07-15 22:43:21.297492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420
00:26:57.599 qpair failed and we were unable to recover it.
00:26:57.599 [2024-07-15 22:43:21.297682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.599 [2024-07-15 22:43:21.297696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420
00:26:57.599 qpair failed and we were unable to recover it.
00:26:57.599 [2024-07-15 22:43:21.297826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.599 [2024-07-15 22:43:21.297840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420
00:26:57.599 qpair failed and we were unable to recover it.
00:26:57.599 [2024-07-15 22:43:21.298038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.599 [2024-07-15 22:43:21.298052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420
00:26:57.599 qpair failed and we were unable to recover it.
00:26:57.599 [2024-07-15 22:43:21.298249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.599 [2024-07-15 22:43:21.298263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420
00:26:57.599 qpair failed and we were unable to recover it.
00:26:57.599 [2024-07-15 22:43:21.298510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.599 [2024-07-15 22:43:21.298524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420
00:26:57.599 qpair failed and we were unable to recover it.
00:26:57.599 [2024-07-15 22:43:21.298717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.599 [2024-07-15 22:43:21.298731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420
00:26:57.599 qpair failed and we were unable to recover it.
00:26:57.599 [2024-07-15 22:43:21.298955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.599 [2024-07-15 22:43:21.298969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420
00:26:57.599 qpair failed and we were unable to recover it.
00:26:57.599 [2024-07-15 22:43:21.299112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.599 [2024-07-15 22:43:21.299126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420
00:26:57.599 qpair failed and we were unable to recover it.
00:26:57.599 [2024-07-15 22:43:21.299329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.599 [2024-07-15 22:43:21.299343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420
00:26:57.599 qpair failed and we were unable to recover it.
00:26:57.599 [2024-07-15 22:43:21.299537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.599 [2024-07-15 22:43:21.299551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420
00:26:57.599 qpair failed and we were unable to recover it.
00:26:57.599 [2024-07-15 22:43:21.299716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.599 [2024-07-15 22:43:21.299730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420
00:26:57.599 qpair failed and we were unable to recover it.
00:26:57.599 [2024-07-15 22:43:21.299982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.599 [2024-07-15 22:43:21.299996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420
00:26:57.599 qpair failed and we were unable to recover it.
00:26:57.600 [2024-07-15 22:43:21.300185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.600 [2024-07-15 22:43:21.300199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420
00:26:57.600 qpair failed and we were unable to recover it.
00:26:57.600 [2024-07-15 22:43:21.300413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.600 [2024-07-15 22:43:21.300426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420
00:26:57.600 qpair failed and we were unable to recover it.
00:26:57.600 [2024-07-15 22:43:21.300704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.600 [2024-07-15 22:43:21.300718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420
00:26:57.600 qpair failed and we were unable to recover it.
00:26:57.600 [2024-07-15 22:43:21.300918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.600 [2024-07-15 22:43:21.300934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420
00:26:57.600 qpair failed and we were unable to recover it.
00:26:57.600 [2024-07-15 22:43:21.301081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.600 [2024-07-15 22:43:21.301094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420
00:26:57.600 qpair failed and we were unable to recover it.
00:26:57.600 [2024-07-15 22:43:21.301242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.600 [2024-07-15 22:43:21.301256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420
00:26:57.600 qpair failed and we were unable to recover it.
00:26:57.600 [2024-07-15 22:43:21.301348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.600 [2024-07-15 22:43:21.301360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420
00:26:57.600 qpair failed and we were unable to recover it.
00:26:57.600 [2024-07-15 22:43:21.301557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.600 [2024-07-15 22:43:21.301570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420
00:26:57.600 qpair failed and we were unable to recover it.
00:26:57.600 [2024-07-15 22:43:21.301832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.600 [2024-07-15 22:43:21.301846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420
00:26:57.600 qpair failed and we were unable to recover it.
00:26:57.600 [2024-07-15 22:43:21.302051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.600 [2024-07-15 22:43:21.302080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420
00:26:57.600 qpair failed and we were unable to recover it.
00:26:57.600 [2024-07-15 22:43:21.302273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.600 [2024-07-15 22:43:21.302305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420
00:26:57.600 qpair failed and we were unable to recover it.
00:26:57.600 [2024-07-15 22:43:21.302611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.600 [2024-07-15 22:43:21.302641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420
00:26:57.600 qpair failed and we were unable to recover it.
00:26:57.600 [2024-07-15 22:43:21.302816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.600 [2024-07-15 22:43:21.302845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420
00:26:57.600 qpair failed and we were unable to recover it.
00:26:57.600 [2024-07-15 22:43:21.303001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.600 [2024-07-15 22:43:21.303030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420
00:26:57.600 qpair failed and we were unable to recover it.
00:26:57.600 [2024-07-15 22:43:21.303195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.600 [2024-07-15 22:43:21.303232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420
00:26:57.600 qpair failed and we were unable to recover it.
00:26:57.600 [2024-07-15 22:43:21.303468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.600 [2024-07-15 22:43:21.303482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420
00:26:57.600 qpair failed and we were unable to recover it.
00:26:57.600 [2024-07-15 22:43:21.303665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.600 [2024-07-15 22:43:21.303678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420
00:26:57.600 qpair failed and we were unable to recover it.
00:26:57.600 [2024-07-15 22:43:21.303796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.600 [2024-07-15 22:43:21.303810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420
00:26:57.600 qpair failed and we were unable to recover it.
00:26:57.600 [2024-07-15 22:43:21.304027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.600 [2024-07-15 22:43:21.304040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420
00:26:57.600 qpair failed and we were unable to recover it.
00:26:57.600 [2024-07-15 22:43:21.304303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.600 [2024-07-15 22:43:21.304347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420
00:26:57.600 qpair failed and we were unable to recover it.
00:26:57.600 [2024-07-15 22:43:21.304503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.600 [2024-07-15 22:43:21.304532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420
00:26:57.600 qpair failed and we were unable to recover it.
00:26:57.600 [2024-07-15 22:43:21.304767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.600 [2024-07-15 22:43:21.304797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420
00:26:57.600 qpair failed and we were unable to recover it.
00:26:57.600 [2024-07-15 22:43:21.304988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.600 [2024-07-15 22:43:21.305002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420
00:26:57.600 qpair failed and we were unable to recover it.
00:26:57.600 [2024-07-15 22:43:21.305277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.600 [2024-07-15 22:43:21.305291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420
00:26:57.600 qpair failed and we were unable to recover it.
00:26:57.600 [2024-07-15 22:43:21.305492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.600 [2024-07-15 22:43:21.305506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420
00:26:57.600 qpair failed and we were unable to recover it.
00:26:57.600 [2024-07-15 22:43:21.305764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.600 [2024-07-15 22:43:21.305777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420
00:26:57.600 qpair failed and we were unable to recover it.
00:26:57.600 [2024-07-15 22:43:21.305962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.600 [2024-07-15 22:43:21.305976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420
00:26:57.600 qpair failed and we were unable to recover it.
00:26:57.600 [2024-07-15 22:43:21.306241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.600 [2024-07-15 22:43:21.306271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420
00:26:57.600 qpair failed and we were unable to recover it.
00:26:57.600 [2024-07-15 22:43:21.306509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.600 [2024-07-15 22:43:21.306538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420
00:26:57.600 qpair failed and we were unable to recover it.
00:26:57.600 [2024-07-15 22:43:21.306728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.600 [2024-07-15 22:43:21.306742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420
00:26:57.600 qpair failed and we were unable to recover it.
00:26:57.600 [2024-07-15 22:43:21.306890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.600 [2024-07-15 22:43:21.306903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420
00:26:57.600 qpair failed and we were unable to recover it.
00:26:57.600 [2024-07-15 22:43:21.307182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.600 [2024-07-15 22:43:21.307196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420
00:26:57.600 qpair failed and we were unable to recover it.
00:26:57.600 [2024-07-15 22:43:21.307487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.600 [2024-07-15 22:43:21.307502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420
00:26:57.600 qpair failed and we were unable to recover it.
00:26:57.600 [2024-07-15 22:43:21.307705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.600 [2024-07-15 22:43:21.307719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420
00:26:57.600 qpair failed and we were unable to recover it.
00:26:57.600 [2024-07-15 22:43:21.307928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.600 [2024-07-15 22:43:21.307942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420
00:26:57.600 qpair failed and we were unable to recover it.
00:26:57.600 [2024-07-15 22:43:21.308127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.600 [2024-07-15 22:43:21.308140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420
00:26:57.600 qpair failed and we were unable to recover it.
00:26:57.600 [2024-07-15 22:43:21.308337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.600 [2024-07-15 22:43:21.308351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420
00:26:57.600 qpair failed and we were unable to recover it.
00:26:57.600 [2024-07-15 22:43:21.308603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.600 [2024-07-15 22:43:21.308632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420
00:26:57.600 qpair failed and we were unable to recover it.
00:26:57.600 [2024-07-15 22:43:21.308878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.600 [2024-07-15 22:43:21.308907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420
00:26:57.600 qpair failed and we were unable to recover it.
00:26:57.600 [2024-07-15 22:43:21.309144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.600 [2024-07-15 22:43:21.309174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420
00:26:57.600 qpair failed and we were unable to recover it.
00:26:57.601 [2024-07-15 22:43:21.309299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.601 [2024-07-15 22:43:21.309330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420
00:26:57.601 qpair failed and we were unable to recover it.
00:26:57.601 [2024-07-15 22:43:21.309615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.601 [2024-07-15 22:43:21.309644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420
00:26:57.601 qpair failed and we were unable to recover it.
00:26:57.601 [2024-07-15 22:43:21.309928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.601 [2024-07-15 22:43:21.309957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420
00:26:57.601 qpair failed and we were unable to recover it.
00:26:57.601 [2024-07-15 22:43:21.310252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.601 [2024-07-15 22:43:21.310293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420
00:26:57.601 qpair failed and we were unable to recover it.
00:26:57.601 [2024-07-15 22:43:21.310471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.601 [2024-07-15 22:43:21.310500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420
00:26:57.601 qpair failed and we were unable to recover it.
00:26:57.601 [2024-07-15 22:43:21.310748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.601 [2024-07-15 22:43:21.310778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420
00:26:57.601 qpair failed and we were unable to recover it.
00:26:57.601 [2024-07-15 22:43:21.311007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.601 [2024-07-15 22:43:21.311021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420
00:26:57.601 qpair failed and we were unable to recover it.
00:26:57.601 [2024-07-15 22:43:21.311268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.601 [2024-07-15 22:43:21.311282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420
00:26:57.601 qpair failed and we were unable to recover it.
00:26:57.601 [2024-07-15 22:43:21.311483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.601 [2024-07-15 22:43:21.311496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420
00:26:57.601 qpair failed and we were unable to recover it.
00:26:57.601 [2024-07-15 22:43:21.311756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.601 [2024-07-15 22:43:21.311769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420
00:26:57.601 qpair failed and we were unable to recover it.
00:26:57.601 [2024-07-15 22:43:21.311893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.601 [2024-07-15 22:43:21.311906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420
00:26:57.601 qpair failed and we were unable to recover it.
00:26:57.601 [2024-07-15 22:43:21.312145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.601 [2024-07-15 22:43:21.312175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420
00:26:57.601 qpair failed and we were unable to recover it.
00:26:57.601 [2024-07-15 22:43:21.312492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.601 [2024-07-15 22:43:21.312523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420
00:26:57.601 qpair failed and we were unable to recover it.
00:26:57.601 [2024-07-15 22:43:21.312747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.601 [2024-07-15 22:43:21.312761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420
00:26:57.601 qpair failed and we were unable to recover it.
00:26:57.601 [2024-07-15 22:43:21.313035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.601 [2024-07-15 22:43:21.313049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420
00:26:57.601 qpair failed and we were unable to recover it.
00:26:57.601 [2024-07-15 22:43:21.313277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.601 [2024-07-15 22:43:21.313290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420
00:26:57.601 qpair failed and we were unable to recover it.
00:26:57.601 [2024-07-15 22:43:21.313503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.601 [2024-07-15 22:43:21.313532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420
00:26:57.601 qpair failed and we were unable to recover it.
00:26:57.601 [2024-07-15 22:43:21.313718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.601 [2024-07-15 22:43:21.313748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420
00:26:57.601 qpair failed and we were unable to recover it.
00:26:57.601 [2024-07-15 22:43:21.313977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.601 [2024-07-15 22:43:21.314007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420
00:26:57.601 qpair failed and we were unable to recover it.
00:26:57.601 [2024-07-15 22:43:21.314246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.601 [2024-07-15 22:43:21.314277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420
00:26:57.601 qpair failed and we were unable to recover it.
00:26:57.601 [2024-07-15 22:43:21.314505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.601 [2024-07-15 22:43:21.314534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420
00:26:57.601 qpair failed and we were unable to recover it.
00:26:57.601 [2024-07-15 22:43:21.314771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.601 [2024-07-15 22:43:21.314785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420
00:26:57.601 qpair failed and we were unable to recover it.
00:26:57.601 [2024-07-15 22:43:21.314931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.601 [2024-07-15 22:43:21.314945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420
00:26:57.601 qpair failed and we were unable to recover it.
00:26:57.601 [2024-07-15 22:43:21.315193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.601 [2024-07-15 22:43:21.315207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420
00:26:57.601 qpair failed and we were unable to recover it.
00:26:57.601 [2024-07-15 22:43:21.315352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.601 [2024-07-15 22:43:21.315366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420
00:26:57.601 qpair failed and we were unable to recover it.
00:26:57.601 [2024-07-15 22:43:21.315643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.601 [2024-07-15 22:43:21.315673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420
00:26:57.601 qpair failed and we were unable to recover it.
00:26:57.601 [2024-07-15 22:43:21.315920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.601 [2024-07-15 22:43:21.315949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420
00:26:57.601 qpair failed and we were unable to recover it.
00:26:57.601 [2024-07-15 22:43:21.316190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.601 [2024-07-15 22:43:21.316219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420
00:26:57.601 qpair failed and we were unable to recover it.
00:26:57.601 [2024-07-15 22:43:21.316382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.601 [2024-07-15 22:43:21.316412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420
00:26:57.601 qpair failed and we were unable to recover it.
00:26:57.601 [2024-07-15 22:43:21.316659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.601 [2024-07-15 22:43:21.316672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420
00:26:57.602 qpair failed and we were unable to recover it.
00:26:57.602 [2024-07-15 22:43:21.316939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.602 [2024-07-15 22:43:21.316965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:57.602 qpair failed and we were unable to recover it.
00:26:57.602 [2024-07-15 22:43:21.317149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.602 [2024-07-15 22:43:21.317161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:57.602 qpair failed and we were unable to recover it.
00:26:57.602 [2024-07-15 22:43:21.317340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.602 [2024-07-15 22:43:21.317351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:57.602 qpair failed and we were unable to recover it.
00:26:57.602 [2024-07-15 22:43:21.317529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.602 [2024-07-15 22:43:21.317540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:57.602 qpair failed and we were unable to recover it.
00:26:57.602 [2024-07-15 22:43:21.317671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.602 [2024-07-15 22:43:21.317681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:57.602 qpair failed and we were unable to recover it.
00:26:57.602 [2024-07-15 22:43:21.317808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.602 [2024-07-15 22:43:21.317818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:57.602 qpair failed and we were unable to recover it.
00:26:57.602 [2024-07-15 22:43:21.318087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.602 [2024-07-15 22:43:21.318097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:57.602 qpair failed and we were unable to recover it.
00:26:57.602 [2024-07-15 22:43:21.318294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.602 [2024-07-15 22:43:21.318304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:57.602 qpair failed and we were unable to recover it.
00:26:57.602 [2024-07-15 22:43:21.318584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.602 [2024-07-15 22:43:21.318594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:57.602 qpair failed and we were unable to recover it.
00:26:57.602 [2024-07-15 22:43:21.318784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.602 [2024-07-15 22:43:21.318793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:57.602 qpair failed and we were unable to recover it.
00:26:57.602 [2024-07-15 22:43:21.318918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.602 [2024-07-15 22:43:21.318928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:57.602 qpair failed and we were unable to recover it.
00:26:57.602 [2024-07-15 22:43:21.319147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.602 [2024-07-15 22:43:21.319157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:57.602 qpair failed and we were unable to recover it.
00:26:57.602 [2024-07-15 22:43:21.319449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.602 [2024-07-15 22:43:21.319460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:57.602 qpair failed and we were unable to recover it.
00:26:57.602 [2024-07-15 22:43:21.319715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.602 [2024-07-15 22:43:21.319728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:57.602 qpair failed and we were unable to recover it.
00:26:57.602 [2024-07-15 22:43:21.319964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.602 [2024-07-15 22:43:21.319994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:57.602 qpair failed and we were unable to recover it.
00:26:57.602 [2024-07-15 22:43:21.320303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.602 [2024-07-15 22:43:21.320333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:57.602 qpair failed and we were unable to recover it.
00:26:57.602 [2024-07-15 22:43:21.320637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.602 [2024-07-15 22:43:21.320647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:57.602 qpair failed and we were unable to recover it.
00:26:57.602 [2024-07-15 22:43:21.320939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.602 [2024-07-15 22:43:21.320948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:57.602 qpair failed and we were unable to recover it.
00:26:57.602 [2024-07-15 22:43:21.321155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.602 [2024-07-15 22:43:21.321165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:57.602 qpair failed and we were unable to recover it.
00:26:57.602 [2024-07-15 22:43:21.321371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.602 [2024-07-15 22:43:21.321381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:57.602 qpair failed and we were unable to recover it.
00:26:57.602 [2024-07-15 22:43:21.321572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.602 [2024-07-15 22:43:21.321582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:57.602 qpair failed and we were unable to recover it.
00:26:57.602 [2024-07-15 22:43:21.321847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.602 [2024-07-15 22:43:21.321877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:57.602 qpair failed and we were unable to recover it.
00:26:57.602 [2024-07-15 22:43:21.322105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.602 [2024-07-15 22:43:21.322134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:57.602 qpair failed and we were unable to recover it.
00:26:57.602 [2024-07-15 22:43:21.322437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.602 [2024-07-15 22:43:21.322447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:57.602 qpair failed and we were unable to recover it.
00:26:57.602 [2024-07-15 22:43:21.322644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.602 [2024-07-15 22:43:21.322674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:57.602 qpair failed and we were unable to recover it.
00:26:57.602 [2024-07-15 22:43:21.322982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.602 [2024-07-15 22:43:21.323011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:57.602 qpair failed and we were unable to recover it.
00:26:57.602 [2024-07-15 22:43:21.323339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.602 [2024-07-15 22:43:21.323371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:57.602 qpair failed and we were unable to recover it.
00:26:57.602 [2024-07-15 22:43:21.323662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.602 [2024-07-15 22:43:21.323692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:57.602 qpair failed and we were unable to recover it.
00:26:57.602 [2024-07-15 22:43:21.324015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.602 [2024-07-15 22:43:21.324044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:57.602 qpair failed and we were unable to recover it.
00:26:57.602 [2024-07-15 22:43:21.324342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.602 [2024-07-15 22:43:21.324373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:57.602 qpair failed and we were unable to recover it.
00:26:57.602 [2024-07-15 22:43:21.324677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.602 [2024-07-15 22:43:21.324687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:57.602 qpair failed and we were unable to recover it.
00:26:57.602 [2024-07-15 22:43:21.324958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.602 [2024-07-15 22:43:21.324968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:57.602 qpair failed and we were unable to recover it.
00:26:57.602 [2024-07-15 22:43:21.325150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.602 [2024-07-15 22:43:21.325160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:57.602 qpair failed and we were unable to recover it.
00:26:57.602 [2024-07-15 22:43:21.325430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.602 [2024-07-15 22:43:21.325461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:57.602 qpair failed and we were unable to recover it.
00:26:57.602 [2024-07-15 22:43:21.325756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.602 [2024-07-15 22:43:21.325786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:57.602 qpair failed and we were unable to recover it.
00:26:57.602 [2024-07-15 22:43:21.326100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.602 [2024-07-15 22:43:21.326110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:57.602 qpair failed and we were unable to recover it.
00:26:57.602 [2024-07-15 22:43:21.326294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.602 [2024-07-15 22:43:21.326304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:57.602 qpair failed and we were unable to recover it.
00:26:57.602 [2024-07-15 22:43:21.326493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.602 [2024-07-15 22:43:21.326503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:57.603 qpair failed and we were unable to recover it.
00:26:57.603 [2024-07-15 22:43:21.326787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.603 [2024-07-15 22:43:21.326816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:57.603 qpair failed and we were unable to recover it.
00:26:57.603 [2024-07-15 22:43:21.326984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.603 [2024-07-15 22:43:21.327014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:57.603 qpair failed and we were unable to recover it.
00:26:57.603 [2024-07-15 22:43:21.327420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.603 [2024-07-15 22:43:21.327488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:57.603 qpair failed and we were unable to recover it.
00:26:57.603 [2024-07-15 22:43:21.327701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.603 [2024-07-15 22:43:21.327735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:57.603 qpair failed and we were unable to recover it.
00:26:57.603 [2024-07-15 22:43:21.328063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.603 [2024-07-15 22:43:21.328077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:57.603 qpair failed and we were unable to recover it.
00:26:57.603 [2024-07-15 22:43:21.328341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.603 [2024-07-15 22:43:21.328355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:57.603 qpair failed and we were unable to recover it.
00:26:57.603 [2024-07-15 22:43:21.328475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.603 [2024-07-15 22:43:21.328488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:57.603 qpair failed and we were unable to recover it.
00:26:57.603 [2024-07-15 22:43:21.328675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.603 [2024-07-15 22:43:21.328688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:57.603 qpair failed and we were unable to recover it.
00:26:57.603 [2024-07-15 22:43:21.328997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.603 [2024-07-15 22:43:21.329010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:57.603 qpair failed and we were unable to recover it.
00:26:57.603 [2024-07-15 22:43:21.329290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.603 [2024-07-15 22:43:21.329303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:57.603 qpair failed and we were unable to recover it.
00:26:57.603 [2024-07-15 22:43:21.329512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.603 [2024-07-15 22:43:21.329525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:57.603 qpair failed and we were unable to recover it.
00:26:57.603 [2024-07-15 22:43:21.329678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.603 [2024-07-15 22:43:21.329692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:57.603 qpair failed and we were unable to recover it.
00:26:57.603 [2024-07-15 22:43:21.329911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.603 [2024-07-15 22:43:21.329924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:57.603 qpair failed and we were unable to recover it.
00:26:57.603 [2024-07-15 22:43:21.330188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.603 [2024-07-15 22:43:21.330201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:57.603 qpair failed and we were unable to recover it.
00:26:57.603 [2024-07-15 22:43:21.330480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.603 [2024-07-15 22:43:21.330494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:57.603 qpair failed and we were unable to recover it.
00:26:57.603 [2024-07-15 22:43:21.330677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.603 [2024-07-15 22:43:21.330695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:57.603 qpair failed and we were unable to recover it.
00:26:57.603 [2024-07-15 22:43:21.330920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.603 [2024-07-15 22:43:21.330933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:57.603 qpair failed and we were unable to recover it.
00:26:57.603 [2024-07-15 22:43:21.331209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.603 [2024-07-15 22:43:21.331223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:57.603 qpair failed and we were unable to recover it.
00:26:57.603 [2024-07-15 22:43:21.331374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.603 [2024-07-15 22:43:21.331388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:57.603 qpair failed and we were unable to recover it.
00:26:57.603 [2024-07-15 22:43:21.331601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.603 [2024-07-15 22:43:21.331615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:57.603 qpair failed and we were unable to recover it.
00:26:57.603 [2024-07-15 22:43:21.331888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.603 [2024-07-15 22:43:21.331901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:57.603 qpair failed and we were unable to recover it.
00:26:57.603 [2024-07-15 22:43:21.332151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.603 [2024-07-15 22:43:21.332164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:57.603 qpair failed and we were unable to recover it.
00:26:57.603 [2024-07-15 22:43:21.332392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.603 [2024-07-15 22:43:21.332406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:57.603 qpair failed and we were unable to recover it.
00:26:57.603 [2024-07-15 22:43:21.332660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.603 [2024-07-15 22:43:21.332690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:57.603 qpair failed and we were unable to recover it.
00:26:57.603 [2024-07-15 22:43:21.332989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.603 [2024-07-15 22:43:21.333019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:57.603 qpair failed and we were unable to recover it.
00:26:57.603 [2024-07-15 22:43:21.333241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.603 [2024-07-15 22:43:21.333272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:57.603 qpair failed and we were unable to recover it.
00:26:57.603 [2024-07-15 22:43:21.333488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.603 [2024-07-15 22:43:21.333503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:57.603 qpair failed and we were unable to recover it.
00:26:57.603 [2024-07-15 22:43:21.333722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.603 [2024-07-15 22:43:21.333735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:57.603 qpair failed and we were unable to recover it.
00:26:57.603 [2024-07-15 22:43:21.333981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.603 [2024-07-15 22:43:21.333994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:57.603 qpair failed and we were unable to recover it.
00:26:57.603 [2024-07-15 22:43:21.334272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.603 [2024-07-15 22:43:21.334286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:57.603 qpair failed and we were unable to recover it.
00:26:57.603 [2024-07-15 22:43:21.334546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.603 [2024-07-15 22:43:21.334559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:57.603 qpair failed and we were unable to recover it.
00:26:57.603 [2024-07-15 22:43:21.334708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.603 [2024-07-15 22:43:21.334722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:57.603 qpair failed and we were unable to recover it.
00:26:57.603 [2024-07-15 22:43:21.334997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.603 [2024-07-15 22:43:21.335026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:57.603 qpair failed and we were unable to recover it.
00:26:57.603 [2024-07-15 22:43:21.335321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.603 [2024-07-15 22:43:21.335351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:57.603 qpair failed and we were unable to recover it.
00:26:57.603 [2024-07-15 22:43:21.335577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.603 [2024-07-15 22:43:21.335606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:57.603 qpair failed and we were unable to recover it.
00:26:57.603 [2024-07-15 22:43:21.335809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.603 [2024-07-15 22:43:21.335822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:57.603 qpair failed and we were unable to recover it.
00:26:57.603 [2024-07-15 22:43:21.336093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.603 [2024-07-15 22:43:21.336107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:57.603 qpair failed and we were unable to recover it.
00:26:57.603 [2024-07-15 22:43:21.336304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.603 [2024-07-15 22:43:21.336318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:57.603 qpair failed and we were unable to recover it.
00:26:57.603 [2024-07-15 22:43:21.336433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.603 [2024-07-15 22:43:21.336447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:57.603 qpair failed and we were unable to recover it.
00:26:57.603 [2024-07-15 22:43:21.336584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.603 [2024-07-15 22:43:21.336597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:57.603 qpair failed and we were unable to recover it.
00:26:57.603 [2024-07-15 22:43:21.336862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.604 [2024-07-15 22:43:21.336876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:57.604 qpair failed and we were unable to recover it.
00:26:57.604 [2024-07-15 22:43:21.337130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.604 [2024-07-15 22:43:21.337142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:57.604 qpair failed and we were unable to recover it.
00:26:57.604 [2024-07-15 22:43:21.337386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.604 [2024-07-15 22:43:21.337398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:57.604 qpair failed and we were unable to recover it.
00:26:57.604 [2024-07-15 22:43:21.337603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.604 [2024-07-15 22:43:21.337614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:57.604 qpair failed and we were unable to recover it.
00:26:57.604 [2024-07-15 22:43:21.337739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.604 [2024-07-15 22:43:21.337749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:57.604 qpair failed and we were unable to recover it.
00:26:57.604 [2024-07-15 22:43:21.337861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.604 [2024-07-15 22:43:21.337871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:57.604 qpair failed and we were unable to recover it.
00:26:57.604 [2024-07-15 22:43:21.338059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.604 [2024-07-15 22:43:21.338069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:57.604 qpair failed and we were unable to recover it.
00:26:57.604 [2024-07-15 22:43:21.338281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.604 [2024-07-15 22:43:21.338292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:57.604 qpair failed and we were unable to recover it.
00:26:57.604 [2024-07-15 22:43:21.338430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.604 [2024-07-15 22:43:21.338440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:57.604 qpair failed and we were unable to recover it.
00:26:57.604 [2024-07-15 22:43:21.338706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.604 [2024-07-15 22:43:21.338716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:57.604 qpair failed and we were unable to recover it.
00:26:57.604 [2024-07-15 22:43:21.338977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.604 [2024-07-15 22:43:21.338988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:57.604 qpair failed and we were unable to recover it.
00:26:57.604 [2024-07-15 22:43:21.339242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.604 [2024-07-15 22:43:21.339252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:57.604 qpair failed and we were unable to recover it.
00:26:57.604 [2024-07-15 22:43:21.339516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.604 [2024-07-15 22:43:21.339527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:57.604 qpair failed and we were unable to recover it.
00:26:57.604 [2024-07-15 22:43:21.339768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.604 [2024-07-15 22:43:21.339778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:57.604 qpair failed and we were unable to recover it.
00:26:57.604 [2024-07-15 22:43:21.339978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.604 [2024-07-15 22:43:21.339987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:57.604 qpair failed and we were unable to recover it.
00:26:57.604 [2024-07-15 22:43:21.340191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.604 [2024-07-15 22:43:21.340203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:57.604 qpair failed and we were unable to recover it.
00:26:57.604 [2024-07-15 22:43:21.340343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.604 [2024-07-15 22:43:21.340353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:57.604 qpair failed and we were unable to recover it.
00:26:57.604 [2024-07-15 22:43:21.340570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.604 [2024-07-15 22:43:21.340580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:57.604 qpair failed and we were unable to recover it.
00:26:57.604 [2024-07-15 22:43:21.340711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.604 [2024-07-15 22:43:21.340721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:57.604 qpair failed and we were unable to recover it.
00:26:57.604 [2024-07-15 22:43:21.340846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.604 [2024-07-15 22:43:21.340856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:57.604 qpair failed and we were unable to recover it.
00:26:57.604 [2024-07-15 22:43:21.341097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.604 [2024-07-15 22:43:21.341107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:57.604 qpair failed and we were unable to recover it.
00:26:57.604 [2024-07-15 22:43:21.341396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.604 [2024-07-15 22:43:21.341407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:57.604 qpair failed and we were unable to recover it.
00:26:57.604 [2024-07-15 22:43:21.341591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.604 [2024-07-15 22:43:21.341601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:57.604 qpair failed and we were unable to recover it.
00:26:57.604 [2024-07-15 22:43:21.341747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.604 [2024-07-15 22:43:21.341757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:57.604 qpair failed and we were unable to recover it.
00:26:57.604 [2024-07-15 22:43:21.342027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.604 [2024-07-15 22:43:21.342037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:57.604 qpair failed and we were unable to recover it.
00:26:57.604 [2024-07-15 22:43:21.342315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.604 [2024-07-15 22:43:21.342325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:57.604 qpair failed and we were unable to recover it.
00:26:57.604 [2024-07-15 22:43:21.342504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.604 [2024-07-15 22:43:21.342513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:57.604 qpair failed and we were unable to recover it.
00:26:57.604 [2024-07-15 22:43:21.342799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.604 [2024-07-15 22:43:21.342809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:57.604 qpair failed and we were unable to recover it.
00:26:57.604 [2024-07-15 22:43:21.343025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.604 [2024-07-15 22:43:21.343034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:57.604 qpair failed and we were unable to recover it.
00:26:57.604 [2024-07-15 22:43:21.343231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.604 [2024-07-15 22:43:21.343242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:57.604 qpair failed and we were unable to recover it.
00:26:57.604 [2024-07-15 22:43:21.343507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.604 [2024-07-15 22:43:21.343517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:57.604 qpair failed and we were unable to recover it.
00:26:57.604 [2024-07-15 22:43:21.343777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.604 [2024-07-15 22:43:21.343787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:57.604 qpair failed and we were unable to recover it.
00:26:57.604 [2024-07-15 22:43:21.343974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.604 [2024-07-15 22:43:21.343984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:57.604 qpair failed and we were unable to recover it.
00:26:57.604 [2024-07-15 22:43:21.344246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.604 [2024-07-15 22:43:21.344256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:57.604 qpair failed and we were unable to recover it.
00:26:57.604 [2024-07-15 22:43:21.344496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.604 [2024-07-15 22:43:21.344506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:57.604 qpair failed and we were unable to recover it.
00:26:57.604 [2024-07-15 22:43:21.344702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.604 [2024-07-15 22:43:21.344711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:57.604 qpair failed and we were unable to recover it.
00:26:57.604 [2024-07-15 22:43:21.344914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.604 [2024-07-15 22:43:21.344924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:57.604 qpair failed and we were unable to recover it.
00:26:57.604 [2024-07-15 22:43:21.345113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.604 [2024-07-15 22:43:21.345124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:57.604 qpair failed and we were unable to recover it.
00:26:57.604 [2024-07-15 22:43:21.345250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.604 [2024-07-15 22:43:21.345261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:57.604 qpair failed and we were unable to recover it.
00:26:57.604 [2024-07-15 22:43:21.345459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.604 [2024-07-15 22:43:21.345468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:57.604 qpair failed and we were unable to recover it.
00:26:57.604 [2024-07-15 22:43:21.345603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.604 [2024-07-15 22:43:21.345612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:57.605 qpair failed and we were unable to recover it.
00:26:57.605 [2024-07-15 22:43:21.345852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.605 [2024-07-15 22:43:21.345862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:57.605 qpair failed and we were unable to recover it.
00:26:57.605 [2024-07-15 22:43:21.346105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.605 [2024-07-15 22:43:21.346115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:57.605 qpair failed and we were unable to recover it.
00:26:57.605 [2024-07-15 22:43:21.346334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.605 [2024-07-15 22:43:21.346344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:57.605 qpair failed and we were unable to recover it.
00:26:57.605 [2024-07-15 22:43:21.346483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.605 [2024-07-15 22:43:21.346493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:57.605 qpair failed and we were unable to recover it.
00:26:57.605 [2024-07-15 22:43:21.346680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.605 [2024-07-15 22:43:21.346689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:57.605 qpair failed and we were unable to recover it.
00:26:57.605 [2024-07-15 22:43:21.346900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.605 [2024-07-15 22:43:21.346910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:57.605 qpair failed and we were unable to recover it.
00:26:57.605 [2024-07-15 22:43:21.347088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.605 [2024-07-15 22:43:21.347098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:57.605 qpair failed and we were unable to recover it.
00:26:57.605 [2024-07-15 22:43:21.347412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.605 [2024-07-15 22:43:21.347422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:57.605 qpair failed and we were unable to recover it.
00:26:57.605 [2024-07-15 22:43:21.347662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.605 [2024-07-15 22:43:21.347672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:57.605 qpair failed and we were unable to recover it.
00:26:57.605 [2024-07-15 22:43:21.347800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.605 [2024-07-15 22:43:21.347810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:57.605 qpair failed and we were unable to recover it.
00:26:57.605 [2024-07-15 22:43:21.347996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.605 [2024-07-15 22:43:21.348006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:57.605 qpair failed and we were unable to recover it.
00:26:57.605 [2024-07-15 22:43:21.348268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.605 [2024-07-15 22:43:21.348278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:57.605 qpair failed and we were unable to recover it.
00:26:57.605 [2024-07-15 22:43:21.348417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.605 [2024-07-15 22:43:21.348426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:57.605 qpair failed and we were unable to recover it.
00:26:57.605 [2024-07-15 22:43:21.348641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.605 [2024-07-15 22:43:21.348651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:57.605 qpair failed and we were unable to recover it.
00:26:57.605 [2024-07-15 22:43:21.348911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.605 [2024-07-15 22:43:21.348923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:57.605 qpair failed and we were unable to recover it.
00:26:57.605 [2024-07-15 22:43:21.349110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.605 [2024-07-15 22:43:21.349120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:57.605 qpair failed and we were unable to recover it.
00:26:57.605 [2024-07-15 22:43:21.349397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.605 [2024-07-15 22:43:21.349408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:57.605 qpair failed and we were unable to recover it.
00:26:57.605 [2024-07-15 22:43:21.349539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.605 [2024-07-15 22:43:21.349549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:57.605 qpair failed and we were unable to recover it.
00:26:57.605 [2024-07-15 22:43:21.349677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.605 [2024-07-15 22:43:21.349687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:57.605 qpair failed and we were unable to recover it.
00:26:57.605 [2024-07-15 22:43:21.349947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.605 [2024-07-15 22:43:21.349958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:57.605 qpair failed and we were unable to recover it.
00:26:57.605 [2024-07-15 22:43:21.350132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.605 [2024-07-15 22:43:21.350141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:57.605 qpair failed and we were unable to recover it.
00:26:57.605 [2024-07-15 22:43:21.350274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.605 [2024-07-15 22:43:21.350284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:57.605 qpair failed and we were unable to recover it.
00:26:57.605 [2024-07-15 22:43:21.350548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.605 [2024-07-15 22:43:21.350558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:57.605 qpair failed and we were unable to recover it.
00:26:57.605 [2024-07-15 22:43:21.350849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.605 [2024-07-15 22:43:21.350859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:57.605 qpair failed and we were unable to recover it.
00:26:57.605 [2024-07-15 22:43:21.350977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.605 [2024-07-15 22:43:21.350987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:57.605 qpair failed and we were unable to recover it.
00:26:57.605 [2024-07-15 22:43:21.351253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.605 [2024-07-15 22:43:21.351283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:57.605 qpair failed and we were unable to recover it.
00:26:57.605 [2024-07-15 22:43:21.351565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.605 [2024-07-15 22:43:21.351596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:57.605 qpair failed and we were unable to recover it.
00:26:57.605 [2024-07-15 22:43:21.351850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.605 [2024-07-15 22:43:21.351860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:57.605 qpair failed and we were unable to recover it.
00:26:57.605 [2024-07-15 22:43:21.352124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.605 [2024-07-15 22:43:21.352134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:57.605 qpair failed and we were unable to recover it.
00:26:57.605 [2024-07-15 22:43:21.352399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.605 [2024-07-15 22:43:21.352410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:57.605 qpair failed and we were unable to recover it.
00:26:57.605 [2024-07-15 22:43:21.352676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.605 [2024-07-15 22:43:21.352686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:57.605 qpair failed and we were unable to recover it.
00:26:57.605 [2024-07-15 22:43:21.353002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.605 [2024-07-15 22:43:21.353011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:57.605 qpair failed and we were unable to recover it.
00:26:57.605 [2024-07-15 22:43:21.353297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.605 [2024-07-15 22:43:21.353308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:57.605 qpair failed and we were unable to recover it.
00:26:57.605 [2024-07-15 22:43:21.353500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.606 [2024-07-15 22:43:21.353509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:57.606 qpair failed and we were unable to recover it.
00:26:57.606 [2024-07-15 22:43:21.353732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.606 [2024-07-15 22:43:21.353742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:57.606 qpair failed and we were unable to recover it.
00:26:57.606 [2024-07-15 22:43:21.354015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.606 [2024-07-15 22:43:21.354025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:57.606 qpair failed and we were unable to recover it.
00:26:57.606 [2024-07-15 22:43:21.354231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.606 [2024-07-15 22:43:21.354242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:57.606 qpair failed and we were unable to recover it.
00:26:57.606 [2024-07-15 22:43:21.354446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.606 [2024-07-15 22:43:21.354456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:57.606 qpair failed and we were unable to recover it.
00:26:57.606 [2024-07-15 22:43:21.354722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.606 [2024-07-15 22:43:21.354732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:57.606 qpair failed and we were unable to recover it.
00:26:57.606 [2024-07-15 22:43:21.354929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.606 [2024-07-15 22:43:21.354939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:57.606 qpair failed and we were unable to recover it.
00:26:57.606 [2024-07-15 22:43:21.355182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.606 [2024-07-15 22:43:21.355192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:57.606 qpair failed and we were unable to recover it.
00:26:57.606 [2024-07-15 22:43:21.355514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.606 [2024-07-15 22:43:21.355530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:57.606 qpair failed and we were unable to recover it.
00:26:57.606 [2024-07-15 22:43:21.355814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.606 [2024-07-15 22:43:21.355845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:57.606 qpair failed and we were unable to recover it.
00:26:57.606 [2024-07-15 22:43:21.356077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.606 [2024-07-15 22:43:21.356106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:57.606 qpair failed and we were unable to recover it.
00:26:57.606 [2024-07-15 22:43:21.356398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.606 [2024-07-15 22:43:21.356429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:57.606 qpair failed and we were unable to recover it.
00:26:57.606 [2024-07-15 22:43:21.356718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.606 [2024-07-15 22:43:21.356731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:57.606 qpair failed and we were unable to recover it.
00:26:57.606 [2024-07-15 22:43:21.357004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.606 [2024-07-15 22:43:21.357017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:57.606 qpair failed and we were unable to recover it.
00:26:57.606 [2024-07-15 22:43:21.357214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.606 [2024-07-15 22:43:21.357232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:57.606 qpair failed and we were unable to recover it.
00:26:57.606 [2024-07-15 22:43:21.357520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.606 [2024-07-15 22:43:21.357534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:57.606 qpair failed and we were unable to recover it.
00:26:57.606 [2024-07-15 22:43:21.357719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.606 [2024-07-15 22:43:21.357733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:57.606 qpair failed and we were unable to recover it.
00:26:57.606 [2024-07-15 22:43:21.358065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.606 [2024-07-15 22:43:21.358106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:57.606 qpair failed and we were unable to recover it.
00:26:57.606 [2024-07-15 22:43:21.358417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.606 [2024-07-15 22:43:21.358448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:57.606 qpair failed and we were unable to recover it.
00:26:57.606 [2024-07-15 22:43:21.358701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.606 [2024-07-15 22:43:21.358730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:57.606 qpair failed and we were unable to recover it.
00:26:57.606 [2024-07-15 22:43:21.358993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.606 [2024-07-15 22:43:21.359022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:57.606 qpair failed and we were unable to recover it.
00:26:57.606 [2024-07-15 22:43:21.359330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.606 [2024-07-15 22:43:21.359374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:57.606 qpair failed and we were unable to recover it.
00:26:57.606 [2024-07-15 22:43:21.359593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.606 [2024-07-15 22:43:21.359607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:57.606 qpair failed and we were unable to recover it.
00:26:57.606 [2024-07-15 22:43:21.359795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.606 [2024-07-15 22:43:21.359808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:57.606 qpair failed and we were unable to recover it.
00:26:57.606 [2024-07-15 22:43:21.360106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.606 [2024-07-15 22:43:21.360119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:57.606 qpair failed and we were unable to recover it.
00:26:57.606 [2024-07-15 22:43:21.360345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.606 [2024-07-15 22:43:21.360359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:57.606 qpair failed and we were unable to recover it.
00:26:57.606 [2024-07-15 22:43:21.360493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.606 [2024-07-15 22:43:21.360507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:57.606 qpair failed and we were unable to recover it.
00:26:57.606 [2024-07-15 22:43:21.360768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.606 [2024-07-15 22:43:21.360797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:57.606 qpair failed and we were unable to recover it.
00:26:57.606 [2024-07-15 22:43:21.361117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.606 [2024-07-15 22:43:21.361146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:57.606 qpair failed and we were unable to recover it.
00:26:57.606 [2024-07-15 22:43:21.361379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.606 [2024-07-15 22:43:21.361409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:57.606 qpair failed and we were unable to recover it.
00:26:57.606 [2024-07-15 22:43:21.361624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.606 [2024-07-15 22:43:21.361654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:57.606 qpair failed and we were unable to recover it.
00:26:57.606 [2024-07-15 22:43:21.361883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.606 [2024-07-15 22:43:21.361897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:57.606 qpair failed and we were unable to recover it.
00:26:57.606 [2024-07-15 22:43:21.362165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.606 [2024-07-15 22:43:21.362179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:57.606 qpair failed and we were unable to recover it.
00:26:57.606 [2024-07-15 22:43:21.362377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.606 [2024-07-15 22:43:21.362391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:57.606 qpair failed and we were unable to recover it.
00:26:57.606 [2024-07-15 22:43:21.362668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.606 [2024-07-15 22:43:21.362681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:57.606 qpair failed and we were unable to recover it.
00:26:57.606 [2024-07-15 22:43:21.362924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.606 [2024-07-15 22:43:21.362938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:57.606 qpair failed and we were unable to recover it.
00:26:57.606 [2024-07-15 22:43:21.363213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.606 [2024-07-15 22:43:21.363230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:57.606 qpair failed and we were unable to recover it.
00:26:57.606 [2024-07-15 22:43:21.363507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.606 [2024-07-15 22:43:21.363521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:57.606 qpair failed and we were unable to recover it.
00:26:57.606 [2024-07-15 22:43:21.363799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.606 [2024-07-15 22:43:21.363829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:57.606 qpair failed and we were unable to recover it.
00:26:57.606 [2024-07-15 22:43:21.364115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.606 [2024-07-15 22:43:21.364144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:57.606 qpair failed and we were unable to recover it.
00:26:57.606 [2024-07-15 22:43:21.364457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.607 [2024-07-15 22:43:21.364489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:57.607 qpair failed and we were unable to recover it.
00:26:57.607 [2024-07-15 22:43:21.364720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.607 [2024-07-15 22:43:21.364748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:57.607 qpair failed and we were unable to recover it.
00:26:57.607 [2024-07-15 22:43:21.364985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.607 [2024-07-15 22:43:21.364999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:57.607 qpair failed and we were unable to recover it.
00:26:57.607 [2024-07-15 22:43:21.365253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.607 [2024-07-15 22:43:21.365284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:57.607 qpair failed and we were unable to recover it.
00:26:57.607 [2024-07-15 22:43:21.365604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.607 [2024-07-15 22:43:21.365633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:57.607 qpair failed and we were unable to recover it.
00:26:57.607 [2024-07-15 22:43:21.365915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.607 [2024-07-15 22:43:21.365944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:57.607 qpair failed and we were unable to recover it.
00:26:57.607 [2024-07-15 22:43:21.366253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.607 [2024-07-15 22:43:21.366284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:57.607 qpair failed and we were unable to recover it.
00:26:57.607 [2024-07-15 22:43:21.366519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.607 [2024-07-15 22:43:21.366548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:57.607 qpair failed and we were unable to recover it.
00:26:57.607 [2024-07-15 22:43:21.366930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.607 [2024-07-15 22:43:21.366996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420
00:26:57.607 qpair failed and we were unable to recover it.
00:26:57.607 [2024-07-15 22:43:21.367321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.607 [2024-07-15 22:43:21.367358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420
00:26:57.607 qpair failed and we were unable to recover it.
00:26:57.607 [2024-07-15 22:43:21.367596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.607 [2024-07-15 22:43:21.367627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420
00:26:57.607 qpair failed and we were unable to recover it.
00:26:57.607 [2024-07-15 22:43:21.367853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.607 [2024-07-15 22:43:21.367883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420
00:26:57.607 qpair failed and we were unable to recover it.
00:26:57.607 [2024-07-15 22:43:21.368111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.607 [2024-07-15 22:43:21.368141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420
00:26:57.607 qpair failed and we were unable to recover it.
00:26:57.607 [2024-07-15 22:43:21.368327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.607 [2024-07-15 22:43:21.368357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420
00:26:57.607 qpair failed and we were unable to recover it.
00:26:57.607 [2024-07-15 22:43:21.368585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.607 [2024-07-15 22:43:21.368616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420
00:26:57.607 qpair failed and we were unable to recover it.
00:26:57.607 [2024-07-15 22:43:21.368844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.607 [2024-07-15 22:43:21.368875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420
00:26:57.607 qpair failed and we were unable to recover it.
00:26:57.607 [2024-07-15 22:43:21.369182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.607 [2024-07-15 22:43:21.369211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420
00:26:57.607 qpair failed and we were unable to recover it.
00:26:57.607 [2024-07-15 22:43:21.369450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.607 [2024-07-15 22:43:21.369494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420
00:26:57.607 qpair failed and we were unable to recover it.
00:26:57.607 [2024-07-15 22:43:21.369626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.607 [2024-07-15 22:43:21.369640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:57.607 qpair failed and we were unable to recover it. 00:26:57.607 [2024-07-15 22:43:21.369838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.607 [2024-07-15 22:43:21.369879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:57.607 qpair failed and we were unable to recover it. 00:26:57.607 [2024-07-15 22:43:21.370185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.607 [2024-07-15 22:43:21.370215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:57.607 qpair failed and we were unable to recover it. 00:26:57.607 [2024-07-15 22:43:21.370522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.607 [2024-07-15 22:43:21.370553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:57.607 qpair failed and we were unable to recover it. 00:26:57.607 [2024-07-15 22:43:21.370866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.607 [2024-07-15 22:43:21.370895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:57.607 qpair failed and we were unable to recover it. 00:26:57.607 [2024-07-15 22:43:21.371233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.607 [2024-07-15 22:43:21.371264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:57.607 qpair failed and we were unable to recover it. 00:26:57.607 [2024-07-15 22:43:21.371570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.607 [2024-07-15 22:43:21.371600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:57.607 qpair failed and we were unable to recover it. 00:26:57.607 [2024-07-15 22:43:21.371902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.607 [2024-07-15 22:43:21.371932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:57.607 qpair failed and we were unable to recover it. 00:26:57.607 [2024-07-15 22:43:21.372174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.607 [2024-07-15 22:43:21.372204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:57.607 qpair failed and we were unable to recover it. 00:26:57.607 [2024-07-15 22:43:21.372565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.607 [2024-07-15 22:43:21.372596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:57.607 qpair failed and we were unable to recover it. 
00:26:57.607 [2024-07-15 22:43:21.372831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.607 [2024-07-15 22:43:21.372876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:57.607 qpair failed and we were unable to recover it. 00:26:57.607 [2024-07-15 22:43:21.373152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.607 [2024-07-15 22:43:21.373166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:57.607 qpair failed and we were unable to recover it. 00:26:57.607 [2024-07-15 22:43:21.373355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.607 [2024-07-15 22:43:21.373370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:57.607 qpair failed and we were unable to recover it. 00:26:57.607 [2024-07-15 22:43:21.373574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.607 [2024-07-15 22:43:21.373604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:57.607 qpair failed and we were unable to recover it. 00:26:57.607 [2024-07-15 22:43:21.373844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.607 [2024-07-15 22:43:21.373873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:57.607 qpair failed and we were unable to recover it. 00:26:57.607 [2024-07-15 22:43:21.374091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.607 [2024-07-15 22:43:21.374120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:57.607 qpair failed and we were unable to recover it. 00:26:57.607 [2024-07-15 22:43:21.374341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.607 [2024-07-15 22:43:21.374371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:57.607 qpair failed and we were unable to recover it. 00:26:57.607 [2024-07-15 22:43:21.374648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.607 [2024-07-15 22:43:21.374665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:57.607 qpair failed and we were unable to recover it. 00:26:57.607 [2024-07-15 22:43:21.374851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.607 [2024-07-15 22:43:21.374865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:57.607 qpair failed and we were unable to recover it. 00:26:57.607 [2024-07-15 22:43:21.375137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.607 [2024-07-15 22:43:21.375151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:57.607 qpair failed and we were unable to recover it. 
00:26:57.607 [2024-07-15 22:43:21.375373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.607 [2024-07-15 22:43:21.375387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:57.607 qpair failed and we were unable to recover it. 00:26:57.607 [2024-07-15 22:43:21.375590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.607 [2024-07-15 22:43:21.375604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:57.607 qpair failed and we were unable to recover it. 00:26:57.607 [2024-07-15 22:43:21.375817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.607 [2024-07-15 22:43:21.375830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:57.608 qpair failed and we were unable to recover it. 00:26:57.608 [2024-07-15 22:43:21.376122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.608 [2024-07-15 22:43:21.376136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:57.608 qpair failed and we were unable to recover it. 00:26:57.608 [2024-07-15 22:43:21.376388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.608 [2024-07-15 22:43:21.376402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:57.608 qpair failed and we were unable to recover it. 00:26:57.608 [2024-07-15 22:43:21.376616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.608 [2024-07-15 22:43:21.376630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:57.608 qpair failed and we were unable to recover it. 00:26:57.608 [2024-07-15 22:43:21.376887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.608 [2024-07-15 22:43:21.376901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:57.608 qpair failed and we were unable to recover it. 00:26:57.608 [2024-07-15 22:43:21.377176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.608 [2024-07-15 22:43:21.377190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:57.608 qpair failed and we were unable to recover it. 00:26:57.608 [2024-07-15 22:43:21.377422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.608 [2024-07-15 22:43:21.377437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:57.608 qpair failed and we were unable to recover it. 00:26:57.608 [2024-07-15 22:43:21.377703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.608 [2024-07-15 22:43:21.377717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:57.608 qpair failed and we were unable to recover it. 
00:26:57.608 [2024-07-15 22:43:21.377988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.608 [2024-07-15 22:43:21.378002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:57.608 qpair failed and we were unable to recover it. 00:26:57.608 [2024-07-15 22:43:21.378204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.608 [2024-07-15 22:43:21.378218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:57.608 qpair failed and we were unable to recover it. 00:26:57.608 [2024-07-15 22:43:21.378415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.608 [2024-07-15 22:43:21.378429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:57.608 qpair failed and we were unable to recover it. 00:26:57.608 [2024-07-15 22:43:21.378680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.608 [2024-07-15 22:43:21.378710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:57.608 qpair failed and we were unable to recover it. 00:26:57.608 [2024-07-15 22:43:21.378928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.608 [2024-07-15 22:43:21.378957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:57.608 qpair failed and we were unable to recover it. 00:26:57.608 [2024-07-15 22:43:21.379249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.608 [2024-07-15 22:43:21.379280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:57.608 qpair failed and we were unable to recover it. 00:26:57.608 [2024-07-15 22:43:21.379537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.608 [2024-07-15 22:43:21.379567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:57.608 qpair failed and we were unable to recover it. 00:26:57.608 [2024-07-15 22:43:21.379908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.608 [2024-07-15 22:43:21.379938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:57.608 qpair failed and we were unable to recover it. 00:26:57.608 [2024-07-15 22:43:21.380244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.608 [2024-07-15 22:43:21.380275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:57.608 qpair failed and we were unable to recover it. 00:26:57.608 [2024-07-15 22:43:21.380561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.608 [2024-07-15 22:43:21.380591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:57.608 qpair failed and we were unable to recover it. 
00:26:57.608 [2024-07-15 22:43:21.380922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.608 [2024-07-15 22:43:21.380952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:57.608 qpair failed and we were unable to recover it. 00:26:57.608 [2024-07-15 22:43:21.381259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.608 [2024-07-15 22:43:21.381290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:57.608 qpair failed and we were unable to recover it. 00:26:57.608 [2024-07-15 22:43:21.381550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.608 [2024-07-15 22:43:21.381579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:57.608 qpair failed and we were unable to recover it. 00:26:57.608 [2024-07-15 22:43:21.381834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.608 [2024-07-15 22:43:21.381864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:57.608 qpair failed and we were unable to recover it. 00:26:57.608 [2024-07-15 22:43:21.382085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.608 [2024-07-15 22:43:21.382120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:57.608 qpair failed and we were unable to recover it. 00:26:57.608 [2024-07-15 22:43:21.382430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.608 [2024-07-15 22:43:21.382461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:57.608 qpair failed and we were unable to recover it. 00:26:57.608 [2024-07-15 22:43:21.382635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.608 [2024-07-15 22:43:21.382665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:57.608 qpair failed and we were unable to recover it. 00:26:57.608 [2024-07-15 22:43:21.382992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.608 [2024-07-15 22:43:21.383030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:57.608 qpair failed and we were unable to recover it. 00:26:57.608 [2024-07-15 22:43:21.383324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.608 [2024-07-15 22:43:21.383355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:57.608 qpair failed and we were unable to recover it. 00:26:57.608 [2024-07-15 22:43:21.383665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.608 [2024-07-15 22:43:21.383695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:57.608 qpair failed and we were unable to recover it. 
00:26:57.608 [2024-07-15 22:43:21.384000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.608 [2024-07-15 22:43:21.384029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:57.608 qpair failed and we were unable to recover it. 00:26:57.608 [2024-07-15 22:43:21.384279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.608 [2024-07-15 22:43:21.384310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:57.608 qpair failed and we were unable to recover it. 00:26:57.608 [2024-07-15 22:43:21.384627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.608 [2024-07-15 22:43:21.384657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:57.608 qpair failed and we were unable to recover it. 00:26:57.608 [2024-07-15 22:43:21.384952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.608 [2024-07-15 22:43:21.384965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:57.608 qpair failed and we were unable to recover it. 00:26:57.608 [2024-07-15 22:43:21.385247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.608 [2024-07-15 22:43:21.385261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:57.608 qpair failed and we were unable to recover it. 00:26:57.608 [2024-07-15 22:43:21.385391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.608 [2024-07-15 22:43:21.385405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:57.608 qpair failed and we were unable to recover it. 00:26:57.608 [2024-07-15 22:43:21.385675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.608 [2024-07-15 22:43:21.385689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:57.608 qpair failed and we were unable to recover it. 00:26:57.608 [2024-07-15 22:43:21.385898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.608 [2024-07-15 22:43:21.385928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:57.608 qpair failed and we were unable to recover it. 00:26:57.608 [2024-07-15 22:43:21.386248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.608 [2024-07-15 22:43:21.386279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:57.608 qpair failed and we were unable to recover it. 00:26:57.608 [2024-07-15 22:43:21.386576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.608 [2024-07-15 22:43:21.386605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:57.608 qpair failed and we were unable to recover it. 
00:26:57.608 [2024-07-15 22:43:21.386868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.608 [2024-07-15 22:43:21.386898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:57.608 qpair failed and we were unable to recover it. 00:26:57.608 [2024-07-15 22:43:21.387134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.608 [2024-07-15 22:43:21.387165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:57.608 qpair failed and we were unable to recover it. 00:26:57.608 [2024-07-15 22:43:21.387491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.608 [2024-07-15 22:43:21.387526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:57.608 qpair failed and we were unable to recover it. 00:26:57.608 [2024-07-15 22:43:21.387800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.609 [2024-07-15 22:43:21.387815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:57.609 qpair failed and we were unable to recover it. 00:26:57.609 [2024-07-15 22:43:21.388016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.609 [2024-07-15 22:43:21.388030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:57.609 qpair failed and we were unable to recover it. 00:26:57.609 [2024-07-15 22:43:21.388210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.609 [2024-07-15 22:43:21.388223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:57.609 qpair failed and we were unable to recover it. 00:26:57.609 [2024-07-15 22:43:21.388478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.609 [2024-07-15 22:43:21.388492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:57.609 qpair failed and we were unable to recover it. 00:26:57.609 [2024-07-15 22:43:21.388801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.609 [2024-07-15 22:43:21.388815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:57.609 qpair failed and we were unable to recover it. 00:26:57.609 [2024-07-15 22:43:21.389040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.609 [2024-07-15 22:43:21.389054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:57.609 qpair failed and we were unable to recover it. 00:26:57.609 [2024-07-15 22:43:21.389322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.609 [2024-07-15 22:43:21.389336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:57.609 qpair failed and we were unable to recover it. 
00:26:57.609 [2024-07-15 22:43:21.389583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.609 [2024-07-15 22:43:21.389596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:57.609 qpair failed and we were unable to recover it. 00:26:57.609 [2024-07-15 22:43:21.389811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.609 [2024-07-15 22:43:21.389825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:57.609 qpair failed and we were unable to recover it. 00:26:57.609 [2024-07-15 22:43:21.390120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.609 [2024-07-15 22:43:21.390134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:57.609 qpair failed and we were unable to recover it. 00:26:57.609 [2024-07-15 22:43:21.390348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.609 [2024-07-15 22:43:21.390363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:57.609 qpair failed and we were unable to recover it. 00:26:57.609 [2024-07-15 22:43:21.390612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.609 [2024-07-15 22:43:21.390626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:57.609 qpair failed and we were unable to recover it. 00:26:57.609 [2024-07-15 22:43:21.390896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.609 [2024-07-15 22:43:21.390910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:57.609 qpair failed and we were unable to recover it. 00:26:57.609 [2024-07-15 22:43:21.391025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.609 [2024-07-15 22:43:21.391039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:57.609 qpair failed and we were unable to recover it. 00:26:57.609 [2024-07-15 22:43:21.391319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.609 [2024-07-15 22:43:21.391334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:57.609 qpair failed and we were unable to recover it. 00:26:57.609 [2024-07-15 22:43:21.391581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.609 [2024-07-15 22:43:21.391594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:57.609 qpair failed and we were unable to recover it. 00:26:57.609 [2024-07-15 22:43:21.391780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.609 [2024-07-15 22:43:21.391794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:57.609 qpair failed and we were unable to recover it. 
00:26:57.609 [2024-07-15 22:43:21.391972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.609 [2024-07-15 22:43:21.391987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:57.609 qpair failed and we were unable to recover it. 00:26:57.609 [2024-07-15 22:43:21.392250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.609 [2024-07-15 22:43:21.392280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:57.609 qpair failed and we were unable to recover it. 00:26:57.609 [2024-07-15 22:43:21.392516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.609 [2024-07-15 22:43:21.392546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:57.609 qpair failed and we were unable to recover it. 00:26:57.609 [2024-07-15 22:43:21.392845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.609 [2024-07-15 22:43:21.392859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:57.609 qpair failed and we were unable to recover it. 00:26:57.609 [2024-07-15 22:43:21.393173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.609 [2024-07-15 22:43:21.393186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:57.609 qpair failed and we were unable to recover it. 00:26:57.609 [2024-07-15 22:43:21.393336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.609 [2024-07-15 22:43:21.393368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:57.609 qpair failed and we were unable to recover it. 00:26:57.609 [2024-07-15 22:43:21.393602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.609 [2024-07-15 22:43:21.393632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:57.609 qpair failed and we were unable to recover it. 00:26:57.609 [2024-07-15 22:43:21.393870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.609 [2024-07-15 22:43:21.393900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:57.609 qpair failed and we were unable to recover it. 00:26:57.609 [2024-07-15 22:43:21.394199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.609 [2024-07-15 22:43:21.394236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:57.609 qpair failed and we were unable to recover it. 00:26:57.609 [2024-07-15 22:43:21.394487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.609 [2024-07-15 22:43:21.394517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:57.609 qpair failed and we were unable to recover it. 
00:26:57.609 [2024-07-15 22:43:21.394821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.609 [2024-07-15 22:43:21.394835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:57.609 qpair failed and we were unable to recover it. 00:26:57.609 [2024-07-15 22:43:21.395034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.609 [2024-07-15 22:43:21.395048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:57.609 qpair failed and we were unable to recover it. 00:26:57.609 [2024-07-15 22:43:21.395327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.609 [2024-07-15 22:43:21.395341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:57.609 qpair failed and we were unable to recover it. 00:26:57.609 [2024-07-15 22:43:21.395541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.609 [2024-07-15 22:43:21.395555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:57.609 qpair failed and we were unable to recover it. 00:26:57.609 [2024-07-15 22:43:21.395678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.609 [2024-07-15 22:43:21.395692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:57.609 qpair failed and we were unable to recover it. 00:26:57.609 [2024-07-15 22:43:21.395962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.609 [2024-07-15 22:43:21.395999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:57.609 qpair failed and we were unable to recover it. 00:26:57.609 [2024-07-15 22:43:21.396297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.609 [2024-07-15 22:43:21.396328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:57.609 qpair failed and we were unable to recover it. 00:26:57.609 [2024-07-15 22:43:21.396564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.609 [2024-07-15 22:43:21.396593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:57.609 qpair failed and we were unable to recover it. 00:26:57.609 [2024-07-15 22:43:21.396844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.609 [2024-07-15 22:43:21.396874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:57.609 qpair failed and we were unable to recover it. 00:26:57.609 [2024-07-15 22:43:21.397136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.609 [2024-07-15 22:43:21.397150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:57.609 qpair failed and we were unable to recover it. 
00:26:57.609 [2024-07-15 22:43:21.397424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.609 [2024-07-15 22:43:21.397438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:57.609 qpair failed and we were unable to recover it. 00:26:57.609 [2024-07-15 22:43:21.397625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.609 [2024-07-15 22:43:21.397638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:57.609 qpair failed and we were unable to recover it. 00:26:57.609 [2024-07-15 22:43:21.397820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.609 [2024-07-15 22:43:21.397848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:57.609 qpair failed and we were unable to recover it. 00:26:57.609 [2024-07-15 22:43:21.398156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.609 [2024-07-15 22:43:21.398185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:57.610 qpair failed and we were unable to recover it. 00:26:57.610 [2024-07-15 22:43:21.398505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.610 [2024-07-15 22:43:21.398535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:57.610 qpair failed and we were unable to recover it. 00:26:57.610 [2024-07-15 22:43:21.398833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.610 [2024-07-15 22:43:21.398862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:57.610 qpair failed and we were unable to recover it. 00:26:57.610 [2024-07-15 22:43:21.399095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.610 [2024-07-15 22:43:21.399125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:57.610 qpair failed and we were unable to recover it. 00:26:57.610 [2024-07-15 22:43:21.399379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.610 [2024-07-15 22:43:21.399410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:57.610 qpair failed and we were unable to recover it. 00:26:57.610 [2024-07-15 22:43:21.399607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.610 [2024-07-15 22:43:21.399637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:57.610 qpair failed and we were unable to recover it. 00:26:57.610 [2024-07-15 22:43:21.399941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.610 [2024-07-15 22:43:21.399954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:57.610 qpair failed and we were unable to recover it. 
00:26:57.610 [2024-07-15 22:43:21.400173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.610 [2024-07-15 22:43:21.400186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:57.610 qpair failed and we were unable to recover it. 00:26:57.610 [2024-07-15 22:43:21.400454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.610 [2024-07-15 22:43:21.400468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:57.610 qpair failed and we were unable to recover it. 00:26:57.610 [2024-07-15 22:43:21.400596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.610 [2024-07-15 22:43:21.400612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:57.610 qpair failed and we were unable to recover it. 00:26:57.610 [2024-07-15 22:43:21.400793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.610 [2024-07-15 22:43:21.400820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:57.610 qpair failed and we were unable to recover it. 00:26:57.610 [2024-07-15 22:43:21.401131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.610 [2024-07-15 22:43:21.401161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:57.610 qpair failed and we were unable to recover it. 00:26:57.610 [2024-07-15 22:43:21.401467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.610 [2024-07-15 22:43:21.401499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:57.610 qpair failed and we were unable to recover it. 00:26:57.610 [2024-07-15 22:43:21.401805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.610 [2024-07-15 22:43:21.401835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:57.610 qpair failed and we were unable to recover it. 00:26:57.610 [2024-07-15 22:43:21.402148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.610 [2024-07-15 22:43:21.402179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:57.610 qpair failed and we were unable to recover it. 00:26:57.610 [2024-07-15 22:43:21.402487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.610 [2024-07-15 22:43:21.402519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:57.610 qpair failed and we were unable to recover it. 00:26:57.610 [2024-07-15 22:43:21.402809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.610 [2024-07-15 22:43:21.402823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:57.610 qpair failed and we were unable to recover it. 
00:26:57.610 [2024-07-15 22:43:21.403070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.610 [2024-07-15 22:43:21.403084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:57.610 qpair failed and we were unable to recover it. 00:26:57.610 [2024-07-15 22:43:21.403282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.610 [2024-07-15 22:43:21.403296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:57.610 qpair failed and we were unable to recover it. 00:26:57.610 [2024-07-15 22:43:21.403614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.610 [2024-07-15 22:43:21.403628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:57.610 qpair failed and we were unable to recover it. 00:26:57.610 [2024-07-15 22:43:21.403925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.610 [2024-07-15 22:43:21.403938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:57.610 qpair failed and we were unable to recover it. 00:26:57.610 [2024-07-15 22:43:21.404141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.610 [2024-07-15 22:43:21.404155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:57.610 qpair failed and we were unable to recover it. 00:26:57.610 [2024-07-15 22:43:21.404416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.610 [2024-07-15 22:43:21.404430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:57.610 qpair failed and we were unable to recover it. 00:26:57.610 [2024-07-15 22:43:21.404684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.610 [2024-07-15 22:43:21.404698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:57.610 qpair failed and we were unable to recover it. 00:26:57.610 [2024-07-15 22:43:21.404813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.610 [2024-07-15 22:43:21.404827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:57.610 qpair failed and we were unable to recover it. 00:26:57.610 [2024-07-15 22:43:21.405082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.610 [2024-07-15 22:43:21.405112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:57.610 qpair failed and we were unable to recover it. 00:26:57.610 [2024-07-15 22:43:21.405423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.610 [2024-07-15 22:43:21.405454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:57.610 qpair failed and we were unable to recover it. 
00:26:57.610 [2024-07-15 22:43:21.405680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.610 [2024-07-15 22:43:21.405694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:57.610 qpair failed and we were unable to recover it. 00:26:57.610 [2024-07-15 22:43:21.405990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.610 [2024-07-15 22:43:21.406004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:57.610 qpair failed and we were unable to recover it. 00:26:57.610 [2024-07-15 22:43:21.406186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.610 [2024-07-15 22:43:21.406200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:57.610 qpair failed and we were unable to recover it. 00:26:57.610 [2024-07-15 22:43:21.406457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.610 [2024-07-15 22:43:21.406472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:57.610 qpair failed and we were unable to recover it. 00:26:57.610 [2024-07-15 22:43:21.406741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.610 [2024-07-15 22:43:21.406756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:57.610 qpair failed and we were unable to recover it. 00:26:57.610 [2024-07-15 22:43:21.406973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.610 [2024-07-15 22:43:21.406987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:57.610 qpair failed and we were unable to recover it. 00:26:57.610 [2024-07-15 22:43:21.407210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.610 [2024-07-15 22:43:21.407229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:57.610 qpair failed and we were unable to recover it. 00:26:57.610 [2024-07-15 22:43:21.407439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.610 [2024-07-15 22:43:21.407452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:57.610 qpair failed and we were unable to recover it. 00:26:57.610 [2024-07-15 22:43:21.407727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.610 [2024-07-15 22:43:21.407757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:57.611 qpair failed and we were unable to recover it. 00:26:57.611 [2024-07-15 22:43:21.408047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.611 [2024-07-15 22:43:21.408085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:57.611 qpair failed and we were unable to recover it. 
00:26:57.611 [2024-07-15 22:43:21.408306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.611 [2024-07-15 22:43:21.408349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:57.611 qpair failed and we were unable to recover it. 00:26:57.611 [2024-07-15 22:43:21.408637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.611 [2024-07-15 22:43:21.408667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:57.611 qpair failed and we were unable to recover it. 00:26:57.611 [2024-07-15 22:43:21.408906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.611 [2024-07-15 22:43:21.408919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:57.611 qpair failed and we were unable to recover it. 00:26:57.611 [2024-07-15 22:43:21.409100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.611 [2024-07-15 22:43:21.409113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:57.611 qpair failed and we were unable to recover it. 00:26:57.611 [2024-07-15 22:43:21.409392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.611 [2024-07-15 22:43:21.409424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:57.611 qpair failed and we were unable to recover it. 00:26:57.611 [2024-07-15 22:43:21.409778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.611 [2024-07-15 22:43:21.409807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:57.611 qpair failed and we were unable to recover it. 00:26:57.611 [2024-07-15 22:43:21.410159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.611 [2024-07-15 22:43:21.410189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:57.611 qpair failed and we were unable to recover it. 00:26:57.611 [2024-07-15 22:43:21.410506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.611 [2024-07-15 22:43:21.410537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:57.611 qpair failed and we were unable to recover it. 00:26:57.611 [2024-07-15 22:43:21.410831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.611 [2024-07-15 22:43:21.410844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:57.611 qpair failed and we were unable to recover it. 00:26:57.611 [2024-07-15 22:43:21.411045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.611 [2024-07-15 22:43:21.411059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:57.611 qpair failed and we were unable to recover it. 
00:26:57.611 [2024-07-15 22:43:21.411258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.611 [2024-07-15 22:43:21.411272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420
00:26:57.611 qpair failed and we were unable to recover it.
[... the same connect() failed (errno = 111) / sock connection error pair for tqpair=0x19bded0, each followed by "qpair failed and we were unable to recover it.", repeats 175 more times between 2024-07-15 22:43:21.411492 and 22:43:21.459199; elided ...]
00:26:57.615 [2024-07-15 22:43:21.459489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.615 [2024-07-15 22:43:21.459558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420
00:26:57.615 qpair failed and we were unable to recover it.
[... the same pair for tqpair=0x7f4290000b90 repeats 33 more times between 2024-07-15 22:43:21.459890 and 22:43:21.469195; elided ...]
00:26:57.616 [2024-07-15 22:43:21.469488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.616 [2024-07-15 22:43:21.469519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.616 qpair failed and we were unable to recover it. 00:26:57.616 [2024-07-15 22:43:21.469820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.616 [2024-07-15 22:43:21.469849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.616 qpair failed and we were unable to recover it. 00:26:57.616 [2024-07-15 22:43:21.470090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.616 [2024-07-15 22:43:21.470120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.616 qpair failed and we were unable to recover it. 00:26:57.616 [2024-07-15 22:43:21.470422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.616 [2024-07-15 22:43:21.470452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.616 qpair failed and we were unable to recover it. 00:26:57.616 [2024-07-15 22:43:21.470739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.616 [2024-07-15 22:43:21.470769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.616 qpair failed and we were unable to recover it. 00:26:57.616 [2024-07-15 22:43:21.471090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.616 [2024-07-15 22:43:21.471118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.616 qpair failed and we were unable to recover it. 00:26:57.616 [2024-07-15 22:43:21.471361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.616 [2024-07-15 22:43:21.471392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.616 qpair failed and we were unable to recover it. 00:26:57.616 [2024-07-15 22:43:21.471614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.616 [2024-07-15 22:43:21.471645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.616 qpair failed and we were unable to recover it. 00:26:57.616 [2024-07-15 22:43:21.471821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.616 [2024-07-15 22:43:21.471835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.616 qpair failed and we were unable to recover it. 00:26:57.616 [2024-07-15 22:43:21.472064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.616 [2024-07-15 22:43:21.472099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.616 qpair failed and we were unable to recover it. 
00:26:57.616 [2024-07-15 22:43:21.472396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.616 [2024-07-15 22:43:21.472427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.616 qpair failed and we were unable to recover it. 00:26:57.616 [2024-07-15 22:43:21.472668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.617 [2024-07-15 22:43:21.472698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.617 qpair failed and we were unable to recover it. 00:26:57.617 [2024-07-15 22:43:21.472923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.617 [2024-07-15 22:43:21.472937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.617 qpair failed and we were unable to recover it. 00:26:57.617 [2024-07-15 22:43:21.473131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.617 [2024-07-15 22:43:21.473145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.617 qpair failed and we were unable to recover it. 00:26:57.617 [2024-07-15 22:43:21.473397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.617 [2024-07-15 22:43:21.473411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.617 qpair failed and we were unable to recover it. 00:26:57.617 [2024-07-15 22:43:21.473625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.617 [2024-07-15 22:43:21.473638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.617 qpair failed and we were unable to recover it. 00:26:57.617 [2024-07-15 22:43:21.473931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.617 [2024-07-15 22:43:21.473944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.617 qpair failed and we were unable to recover it. 00:26:57.617 [2024-07-15 22:43:21.474153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.617 [2024-07-15 22:43:21.474167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.617 qpair failed and we were unable to recover it. 00:26:57.617 [2024-07-15 22:43:21.474360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.617 [2024-07-15 22:43:21.474374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.617 qpair failed and we were unable to recover it. 00:26:57.617 [2024-07-15 22:43:21.474614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.617 [2024-07-15 22:43:21.474643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.617 qpair failed and we were unable to recover it. 
00:26:57.617 [2024-07-15 22:43:21.474825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.617 [2024-07-15 22:43:21.474855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.617 qpair failed and we were unable to recover it. 00:26:57.617 [2024-07-15 22:43:21.475147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.617 [2024-07-15 22:43:21.475177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.617 qpair failed and we were unable to recover it. 00:26:57.617 [2024-07-15 22:43:21.475495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.617 [2024-07-15 22:43:21.475525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.617 qpair failed and we were unable to recover it. 00:26:57.617 [2024-07-15 22:43:21.475715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.617 [2024-07-15 22:43:21.475745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.617 qpair failed and we were unable to recover it. 00:26:57.617 [2024-07-15 22:43:21.476091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.617 [2024-07-15 22:43:21.476120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.617 qpair failed and we were unable to recover it. 00:26:57.617 [2024-07-15 22:43:21.476403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.617 [2024-07-15 22:43:21.476417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.617 qpair failed and we were unable to recover it. 00:26:57.617 [2024-07-15 22:43:21.476615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.617 [2024-07-15 22:43:21.476628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.617 qpair failed and we were unable to recover it. 00:26:57.617 [2024-07-15 22:43:21.476835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.617 [2024-07-15 22:43:21.476865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.617 qpair failed and we were unable to recover it. 00:26:57.617 [2024-07-15 22:43:21.477109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.617 [2024-07-15 22:43:21.477138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.617 qpair failed and we were unable to recover it. 00:26:57.617 [2024-07-15 22:43:21.477429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.617 [2024-07-15 22:43:21.477461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.617 qpair failed and we were unable to recover it. 
00:26:57.617 [2024-07-15 22:43:21.477698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.617 [2024-07-15 22:43:21.477727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.617 qpair failed and we were unable to recover it. 00:26:57.617 [2024-07-15 22:43:21.478049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.617 [2024-07-15 22:43:21.478079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.617 qpair failed and we were unable to recover it. 00:26:57.617 [2024-07-15 22:43:21.478351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.617 [2024-07-15 22:43:21.478381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.617 qpair failed and we were unable to recover it. 00:26:57.617 [2024-07-15 22:43:21.478615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.617 [2024-07-15 22:43:21.478644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.617 qpair failed and we were unable to recover it. 00:26:57.617 [2024-07-15 22:43:21.478908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.617 [2024-07-15 22:43:21.478937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.617 qpair failed and we were unable to recover it. 00:26:57.617 [2024-07-15 22:43:21.479184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.617 [2024-07-15 22:43:21.479213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.617 qpair failed and we were unable to recover it. 00:26:57.617 [2024-07-15 22:43:21.479524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.617 [2024-07-15 22:43:21.479539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.617 qpair failed and we were unable to recover it. 00:26:57.617 [2024-07-15 22:43:21.479793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.617 [2024-07-15 22:43:21.479807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.617 qpair failed and we were unable to recover it. 00:26:57.617 [2024-07-15 22:43:21.480075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.617 [2024-07-15 22:43:21.480089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.617 qpair failed and we were unable to recover it. 00:26:57.617 [2024-07-15 22:43:21.480389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.617 [2024-07-15 22:43:21.480404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.617 qpair failed and we were unable to recover it. 
00:26:57.617 [2024-07-15 22:43:21.480628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.617 [2024-07-15 22:43:21.480642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.617 qpair failed and we were unable to recover it. 00:26:57.617 [2024-07-15 22:43:21.480840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.617 [2024-07-15 22:43:21.480855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.617 qpair failed and we were unable to recover it. 00:26:57.617 [2024-07-15 22:43:21.481049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.617 [2024-07-15 22:43:21.481063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.617 qpair failed and we were unable to recover it. 00:26:57.617 [2024-07-15 22:43:21.481336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.617 [2024-07-15 22:43:21.481351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.617 qpair failed and we were unable to recover it. 00:26:57.617 [2024-07-15 22:43:21.481542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.617 [2024-07-15 22:43:21.481556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.617 qpair failed and we were unable to recover it. 00:26:57.617 [2024-07-15 22:43:21.481760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.617 [2024-07-15 22:43:21.481789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.617 qpair failed and we were unable to recover it. 00:26:57.617 [2024-07-15 22:43:21.482125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.617 [2024-07-15 22:43:21.482155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.617 qpair failed and we were unable to recover it. 00:26:57.617 [2024-07-15 22:43:21.482497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.617 [2024-07-15 22:43:21.482511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.617 qpair failed and we were unable to recover it. 00:26:57.617 [2024-07-15 22:43:21.482648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.617 [2024-07-15 22:43:21.482661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.617 qpair failed and we were unable to recover it. 00:26:57.617 [2024-07-15 22:43:21.482937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.617 [2024-07-15 22:43:21.482953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.617 qpair failed and we were unable to recover it. 
00:26:57.617 [2024-07-15 22:43:21.483089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.617 [2024-07-15 22:43:21.483103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.617 qpair failed and we were unable to recover it. 00:26:57.617 [2024-07-15 22:43:21.483501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.617 [2024-07-15 22:43:21.483532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.618 qpair failed and we were unable to recover it. 00:26:57.618 [2024-07-15 22:43:21.483769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.618 [2024-07-15 22:43:21.483798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.618 qpair failed and we were unable to recover it. 00:26:57.618 [2024-07-15 22:43:21.484104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.618 [2024-07-15 22:43:21.484134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.618 qpair failed and we were unable to recover it. 00:26:57.618 [2024-07-15 22:43:21.484440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.618 [2024-07-15 22:43:21.484471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.618 qpair failed and we were unable to recover it. 00:26:57.618 [2024-07-15 22:43:21.484781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.618 [2024-07-15 22:43:21.484811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.618 qpair failed and we were unable to recover it. 00:26:57.618 [2024-07-15 22:43:21.484990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.618 [2024-07-15 22:43:21.485004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.618 qpair failed and we were unable to recover it. 00:26:57.618 [2024-07-15 22:43:21.485283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.618 [2024-07-15 22:43:21.485313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.618 qpair failed and we were unable to recover it. 00:26:57.618 [2024-07-15 22:43:21.485479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.618 [2024-07-15 22:43:21.485509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.618 qpair failed and we were unable to recover it. 00:26:57.618 [2024-07-15 22:43:21.485820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.618 [2024-07-15 22:43:21.485850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.618 qpair failed and we were unable to recover it. 
00:26:57.618 [2024-07-15 22:43:21.486087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.618 [2024-07-15 22:43:21.486116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.618 qpair failed and we were unable to recover it. 00:26:57.618 [2024-07-15 22:43:21.486295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.618 [2024-07-15 22:43:21.486326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.618 qpair failed and we were unable to recover it. 00:26:57.618 [2024-07-15 22:43:21.486634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.618 [2024-07-15 22:43:21.486663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.618 qpair failed and we were unable to recover it. 00:26:57.618 [2024-07-15 22:43:21.486901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.618 [2024-07-15 22:43:21.486929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.618 qpair failed and we were unable to recover it. 00:26:57.618 [2024-07-15 22:43:21.487161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.618 [2024-07-15 22:43:21.487199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.618 qpair failed and we were unable to recover it. 00:26:57.618 [2024-07-15 22:43:21.487488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.618 [2024-07-15 22:43:21.487502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.618 qpair failed and we were unable to recover it. 00:26:57.618 [2024-07-15 22:43:21.487704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.618 [2024-07-15 22:43:21.487718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.618 qpair failed and we were unable to recover it. 00:26:57.618 [2024-07-15 22:43:21.487996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.618 [2024-07-15 22:43:21.488010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.618 qpair failed and we were unable to recover it. 00:26:57.618 [2024-07-15 22:43:21.488269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.618 [2024-07-15 22:43:21.488283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.618 qpair failed and we were unable to recover it. 00:26:57.618 [2024-07-15 22:43:21.488540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.618 [2024-07-15 22:43:21.488554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.618 qpair failed and we were unable to recover it. 
00:26:57.618 [2024-07-15 22:43:21.488747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.618 [2024-07-15 22:43:21.488760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.618 qpair failed and we were unable to recover it. 00:26:57.618 [2024-07-15 22:43:21.488990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.618 [2024-07-15 22:43:21.489005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.618 qpair failed and we were unable to recover it. 00:26:57.618 [2024-07-15 22:43:21.489143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.618 [2024-07-15 22:43:21.489156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.618 qpair failed and we were unable to recover it. 00:26:57.618 [2024-07-15 22:43:21.489280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.618 [2024-07-15 22:43:21.489294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.618 qpair failed and we were unable to recover it. 00:26:57.618 [2024-07-15 22:43:21.489569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.618 [2024-07-15 22:43:21.489582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.618 qpair failed and we were unable to recover it. 00:26:57.618 [2024-07-15 22:43:21.489780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.618 [2024-07-15 22:43:21.489794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.618 qpair failed and we were unable to recover it. 00:26:57.618 [2024-07-15 22:43:21.490082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.618 [2024-07-15 22:43:21.490096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.618 qpair failed and we were unable to recover it. 00:26:57.618 [2024-07-15 22:43:21.490397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.618 [2024-07-15 22:43:21.490412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.618 qpair failed and we were unable to recover it. 00:26:57.618 [2024-07-15 22:43:21.490683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.618 [2024-07-15 22:43:21.490697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.618 qpair failed and we were unable to recover it. 00:26:57.618 [2024-07-15 22:43:21.490935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.618 [2024-07-15 22:43:21.490949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.618 qpair failed and we were unable to recover it. 
00:26:57.618 [2024-07-15 22:43:21.491206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.618 [2024-07-15 22:43:21.491220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.618 qpair failed and we were unable to recover it. 00:26:57.618 [2024-07-15 22:43:21.491453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.618 [2024-07-15 22:43:21.491469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.618 qpair failed and we were unable to recover it. 00:26:57.618 [2024-07-15 22:43:21.491668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.618 [2024-07-15 22:43:21.491684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.618 qpair failed and we were unable to recover it. 00:26:57.618 [2024-07-15 22:43:21.491973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.618 [2024-07-15 22:43:21.492003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.618 qpair failed and we were unable to recover it. 00:26:57.618 [2024-07-15 22:43:21.492359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.618 [2024-07-15 22:43:21.492389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.618 qpair failed and we were unable to recover it. 00:26:57.618 [2024-07-15 22:43:21.492566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.619 [2024-07-15 22:43:21.492596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.619 qpair failed and we were unable to recover it. 00:26:57.619 [2024-07-15 22:43:21.492787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.619 [2024-07-15 22:43:21.492816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.619 qpair failed and we were unable to recover it. 00:26:57.619 [2024-07-15 22:43:21.493070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.619 [2024-07-15 22:43:21.493084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.619 qpair failed and we were unable to recover it. 00:26:57.619 [2024-07-15 22:43:21.493381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.619 [2024-07-15 22:43:21.493396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.619 qpair failed and we were unable to recover it. 00:26:57.619 [2024-07-15 22:43:21.493601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.619 [2024-07-15 22:43:21.493617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.619 qpair failed and we were unable to recover it. 
00:26:57.619 [2024-07-15 22:43:21.493862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.619 [2024-07-15 22:43:21.493876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.619 qpair failed and we were unable to recover it. 00:26:57.619 [2024-07-15 22:43:21.494080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.619 [2024-07-15 22:43:21.494094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.619 qpair failed and we were unable to recover it. 00:26:57.619 [2024-07-15 22:43:21.494362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.619 [2024-07-15 22:43:21.494376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.619 qpair failed and we were unable to recover it. 00:26:57.619 [2024-07-15 22:43:21.494568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.619 [2024-07-15 22:43:21.494581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.619 qpair failed and we were unable to recover it. 00:26:57.619 [2024-07-15 22:43:21.494809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.619 [2024-07-15 22:43:21.494823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.619 qpair failed and we were unable to recover it. 00:26:57.619 [2024-07-15 22:43:21.495032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.619 [2024-07-15 22:43:21.495062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.619 qpair failed and we were unable to recover it. 00:26:57.619 [2024-07-15 22:43:21.495303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.619 [2024-07-15 22:43:21.495334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.619 qpair failed and we were unable to recover it. 00:26:57.619 [2024-07-15 22:43:21.495513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.619 [2024-07-15 22:43:21.495542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.619 qpair failed and we were unable to recover it. 00:26:57.619 [2024-07-15 22:43:21.495843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.619 [2024-07-15 22:43:21.495872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.619 qpair failed and we were unable to recover it. 00:26:57.619 [2024-07-15 22:43:21.496171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.619 [2024-07-15 22:43:21.496185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.619 qpair failed and we were unable to recover it. 
00:26:57.619 [2024-07-15 22:43:21.496451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.619 [2024-07-15 22:43:21.496465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.619 qpair failed and we were unable to recover it. 00:26:57.619 [2024-07-15 22:43:21.496617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.619 [2024-07-15 22:43:21.496630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.619 qpair failed and we were unable to recover it. 00:26:57.619 [2024-07-15 22:43:21.496834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.619 [2024-07-15 22:43:21.496848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.619 qpair failed and we were unable to recover it. 00:26:57.619 [2024-07-15 22:43:21.497061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.619 [2024-07-15 22:43:21.497090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.619 qpair failed and we were unable to recover it. 00:26:57.619 [2024-07-15 22:43:21.497424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.619 [2024-07-15 22:43:21.497455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.619 qpair failed and we were unable to recover it. 00:26:57.619 [2024-07-15 22:43:21.497715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.619 [2024-07-15 22:43:21.497745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.619 qpair failed and we were unable to recover it. 00:26:57.619 [2024-07-15 22:43:21.498093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.619 [2024-07-15 22:43:21.498122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.619 qpair failed and we were unable to recover it. 00:26:57.619 [2024-07-15 22:43:21.498296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.619 [2024-07-15 22:43:21.498327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.619 qpair failed and we were unable to recover it. 00:26:57.619 [2024-07-15 22:43:21.498495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.619 [2024-07-15 22:43:21.498525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.619 qpair failed and we were unable to recover it. 00:26:57.619 [2024-07-15 22:43:21.498763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.619 [2024-07-15 22:43:21.498794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.619 qpair failed and we were unable to recover it. 
00:26:57.619 [2024-07-15 22:43:21.499156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.619 [2024-07-15 22:43:21.499186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.619 qpair failed and we were unable to recover it. 00:26:57.619 [2024-07-15 22:43:21.499459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.619 [2024-07-15 22:43:21.499489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.619 qpair failed and we were unable to recover it. 00:26:57.619 [2024-07-15 22:43:21.499747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.619 [2024-07-15 22:43:21.499776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.619 qpair failed and we were unable to recover it. 00:26:57.619 [2024-07-15 22:43:21.500076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.619 [2024-07-15 22:43:21.500106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.619 qpair failed and we were unable to recover it. 00:26:57.619 [2024-07-15 22:43:21.500337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.619 [2024-07-15 22:43:21.500367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.619 qpair failed and we were unable to recover it. 00:26:57.619 [2024-07-15 22:43:21.500603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.619 [2024-07-15 22:43:21.500634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.619 qpair failed and we were unable to recover it. 00:26:57.619 [2024-07-15 22:43:21.500873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.619 [2024-07-15 22:43:21.500903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.619 qpair failed and we were unable to recover it. 00:26:57.619 [2024-07-15 22:43:21.501076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.619 [2024-07-15 22:43:21.501105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.619 qpair failed and we were unable to recover it. 00:26:57.619 [2024-07-15 22:43:21.501399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.619 [2024-07-15 22:43:21.501430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.619 qpair failed and we were unable to recover it. 00:26:57.619 [2024-07-15 22:43:21.501671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.619 [2024-07-15 22:43:21.501700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.619 qpair failed and we were unable to recover it. 
00:26:57.619 [2024-07-15 22:43:21.502038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.619 [2024-07-15 22:43:21.502069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.619 qpair failed and we were unable to recover it. 00:26:57.619 [2024-07-15 22:43:21.502374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.619 [2024-07-15 22:43:21.502405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.619 qpair failed and we were unable to recover it. 00:26:57.619 [2024-07-15 22:43:21.502714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.619 [2024-07-15 22:43:21.502743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.619 qpair failed and we were unable to recover it. 00:26:57.620 [2024-07-15 22:43:21.503053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.620 [2024-07-15 22:43:21.503083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.620 qpair failed and we were unable to recover it. 00:26:57.620 [2024-07-15 22:43:21.503251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.620 [2024-07-15 22:43:21.503266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.620 qpair failed and we were unable to recover it. 00:26:57.620 [2024-07-15 22:43:21.503452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.620 [2024-07-15 22:43:21.503481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.620 qpair failed and we were unable to recover it. 00:26:57.620 [2024-07-15 22:43:21.503718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.620 [2024-07-15 22:43:21.503748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.620 qpair failed and we were unable to recover it. 00:26:57.620 [2024-07-15 22:43:21.504050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.620 [2024-07-15 22:43:21.504079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.620 qpair failed and we were unable to recover it. 00:26:57.620 [2024-07-15 22:43:21.504338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.620 [2024-07-15 22:43:21.504368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.620 qpair failed and we were unable to recover it. 00:26:57.620 [2024-07-15 22:43:21.504594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.620 [2024-07-15 22:43:21.504634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.620 qpair failed and we were unable to recover it. 
00:26:57.620 [2024-07-15 22:43:21.504800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.620 [2024-07-15 22:43:21.504829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.620 qpair failed and we were unable to recover it.
00:26:57.620 [... the identical connect() failure (errno = 111, ECONNREFUSED) against tqpair=0x7f4290000b90, addr=10.0.0.2, port=4420 repeats continuously from 22:43:21.504 through 22:43:21.532, each attempt ending with "qpair failed and we were unable to recover it." ...]
00:26:57.622 [2024-07-15 22:43:21.532888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.623 [2024-07-15 22:43:21.532916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.623 qpair failed and we were unable to recover it.
00:26:57.623 [... the same failure then repeats against tqpair=0x7f4288000b90 from 22:43:21.533 through 22:43:21.549, and again against tqpair=0x7f4290000b90 through 22:43:21.554 ...]
00:26:57.903 [2024-07-15 22:43:21.554623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.903 [2024-07-15 22:43:21.554636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.903 qpair failed and we were unable to recover it.
00:26:57.903 [2024-07-15 22:43:21.554923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.903 [2024-07-15 22:43:21.554937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.903 qpair failed and we were unable to recover it. 00:26:57.903 [2024-07-15 22:43:21.555200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.903 [2024-07-15 22:43:21.555214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.903 qpair failed and we were unable to recover it. 00:26:57.903 [2024-07-15 22:43:21.555443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.903 [2024-07-15 22:43:21.555457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.903 qpair failed and we were unable to recover it. 00:26:57.903 [2024-07-15 22:43:21.555601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.903 [2024-07-15 22:43:21.555615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.903 qpair failed and we were unable to recover it. 00:26:57.903 [2024-07-15 22:43:21.555814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.903 [2024-07-15 22:43:21.555828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.903 qpair failed and we were unable to recover it. 00:26:57.903 [2024-07-15 22:43:21.556077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.903 [2024-07-15 22:43:21.556090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.903 qpair failed and we were unable to recover it. 00:26:57.903 [2024-07-15 22:43:21.556394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.903 [2024-07-15 22:43:21.556408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.903 qpair failed and we were unable to recover it. 00:26:57.903 [2024-07-15 22:43:21.556617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.903 [2024-07-15 22:43:21.556630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.903 qpair failed and we were unable to recover it. 00:26:57.903 [2024-07-15 22:43:21.556825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.904 [2024-07-15 22:43:21.556838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.904 qpair failed and we were unable to recover it. 00:26:57.904 [2024-07-15 22:43:21.557097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.904 [2024-07-15 22:43:21.557110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.904 qpair failed and we were unable to recover it. 
00:26:57.904 [2024-07-15 22:43:21.557230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.904 [2024-07-15 22:43:21.557244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.904 qpair failed and we were unable to recover it. 00:26:57.904 [2024-07-15 22:43:21.557449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.904 [2024-07-15 22:43:21.557462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.904 qpair failed and we were unable to recover it. 00:26:57.904 [2024-07-15 22:43:21.557721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.904 [2024-07-15 22:43:21.557736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.904 qpair failed and we were unable to recover it. 00:26:57.904 [2024-07-15 22:43:21.557997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.904 [2024-07-15 22:43:21.558011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.904 qpair failed and we were unable to recover it. 00:26:57.904 [2024-07-15 22:43:21.558209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.904 [2024-07-15 22:43:21.558222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.904 qpair failed and we were unable to recover it. 00:26:57.904 [2024-07-15 22:43:21.558415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.904 [2024-07-15 22:43:21.558429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.904 qpair failed and we were unable to recover it. 00:26:57.904 [2024-07-15 22:43:21.558649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.904 [2024-07-15 22:43:21.558660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.904 qpair failed and we were unable to recover it. 00:26:57.904 [2024-07-15 22:43:21.558855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.904 [2024-07-15 22:43:21.558865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.904 qpair failed and we were unable to recover it. 00:26:57.904 [2024-07-15 22:43:21.559132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.904 [2024-07-15 22:43:21.559141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.904 qpair failed and we were unable to recover it. 00:26:57.904 [2024-07-15 22:43:21.559434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.904 [2024-07-15 22:43:21.559445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.904 qpair failed and we were unable to recover it. 
00:26:57.904 [2024-07-15 22:43:21.559646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.904 [2024-07-15 22:43:21.559657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.904 qpair failed and we were unable to recover it. 00:26:57.904 [2024-07-15 22:43:21.559849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.904 [2024-07-15 22:43:21.559860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.904 qpair failed and we were unable to recover it. 00:26:57.904 [2024-07-15 22:43:21.560114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.904 [2024-07-15 22:43:21.560124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.904 qpair failed and we were unable to recover it. 00:26:57.904 [2024-07-15 22:43:21.560364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.904 [2024-07-15 22:43:21.560374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.904 qpair failed and we were unable to recover it. 00:26:57.904 [2024-07-15 22:43:21.560517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.904 [2024-07-15 22:43:21.560527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.904 qpair failed and we were unable to recover it. 00:26:57.904 [2024-07-15 22:43:21.560752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.904 [2024-07-15 22:43:21.560762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.904 qpair failed and we were unable to recover it. 00:26:57.904 [2024-07-15 22:43:21.561041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.904 [2024-07-15 22:43:21.561051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.904 qpair failed and we were unable to recover it. 00:26:57.904 [2024-07-15 22:43:21.561185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.904 [2024-07-15 22:43:21.561194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.904 qpair failed and we were unable to recover it. 00:26:57.904 [2024-07-15 22:43:21.561329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.904 [2024-07-15 22:43:21.561340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.904 qpair failed and we were unable to recover it. 00:26:57.904 [2024-07-15 22:43:21.561554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.904 [2024-07-15 22:43:21.561567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.904 qpair failed and we were unable to recover it. 
00:26:57.904 [2024-07-15 22:43:21.561810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.904 [2024-07-15 22:43:21.561820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.904 qpair failed and we were unable to recover it. 00:26:57.904 [2024-07-15 22:43:21.562117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.904 [2024-07-15 22:43:21.562127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.904 qpair failed and we were unable to recover it. 00:26:57.904 [2024-07-15 22:43:21.562302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.904 [2024-07-15 22:43:21.562312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.904 qpair failed and we were unable to recover it. 00:26:57.904 [2024-07-15 22:43:21.562504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.904 [2024-07-15 22:43:21.562513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.904 qpair failed and we were unable to recover it. 00:26:57.904 [2024-07-15 22:43:21.562708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.904 [2024-07-15 22:43:21.562719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.904 qpair failed and we were unable to recover it. 00:26:57.904 [2024-07-15 22:43:21.562909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.904 [2024-07-15 22:43:21.562919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.904 qpair failed and we were unable to recover it. 00:26:57.904 [2024-07-15 22:43:21.563200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.904 [2024-07-15 22:43:21.563211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.904 qpair failed and we were unable to recover it. 00:26:57.904 [2024-07-15 22:43:21.563425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.904 [2024-07-15 22:43:21.563436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.904 qpair failed and we were unable to recover it. 00:26:57.904 [2024-07-15 22:43:21.563578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.904 [2024-07-15 22:43:21.563588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.904 qpair failed and we were unable to recover it. 00:26:57.904 [2024-07-15 22:43:21.563783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.904 [2024-07-15 22:43:21.563792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.904 qpair failed and we were unable to recover it. 
00:26:57.904 [2024-07-15 22:43:21.563977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.904 [2024-07-15 22:43:21.563987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.904 qpair failed and we were unable to recover it. 00:26:57.904 [2024-07-15 22:43:21.564116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.904 [2024-07-15 22:43:21.564125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.904 qpair failed and we were unable to recover it. 00:26:57.904 [2024-07-15 22:43:21.564331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.904 [2024-07-15 22:43:21.564341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.904 qpair failed and we were unable to recover it. 00:26:57.904 [2024-07-15 22:43:21.564483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.904 [2024-07-15 22:43:21.564493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.904 qpair failed and we were unable to recover it. 00:26:57.904 [2024-07-15 22:43:21.564738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.904 [2024-07-15 22:43:21.564748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.904 qpair failed and we were unable to recover it. 00:26:57.904 [2024-07-15 22:43:21.565030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.904 [2024-07-15 22:43:21.565040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.904 qpair failed and we were unable to recover it. 00:26:57.904 [2024-07-15 22:43:21.565244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.904 [2024-07-15 22:43:21.565255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.904 qpair failed and we were unable to recover it. 00:26:57.904 [2024-07-15 22:43:21.565444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.904 [2024-07-15 22:43:21.565454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.904 qpair failed and we were unable to recover it. 00:26:57.904 [2024-07-15 22:43:21.565637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.904 [2024-07-15 22:43:21.565647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.904 qpair failed and we were unable to recover it. 00:26:57.905 [2024-07-15 22:43:21.565788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.905 [2024-07-15 22:43:21.565798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.905 qpair failed and we were unable to recover it. 
00:26:57.905 [2024-07-15 22:43:21.566017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.905 [2024-07-15 22:43:21.566027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.905 qpair failed and we were unable to recover it. 00:26:57.905 [2024-07-15 22:43:21.566267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.905 [2024-07-15 22:43:21.566277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.905 qpair failed and we were unable to recover it. 00:26:57.905 [2024-07-15 22:43:21.566459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.905 [2024-07-15 22:43:21.566469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.905 qpair failed and we were unable to recover it. 00:26:57.905 [2024-07-15 22:43:21.566656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.905 [2024-07-15 22:43:21.566666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.905 qpair failed and we were unable to recover it. 00:26:57.905 [2024-07-15 22:43:21.566871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.905 [2024-07-15 22:43:21.566881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.905 qpair failed and we were unable to recover it. 00:26:57.905 [2024-07-15 22:43:21.567065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.905 [2024-07-15 22:43:21.567075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.905 qpair failed and we were unable to recover it. 00:26:57.905 [2024-07-15 22:43:21.567405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.905 [2024-07-15 22:43:21.567421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.905 qpair failed and we were unable to recover it. 00:26:57.905 [2024-07-15 22:43:21.567578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.905 [2024-07-15 22:43:21.567592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.905 qpair failed and we were unable to recover it. 00:26:57.905 [2024-07-15 22:43:21.567790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.905 [2024-07-15 22:43:21.567803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.905 qpair failed and we were unable to recover it. 00:26:57.905 [2024-07-15 22:43:21.568043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.905 [2024-07-15 22:43:21.568057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.905 qpair failed and we were unable to recover it. 
00:26:57.905 [2024-07-15 22:43:21.568327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.905 [2024-07-15 22:43:21.568341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.905 qpair failed and we were unable to recover it. 00:26:57.905 [2024-07-15 22:43:21.568534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.905 [2024-07-15 22:43:21.568547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.905 qpair failed and we were unable to recover it. 00:26:57.905 [2024-07-15 22:43:21.568836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.905 [2024-07-15 22:43:21.568850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.905 qpair failed and we were unable to recover it. 00:26:57.905 [2024-07-15 22:43:21.569050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.905 [2024-07-15 22:43:21.569063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.905 qpair failed and we were unable to recover it. 00:26:57.905 [2024-07-15 22:43:21.569333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.905 [2024-07-15 22:43:21.569347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.905 qpair failed and we were unable to recover it. 00:26:57.905 [2024-07-15 22:43:21.569542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.905 [2024-07-15 22:43:21.569555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.905 qpair failed and we were unable to recover it. 00:26:57.905 [2024-07-15 22:43:21.569827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.905 [2024-07-15 22:43:21.569841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.905 qpair failed and we were unable to recover it. 00:26:57.905 [2024-07-15 22:43:21.570037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.905 [2024-07-15 22:43:21.570050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.905 qpair failed and we were unable to recover it. 00:26:57.905 [2024-07-15 22:43:21.570179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.905 [2024-07-15 22:43:21.570193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.905 qpair failed and we were unable to recover it. 00:26:57.905 [2024-07-15 22:43:21.570425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.905 [2024-07-15 22:43:21.570442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.905 qpair failed and we were unable to recover it. 
00:26:57.905 [2024-07-15 22:43:21.570739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.905 [2024-07-15 22:43:21.570753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.905 qpair failed and we were unable to recover it. 00:26:57.905 [2024-07-15 22:43:21.570907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.905 [2024-07-15 22:43:21.570920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.905 qpair failed and we were unable to recover it. 00:26:57.905 [2024-07-15 22:43:21.571123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.905 [2024-07-15 22:43:21.571136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.905 qpair failed and we were unable to recover it. 00:26:57.905 [2024-07-15 22:43:21.571414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.905 [2024-07-15 22:43:21.571429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.905 qpair failed and we were unable to recover it. 00:26:57.905 [2024-07-15 22:43:21.571578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.905 [2024-07-15 22:43:21.571591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.905 qpair failed and we were unable to recover it. 00:26:57.905 [2024-07-15 22:43:21.571825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.905 [2024-07-15 22:43:21.571839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.905 qpair failed and we were unable to recover it. 00:26:57.905 [2024-07-15 22:43:21.572113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.905 [2024-07-15 22:43:21.572127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.905 qpair failed and we were unable to recover it. 00:26:57.905 [2024-07-15 22:43:21.572321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.905 [2024-07-15 22:43:21.572335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.905 qpair failed and we were unable to recover it. 00:26:57.905 [2024-07-15 22:43:21.572536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.905 [2024-07-15 22:43:21.572549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.905 qpair failed and we were unable to recover it. 00:26:57.905 [2024-07-15 22:43:21.572751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.905 [2024-07-15 22:43:21.572764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.905 qpair failed and we were unable to recover it. 
00:26:57.905 [2024-07-15 22:43:21.573032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.905 [2024-07-15 22:43:21.573045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.905 qpair failed and we were unable to recover it. 00:26:57.905 [2024-07-15 22:43:21.573248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.905 [2024-07-15 22:43:21.573262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.905 qpair failed and we were unable to recover it. 00:26:57.905 [2024-07-15 22:43:21.573467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.905 [2024-07-15 22:43:21.573480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.905 qpair failed and we were unable to recover it. 00:26:57.905 [2024-07-15 22:43:21.573723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.905 [2024-07-15 22:43:21.573736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.905 qpair failed and we were unable to recover it. 00:26:57.905 [2024-07-15 22:43:21.574064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.905 [2024-07-15 22:43:21.574078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.905 qpair failed and we were unable to recover it. 00:26:57.905 [2024-07-15 22:43:21.574221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.905 [2024-07-15 22:43:21.574240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.905 qpair failed and we were unable to recover it. 00:26:57.905 [2024-07-15 22:43:21.574508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.905 [2024-07-15 22:43:21.574523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.905 qpair failed and we were unable to recover it. 00:26:57.905 [2024-07-15 22:43:21.574727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.905 [2024-07-15 22:43:21.574740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.905 qpair failed and we were unable to recover it. 00:26:57.905 [2024-07-15 22:43:21.575051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.905 [2024-07-15 22:43:21.575064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.905 qpair failed and we were unable to recover it. 00:26:57.905 [2024-07-15 22:43:21.575341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.906 [2024-07-15 22:43:21.575355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.906 qpair failed and we were unable to recover it. 
00:26:57.906 [2024-07-15 22:43:21.575502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.906 [2024-07-15 22:43:21.575515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.906 qpair failed and we were unable to recover it. 00:26:57.906 [2024-07-15 22:43:21.575718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.906 [2024-07-15 22:43:21.575731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.906 qpair failed and we were unable to recover it. 00:26:57.906 [2024-07-15 22:43:21.576026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.906 [2024-07-15 22:43:21.576039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.906 qpair failed and we were unable to recover it. 00:26:57.906 [2024-07-15 22:43:21.576256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.906 [2024-07-15 22:43:21.576271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.906 qpair failed and we were unable to recover it. 00:26:57.906 [2024-07-15 22:43:21.576550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.906 [2024-07-15 22:43:21.576563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.906 qpair failed and we were unable to recover it. 00:26:57.906 [2024-07-15 22:43:21.576768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.906 [2024-07-15 22:43:21.576782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.906 qpair failed and we were unable to recover it. 00:26:57.906 [2024-07-15 22:43:21.576989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.906 [2024-07-15 22:43:21.577001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.906 qpair failed and we were unable to recover it. 00:26:57.906 [2024-07-15 22:43:21.577257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.906 [2024-07-15 22:43:21.577267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.906 qpair failed and we were unable to recover it. 00:26:57.906 [2024-07-15 22:43:21.577415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.906 [2024-07-15 22:43:21.577425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.906 qpair failed and we were unable to recover it. 00:26:57.906 [2024-07-15 22:43:21.577601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.906 [2024-07-15 22:43:21.577611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.906 qpair failed and we were unable to recover it. 
00:26:57.906 [2024-07-15 22:43:21.577815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.906 [2024-07-15 22:43:21.577825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.906 qpair failed and we were unable to recover it. 00:26:57.906 [2024-07-15 22:43:21.578039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.906 [2024-07-15 22:43:21.578050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.906 qpair failed and we were unable to recover it. 00:26:57.906 [2024-07-15 22:43:21.578180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.906 [2024-07-15 22:43:21.578191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.906 qpair failed and we were unable to recover it. 00:26:57.906 [2024-07-15 22:43:21.578502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.906 [2024-07-15 22:43:21.578512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.906 qpair failed and we were unable to recover it. 00:26:57.906 [2024-07-15 22:43:21.578653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.906 [2024-07-15 22:43:21.578663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.906 qpair failed and we were unable to recover it. 00:26:57.906 [2024-07-15 22:43:21.578890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.906 [2024-07-15 22:43:21.578900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.906 qpair failed and we were unable to recover it. 00:26:57.906 [2024-07-15 22:43:21.579097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.906 [2024-07-15 22:43:21.579107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.906 qpair failed and we were unable to recover it. 00:26:57.906 [2024-07-15 22:43:21.579323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.906 [2024-07-15 22:43:21.579333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.906 qpair failed and we were unable to recover it. 00:26:57.906 [2024-07-15 22:43:21.579550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.906 [2024-07-15 22:43:21.579560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.906 qpair failed and we were unable to recover it. 00:26:57.906 [2024-07-15 22:43:21.579757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.906 [2024-07-15 22:43:21.579769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.906 qpair failed and we were unable to recover it. 
00:26:57.906 [2024-07-15 22:43:21.579945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.906 [2024-07-15 22:43:21.579956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.906 qpair failed and we were unable to recover it. 00:26:57.906 [2024-07-15 22:43:21.580079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.906 [2024-07-15 22:43:21.580090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.906 qpair failed and we were unable to recover it. 00:26:57.906 [2024-07-15 22:43:21.580268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.906 [2024-07-15 22:43:21.580278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.906 qpair failed and we were unable to recover it. 00:26:57.906 [2024-07-15 22:43:21.580460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.906 [2024-07-15 22:43:21.580470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.906 qpair failed and we were unable to recover it. 00:26:57.906 [2024-07-15 22:43:21.580616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.906 [2024-07-15 22:43:21.580626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.906 qpair failed and we were unable to recover it. 00:26:57.906 [2024-07-15 22:43:21.580758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.906 [2024-07-15 22:43:21.580767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.906 qpair failed and we were unable to recover it. 00:26:57.906 [2024-07-15 22:43:21.581012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.906 [2024-07-15 22:43:21.581022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.906 qpair failed and we were unable to recover it. 00:26:57.906 [2024-07-15 22:43:21.581244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.906 [2024-07-15 22:43:21.581254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.906 qpair failed and we were unable to recover it. 00:26:57.906 [2024-07-15 22:43:21.581401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.906 [2024-07-15 22:43:21.581411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.906 qpair failed and we were unable to recover it. 00:26:57.906 [2024-07-15 22:43:21.581602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.906 [2024-07-15 22:43:21.581612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.906 qpair failed and we were unable to recover it. 
00:26:57.906 [2024-07-15 22:43:21.581821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.906 [2024-07-15 22:43:21.581832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.906 qpair failed and we were unable to recover it. 00:26:57.906 [2024-07-15 22:43:21.582103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.906 [2024-07-15 22:43:21.582113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.906 qpair failed and we were unable to recover it. 00:26:57.906 [2024-07-15 22:43:21.582310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.906 [2024-07-15 22:43:21.582321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.907 qpair failed and we were unable to recover it. 00:26:57.907 [2024-07-15 22:43:21.582498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.907 [2024-07-15 22:43:21.582508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.907 qpair failed and we were unable to recover it. 00:26:57.907 [2024-07-15 22:43:21.582713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.907 [2024-07-15 22:43:21.582723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.907 qpair failed and we were unable to recover it. 00:26:57.907 [2024-07-15 22:43:21.582917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.907 [2024-07-15 22:43:21.582927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.907 qpair failed and we were unable to recover it. 00:26:57.907 [2024-07-15 22:43:21.583179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.907 [2024-07-15 22:43:21.583189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.907 qpair failed and we were unable to recover it. 00:26:57.907 [2024-07-15 22:43:21.583428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.907 [2024-07-15 22:43:21.583438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.907 qpair failed and we were unable to recover it. 00:26:57.907 [2024-07-15 22:43:21.583632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.907 [2024-07-15 22:43:21.583642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.907 qpair failed and we were unable to recover it. 00:26:57.907 [2024-07-15 22:43:21.583857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.907 [2024-07-15 22:43:21.583867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.907 qpair failed and we were unable to recover it. 
00:26:57.907 [2024-07-15 22:43:21.584131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.907 [2024-07-15 22:43:21.584141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:57.907 qpair failed and we were unable to recover it.
[... the same three-line failure repeats continuously on every reconnect attempt (errno = 111, tqpair=0x7f4288000b90, addr=10.0.0.2, port=4420) through 2024-07-15 22:43:21.628 ...]
00:26:57.912 [2024-07-15 22:43:21.628736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.912 [2024-07-15 22:43:21.628746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.912 qpair failed and we were unable to recover it. 00:26:57.912 [2024-07-15 22:43:21.628874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.912 [2024-07-15 22:43:21.628883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.912 qpair failed and we were unable to recover it. 00:26:57.912 [2024-07-15 22:43:21.629114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.912 [2024-07-15 22:43:21.629123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.912 qpair failed and we were unable to recover it. 00:26:57.912 [2024-07-15 22:43:21.629336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.912 [2024-07-15 22:43:21.629347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.912 qpair failed and we were unable to recover it. 00:26:57.912 [2024-07-15 22:43:21.629564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.912 [2024-07-15 22:43:21.629573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.912 qpair failed and we were unable to recover it. 00:26:57.912 [2024-07-15 22:43:21.629760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.912 [2024-07-15 22:43:21.629770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.912 qpair failed and we were unable to recover it. 00:26:57.912 [2024-07-15 22:43:21.629996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.912 [2024-07-15 22:43:21.630006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.912 qpair failed and we were unable to recover it. 00:26:57.912 [2024-07-15 22:43:21.630291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.912 [2024-07-15 22:43:21.630301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.912 qpair failed and we were unable to recover it. 00:26:57.912 [2024-07-15 22:43:21.630510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.912 [2024-07-15 22:43:21.630519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.912 qpair failed and we were unable to recover it. 00:26:57.912 [2024-07-15 22:43:21.630702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.912 [2024-07-15 22:43:21.630712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.912 qpair failed and we were unable to recover it. 
00:26:57.912 [2024-07-15 22:43:21.630859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.912 [2024-07-15 22:43:21.630869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.912 qpair failed and we were unable to recover it. 00:26:57.912 [2024-07-15 22:43:21.631135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.912 [2024-07-15 22:43:21.631145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.912 qpair failed and we were unable to recover it. 00:26:57.912 [2024-07-15 22:43:21.631395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.912 [2024-07-15 22:43:21.631405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.912 qpair failed and we were unable to recover it. 00:26:57.912 [2024-07-15 22:43:21.631592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.912 [2024-07-15 22:43:21.631602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.912 qpair failed and we were unable to recover it. 00:26:57.912 [2024-07-15 22:43:21.631737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.912 [2024-07-15 22:43:21.631778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.912 qpair failed and we were unable to recover it. 00:26:57.912 [2024-07-15 22:43:21.632050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.912 [2024-07-15 22:43:21.632080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.912 qpair failed and we were unable to recover it. 00:26:57.912 [2024-07-15 22:43:21.632346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.912 [2024-07-15 22:43:21.632378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.912 qpair failed and we were unable to recover it. 00:26:57.912 [2024-07-15 22:43:21.632558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.912 [2024-07-15 22:43:21.632588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.912 qpair failed and we were unable to recover it. 00:26:57.912 [2024-07-15 22:43:21.632767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.912 [2024-07-15 22:43:21.632796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.912 qpair failed and we were unable to recover it. 00:26:57.912 [2024-07-15 22:43:21.633127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.913 [2024-07-15 22:43:21.633157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.913 qpair failed and we were unable to recover it. 
00:26:57.913 [2024-07-15 22:43:21.633450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.913 [2024-07-15 22:43:21.633481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.913 qpair failed and we were unable to recover it. 00:26:57.913 [2024-07-15 22:43:21.633662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.913 [2024-07-15 22:43:21.633693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.913 qpair failed and we were unable to recover it. 00:26:57.913 [2024-07-15 22:43:21.633870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.913 [2024-07-15 22:43:21.633899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.913 qpair failed and we were unable to recover it. 00:26:57.913 [2024-07-15 22:43:21.634170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.913 [2024-07-15 22:43:21.634199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.913 qpair failed and we were unable to recover it. 00:26:57.913 [2024-07-15 22:43:21.634431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.913 [2024-07-15 22:43:21.634448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.913 qpair failed and we were unable to recover it. 00:26:57.913 [2024-07-15 22:43:21.634634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.913 [2024-07-15 22:43:21.634647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.913 qpair failed and we were unable to recover it. 00:26:57.913 [2024-07-15 22:43:21.634867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.913 [2024-07-15 22:43:21.634880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.913 qpair failed and we were unable to recover it. 00:26:57.913 [2024-07-15 22:43:21.635155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.913 [2024-07-15 22:43:21.635168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.913 qpair failed and we were unable to recover it. 00:26:57.913 [2024-07-15 22:43:21.635373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.913 [2024-07-15 22:43:21.635388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.913 qpair failed and we were unable to recover it. 00:26:57.913 [2024-07-15 22:43:21.635638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.913 [2024-07-15 22:43:21.635651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.913 qpair failed and we were unable to recover it. 
00:26:57.913 [2024-07-15 22:43:21.635913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.913 [2024-07-15 22:43:21.635927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.913 qpair failed and we were unable to recover it. 00:26:57.913 [2024-07-15 22:43:21.636129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.913 [2024-07-15 22:43:21.636143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.913 qpair failed and we were unable to recover it. 00:26:57.913 [2024-07-15 22:43:21.636279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.913 [2024-07-15 22:43:21.636294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.913 qpair failed and we were unable to recover it. 00:26:57.913 [2024-07-15 22:43:21.636479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.913 [2024-07-15 22:43:21.636492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.913 qpair failed and we were unable to recover it. 00:26:57.913 [2024-07-15 22:43:21.636771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.913 [2024-07-15 22:43:21.636785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.913 qpair failed and we were unable to recover it. 00:26:57.913 [2024-07-15 22:43:21.637002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.913 [2024-07-15 22:43:21.637017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.913 qpair failed and we were unable to recover it. 00:26:57.913 [2024-07-15 22:43:21.637237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.913 [2024-07-15 22:43:21.637257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.913 qpair failed and we were unable to recover it. 00:26:57.913 [2024-07-15 22:43:21.637485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.913 [2024-07-15 22:43:21.637498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.913 qpair failed and we were unable to recover it. 00:26:57.913 [2024-07-15 22:43:21.637707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.913 [2024-07-15 22:43:21.637720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.913 qpair failed and we were unable to recover it. 00:26:57.913 [2024-07-15 22:43:21.638024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.913 [2024-07-15 22:43:21.638054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.913 qpair failed and we were unable to recover it. 
00:26:57.913 [2024-07-15 22:43:21.638284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.913 [2024-07-15 22:43:21.638314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.913 qpair failed and we were unable to recover it. 00:26:57.913 [2024-07-15 22:43:21.638488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.913 [2024-07-15 22:43:21.638517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.913 qpair failed and we were unable to recover it. 00:26:57.913 [2024-07-15 22:43:21.638826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.913 [2024-07-15 22:43:21.638855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.913 qpair failed and we were unable to recover it. 00:26:57.913 [2024-07-15 22:43:21.639168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.913 [2024-07-15 22:43:21.639199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.913 qpair failed and we were unable to recover it. 00:26:57.913 [2024-07-15 22:43:21.639451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.913 [2024-07-15 22:43:21.639481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.913 qpair failed and we were unable to recover it. 00:26:57.913 [2024-07-15 22:43:21.639653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.913 [2024-07-15 22:43:21.639682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.913 qpair failed and we were unable to recover it. 00:26:57.913 [2024-07-15 22:43:21.639913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.913 [2024-07-15 22:43:21.639942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.913 qpair failed and we were unable to recover it. 00:26:57.913 [2024-07-15 22:43:21.640109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.913 [2024-07-15 22:43:21.640139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.913 qpair failed and we were unable to recover it. 00:26:57.913 [2024-07-15 22:43:21.640399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.913 [2024-07-15 22:43:21.640430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.913 qpair failed and we were unable to recover it. 00:26:57.913 [2024-07-15 22:43:21.640649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.913 [2024-07-15 22:43:21.640680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.913 qpair failed and we were unable to recover it. 
00:26:57.913 [2024-07-15 22:43:21.641038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.913 [2024-07-15 22:43:21.641069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.913 qpair failed and we were unable to recover it. 00:26:57.913 [2024-07-15 22:43:21.641396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.913 [2024-07-15 22:43:21.641428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.913 qpair failed and we were unable to recover it. 00:26:57.913 [2024-07-15 22:43:21.641743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.913 [2024-07-15 22:43:21.641756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.913 qpair failed and we were unable to recover it. 00:26:57.913 [2024-07-15 22:43:21.642021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.913 [2024-07-15 22:43:21.642035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.913 qpair failed and we were unable to recover it. 00:26:57.913 [2024-07-15 22:43:21.642249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.913 [2024-07-15 22:43:21.642263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.913 qpair failed and we were unable to recover it. 00:26:57.913 [2024-07-15 22:43:21.642412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.913 [2024-07-15 22:43:21.642425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.913 qpair failed and we were unable to recover it. 00:26:57.913 [2024-07-15 22:43:21.642638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.913 [2024-07-15 22:43:21.642667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.913 qpair failed and we were unable to recover it. 00:26:57.913 [2024-07-15 22:43:21.642973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.913 [2024-07-15 22:43:21.643002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.913 qpair failed and we were unable to recover it. 00:26:57.913 [2024-07-15 22:43:21.643237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.913 [2024-07-15 22:43:21.643268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.913 qpair failed and we were unable to recover it. 00:26:57.913 [2024-07-15 22:43:21.643453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.913 [2024-07-15 22:43:21.643482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.913 qpair failed and we were unable to recover it. 
00:26:57.914 [2024-07-15 22:43:21.643723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.914 [2024-07-15 22:43:21.643737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.914 qpair failed and we were unable to recover it. 00:26:57.914 [2024-07-15 22:43:21.644045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.914 [2024-07-15 22:43:21.644058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.914 qpair failed and we were unable to recover it. 00:26:57.914 [2024-07-15 22:43:21.644293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.914 [2024-07-15 22:43:21.644307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.914 qpair failed and we were unable to recover it. 00:26:57.914 [2024-07-15 22:43:21.644577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.914 [2024-07-15 22:43:21.644590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.914 qpair failed and we were unable to recover it. 00:26:57.914 [2024-07-15 22:43:21.644796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.914 [2024-07-15 22:43:21.644810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.914 qpair failed and we were unable to recover it. 00:26:57.914 [2024-07-15 22:43:21.645060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.914 [2024-07-15 22:43:21.645073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.914 qpair failed and we were unable to recover it. 00:26:57.914 [2024-07-15 22:43:21.645384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.914 [2024-07-15 22:43:21.645398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.914 qpair failed and we were unable to recover it. 00:26:57.914 [2024-07-15 22:43:21.645603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.914 [2024-07-15 22:43:21.645616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.914 qpair failed and we were unable to recover it. 00:26:57.914 [2024-07-15 22:43:21.645755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.914 [2024-07-15 22:43:21.645774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.914 qpair failed and we were unable to recover it. 00:26:57.914 [2024-07-15 22:43:21.646072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.914 [2024-07-15 22:43:21.646101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.914 qpair failed and we were unable to recover it. 
00:26:57.914 [2024-07-15 22:43:21.646304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.914 [2024-07-15 22:43:21.646335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.914 qpair failed and we were unable to recover it. 00:26:57.914 [2024-07-15 22:43:21.646522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.914 [2024-07-15 22:43:21.646552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.914 qpair failed and we were unable to recover it. 00:26:57.914 [2024-07-15 22:43:21.646806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.914 [2024-07-15 22:43:21.646835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.914 qpair failed and we were unable to recover it. 00:26:57.914 [2024-07-15 22:43:21.647133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.914 [2024-07-15 22:43:21.647162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.914 qpair failed and we were unable to recover it. 00:26:57.914 [2024-07-15 22:43:21.647479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.914 [2024-07-15 22:43:21.647509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.914 qpair failed and we were unable to recover it. 00:26:57.914 [2024-07-15 22:43:21.647703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.914 [2024-07-15 22:43:21.647734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.914 qpair failed and we were unable to recover it. 00:26:57.914 [2024-07-15 22:43:21.648026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.914 [2024-07-15 22:43:21.648061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.914 qpair failed and we were unable to recover it. 00:26:57.914 [2024-07-15 22:43:21.648394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.914 [2024-07-15 22:43:21.648425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.914 qpair failed and we were unable to recover it. 00:26:57.914 [2024-07-15 22:43:21.648656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.914 [2024-07-15 22:43:21.648669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.914 qpair failed and we were unable to recover it. 00:26:57.914 [2024-07-15 22:43:21.648799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.914 [2024-07-15 22:43:21.648812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.914 qpair failed and we were unable to recover it. 
00:26:57.914 [2024-07-15 22:43:21.649078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.914 [2024-07-15 22:43:21.649091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.914 qpair failed and we were unable to recover it. 00:26:57.914 [2024-07-15 22:43:21.649332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.914 [2024-07-15 22:43:21.649345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.914 qpair failed and we were unable to recover it. 00:26:57.914 [2024-07-15 22:43:21.649605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.914 [2024-07-15 22:43:21.649635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.914 qpair failed and we were unable to recover it. 00:26:57.914 [2024-07-15 22:43:21.649916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.914 [2024-07-15 22:43:21.649946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.914 qpair failed and we were unable to recover it. 00:26:57.914 [2024-07-15 22:43:21.650282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.914 [2024-07-15 22:43:21.650312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.914 qpair failed and we were unable to recover it. 00:26:57.914 [2024-07-15 22:43:21.650483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.914 [2024-07-15 22:43:21.650513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.914 qpair failed and we were unable to recover it. 00:26:57.914 [2024-07-15 22:43:21.650794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.914 [2024-07-15 22:43:21.650807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.914 qpair failed and we were unable to recover it. 00:26:57.914 [2024-07-15 22:43:21.651075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.914 [2024-07-15 22:43:21.651089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.914 qpair failed and we were unable to recover it. 00:26:57.914 [2024-07-15 22:43:21.651286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.914 [2024-07-15 22:43:21.651300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.914 qpair failed and we were unable to recover it. 00:26:57.914 [2024-07-15 22:43:21.651440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.914 [2024-07-15 22:43:21.651482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.914 qpair failed and we were unable to recover it. 
00:26:57.914 [2024-07-15 22:43:21.651708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.914 [2024-07-15 22:43:21.651737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.914 qpair failed and we were unable to recover it. 00:26:57.914 [2024-07-15 22:43:21.652046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.914 [2024-07-15 22:43:21.652075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.914 qpair failed and we were unable to recover it. 00:26:57.914 [2024-07-15 22:43:21.652364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.914 [2024-07-15 22:43:21.652394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.914 qpair failed and we were unable to recover it. 00:26:57.914 [2024-07-15 22:43:21.652694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.914 [2024-07-15 22:43:21.652724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.914 qpair failed and we were unable to recover it. 00:26:57.914 [2024-07-15 22:43:21.653042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.914 [2024-07-15 22:43:21.653071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.914 qpair failed and we were unable to recover it. 00:26:57.914 [2024-07-15 22:43:21.653379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.914 [2024-07-15 22:43:21.653392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.914 qpair failed and we were unable to recover it. 00:26:57.914 [2024-07-15 22:43:21.653620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.914 [2024-07-15 22:43:21.653633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.914 qpair failed and we were unable to recover it. 00:26:57.914 [2024-07-15 22:43:21.653789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.914 [2024-07-15 22:43:21.653802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.914 qpair failed and we were unable to recover it. 00:26:57.914 [2024-07-15 22:43:21.654035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.914 [2024-07-15 22:43:21.654065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.914 qpair failed and we were unable to recover it. 00:26:57.914 [2024-07-15 22:43:21.654407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.914 [2024-07-15 22:43:21.654438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.914 qpair failed and we were unable to recover it. 
00:26:57.914 [2024-07-15 22:43:21.654757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.914 [2024-07-15 22:43:21.654786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.914 qpair failed and we were unable to recover it. 00:26:57.915 [2024-07-15 22:43:21.655042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.915 [2024-07-15 22:43:21.655070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.915 qpair failed and we were unable to recover it. 00:26:57.915 [2024-07-15 22:43:21.655403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.915 [2024-07-15 22:43:21.655416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.915 qpair failed and we were unable to recover it. 00:26:57.915 [2024-07-15 22:43:21.655563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.915 [2024-07-15 22:43:21.655576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.915 qpair failed and we were unable to recover it. 00:26:57.915 [2024-07-15 22:43:21.655770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.915 [2024-07-15 22:43:21.655784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.915 qpair failed and we were unable to recover it. 00:26:57.915 [2024-07-15 22:43:21.656085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.915 [2024-07-15 22:43:21.656115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.915 qpair failed and we were unable to recover it. 00:26:57.915 [2024-07-15 22:43:21.656394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.915 [2024-07-15 22:43:21.656424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.915 qpair failed and we were unable to recover it. 00:26:57.915 [2024-07-15 22:43:21.656668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.915 [2024-07-15 22:43:21.656697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.915 qpair failed and we were unable to recover it. 00:26:57.915 [2024-07-15 22:43:21.656960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.915 [2024-07-15 22:43:21.656989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.915 qpair failed and we were unable to recover it. 00:26:57.915 [2024-07-15 22:43:21.657298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.915 [2024-07-15 22:43:21.657330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.915 qpair failed and we were unable to recover it. 
00:26:57.915 [2024-07-15 22:43:21.657589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.915 [2024-07-15 22:43:21.657619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.915 qpair failed and we were unable to recover it. 00:26:57.915 [2024-07-15 22:43:21.657852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.915 [2024-07-15 22:43:21.657882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.915 qpair failed and we were unable to recover it. 00:26:57.915 [2024-07-15 22:43:21.658120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.915 [2024-07-15 22:43:21.658149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.915 qpair failed and we were unable to recover it. 00:26:57.915 [2024-07-15 22:43:21.658437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.915 [2024-07-15 22:43:21.658452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.915 qpair failed and we were unable to recover it. 00:26:57.915 [2024-07-15 22:43:21.658628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.915 [2024-07-15 22:43:21.658642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.915 qpair failed and we were unable to recover it. 00:26:57.915 [2024-07-15 22:43:21.658915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.915 [2024-07-15 22:43:21.658945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.915 qpair failed and we were unable to recover it. 00:26:57.915 [2024-07-15 22:43:21.659243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.915 [2024-07-15 22:43:21.659279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.915 qpair failed and we were unable to recover it. 00:26:57.915 [2024-07-15 22:43:21.659524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.915 [2024-07-15 22:43:21.659539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.915 qpair failed and we were unable to recover it. 00:26:57.915 [2024-07-15 22:43:21.659662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.915 [2024-07-15 22:43:21.659675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.915 qpair failed and we were unable to recover it. 00:26:57.915 [2024-07-15 22:43:21.659826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.915 [2024-07-15 22:43:21.659840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.915 qpair failed and we were unable to recover it. 
00:26:57.915 [2024-07-15 22:43:21.660074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.915 [2024-07-15 22:43:21.660088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.915 qpair failed and we were unable to recover it. 00:26:57.915 [2024-07-15 22:43:21.660370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.915 [2024-07-15 22:43:21.660384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.915 qpair failed and we were unable to recover it. 00:26:57.915 [2024-07-15 22:43:21.660634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.915 [2024-07-15 22:43:21.660648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.915 qpair failed and we were unable to recover it. 00:26:57.915 [2024-07-15 22:43:21.660781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.915 [2024-07-15 22:43:21.660794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.915 qpair failed and we were unable to recover it. 00:26:57.915 [2024-07-15 22:43:21.661068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.915 [2024-07-15 22:43:21.661098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.915 qpair failed and we were unable to recover it. 00:26:57.915 [2024-07-15 22:43:21.661383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.915 [2024-07-15 22:43:21.661414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.915 qpair failed and we were unable to recover it. 00:26:57.915 [2024-07-15 22:43:21.661624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.915 [2024-07-15 22:43:21.661637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.915 qpair failed and we were unable to recover it. 00:26:57.915 [2024-07-15 22:43:21.661864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.915 [2024-07-15 22:43:21.661877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.915 qpair failed and we were unable to recover it. 00:26:57.915 [2024-07-15 22:43:21.662090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.915 [2024-07-15 22:43:21.662103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.915 qpair failed and we were unable to recover it. 00:26:57.915 [2024-07-15 22:43:21.662379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.915 [2024-07-15 22:43:21.662393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.915 qpair failed and we were unable to recover it. 
00:26:57.915 [2024-07-15 22:43:21.662594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.915 [2024-07-15 22:43:21.662608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420
00:26:57.915 qpair failed and we were unable to recover it.
[... the same three-line error sequence (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.) repeats roughly 200 more times, identical except for timestamps, covering 2024-07-15 22:43:21.662834 through 22:43:21.716238 ...]
00:26:57.921 [2024-07-15 22:43:21.716238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.921 [2024-07-15 22:43:21.716252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420
00:26:57.921 qpair failed and we were unable to recover it.
00:26:57.921 [2024-07-15 22:43:21.716452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.921 [2024-07-15 22:43:21.716465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.921 qpair failed and we were unable to recover it. 00:26:57.921 [2024-07-15 22:43:21.716734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.921 [2024-07-15 22:43:21.716748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.921 qpair failed and we were unable to recover it. 00:26:57.921 [2024-07-15 22:43:21.717070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.921 [2024-07-15 22:43:21.717083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.921 qpair failed and we were unable to recover it. 00:26:57.921 [2024-07-15 22:43:21.717214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.921 [2024-07-15 22:43:21.717235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.921 qpair failed and we were unable to recover it. 00:26:57.921 [2024-07-15 22:43:21.717453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.921 [2024-07-15 22:43:21.717466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.921 qpair failed and we were unable to recover it. 00:26:57.921 [2024-07-15 22:43:21.717659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.921 [2024-07-15 22:43:21.717672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.921 qpair failed and we were unable to recover it. 00:26:57.921 [2024-07-15 22:43:21.717866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.921 [2024-07-15 22:43:21.717879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.921 qpair failed and we were unable to recover it. 00:26:57.921 [2024-07-15 22:43:21.718068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.921 [2024-07-15 22:43:21.718082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.921 qpair failed and we were unable to recover it. 00:26:57.921 [2024-07-15 22:43:21.718332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.921 [2024-07-15 22:43:21.718346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.921 qpair failed and we were unable to recover it. 00:26:57.921 [2024-07-15 22:43:21.718551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.921 [2024-07-15 22:43:21.718564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.921 qpair failed and we were unable to recover it. 
00:26:57.921 [2024-07-15 22:43:21.718788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.921 [2024-07-15 22:43:21.718801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.921 qpair failed and we were unable to recover it. 00:26:57.921 [2024-07-15 22:43:21.719053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.921 [2024-07-15 22:43:21.719067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.921 qpair failed and we were unable to recover it. 00:26:57.921 [2024-07-15 22:43:21.719316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.921 [2024-07-15 22:43:21.719330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.921 qpair failed and we were unable to recover it. 00:26:57.921 [2024-07-15 22:43:21.719529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.921 [2024-07-15 22:43:21.719543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.921 qpair failed and we were unable to recover it. 00:26:57.921 [2024-07-15 22:43:21.719698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.921 [2024-07-15 22:43:21.719712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.921 qpair failed and we were unable to recover it. 00:26:57.921 [2024-07-15 22:43:21.720024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.921 [2024-07-15 22:43:21.720054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.921 qpair failed and we were unable to recover it. 00:26:57.921 [2024-07-15 22:43:21.720300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.921 [2024-07-15 22:43:21.720343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.921 qpair failed and we were unable to recover it. 00:26:57.921 [2024-07-15 22:43:21.720605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.921 [2024-07-15 22:43:21.720619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.921 qpair failed and we were unable to recover it. 00:26:57.921 [2024-07-15 22:43:21.720768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.921 [2024-07-15 22:43:21.720781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.921 qpair failed and we were unable to recover it. 00:26:57.921 [2024-07-15 22:43:21.721010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.921 [2024-07-15 22:43:21.721024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.921 qpair failed and we were unable to recover it. 
00:26:57.921 [2024-07-15 22:43:21.721213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.921 [2024-07-15 22:43:21.721231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.921 qpair failed and we were unable to recover it. 00:26:57.921 [2024-07-15 22:43:21.721436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.921 [2024-07-15 22:43:21.721450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.921 qpair failed and we were unable to recover it. 00:26:57.921 [2024-07-15 22:43:21.721652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.921 [2024-07-15 22:43:21.721665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.921 qpair failed and we were unable to recover it. 00:26:57.921 [2024-07-15 22:43:21.721960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.921 [2024-07-15 22:43:21.721974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.921 qpair failed and we were unable to recover it. 00:26:57.921 [2024-07-15 22:43:21.722278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.921 [2024-07-15 22:43:21.722293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.921 qpair failed and we were unable to recover it. 00:26:57.921 [2024-07-15 22:43:21.722591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.921 [2024-07-15 22:43:21.722604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.921 qpair failed and we were unable to recover it. 00:26:57.921 [2024-07-15 22:43:21.722789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.921 [2024-07-15 22:43:21.722802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.921 qpair failed and we were unable to recover it. 00:26:57.921 [2024-07-15 22:43:21.723015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.921 [2024-07-15 22:43:21.723029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.921 qpair failed and we were unable to recover it. 00:26:57.921 [2024-07-15 22:43:21.723275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.921 [2024-07-15 22:43:21.723289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.921 qpair failed and we were unable to recover it. 00:26:57.921 [2024-07-15 22:43:21.723491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.921 [2024-07-15 22:43:21.723504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.921 qpair failed and we were unable to recover it. 
00:26:57.921 [2024-07-15 22:43:21.723773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.921 [2024-07-15 22:43:21.723787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.921 qpair failed and we were unable to recover it. 00:26:57.921 [2024-07-15 22:43:21.724004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.921 [2024-07-15 22:43:21.724018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.921 qpair failed and we were unable to recover it. 00:26:57.921 [2024-07-15 22:43:21.724295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.921 [2024-07-15 22:43:21.724309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.921 qpair failed and we were unable to recover it. 00:26:57.921 [2024-07-15 22:43:21.724558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.921 [2024-07-15 22:43:21.724571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.921 qpair failed and we were unable to recover it. 00:26:57.921 [2024-07-15 22:43:21.724840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.921 [2024-07-15 22:43:21.724854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.922 qpair failed and we were unable to recover it. 00:26:57.922 [2024-07-15 22:43:21.725145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.922 [2024-07-15 22:43:21.725159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.922 qpair failed and we were unable to recover it. 00:26:57.922 [2024-07-15 22:43:21.725375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.922 [2024-07-15 22:43:21.725389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.922 qpair failed and we were unable to recover it. 00:26:57.922 [2024-07-15 22:43:21.725613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.922 [2024-07-15 22:43:21.725627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.922 qpair failed and we were unable to recover it. 00:26:57.922 [2024-07-15 22:43:21.725823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.922 [2024-07-15 22:43:21.725837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.922 qpair failed and we were unable to recover it. 00:26:57.922 [2024-07-15 22:43:21.726197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.922 [2024-07-15 22:43:21.726233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.922 qpair failed and we were unable to recover it. 
00:26:57.922 [2024-07-15 22:43:21.726422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.922 [2024-07-15 22:43:21.726451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.922 qpair failed and we were unable to recover it. 00:26:57.922 [2024-07-15 22:43:21.726700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.922 [2024-07-15 22:43:21.726730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.922 qpair failed and we were unable to recover it. 00:26:57.922 [2024-07-15 22:43:21.726976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.922 [2024-07-15 22:43:21.726989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.922 qpair failed and we were unable to recover it. 00:26:57.922 [2024-07-15 22:43:21.727241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.922 [2024-07-15 22:43:21.727257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.922 qpair failed and we were unable to recover it. 00:26:57.922 [2024-07-15 22:43:21.727379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.922 [2024-07-15 22:43:21.727392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.922 qpair failed and we were unable to recover it. 00:26:57.922 [2024-07-15 22:43:21.727594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.922 [2024-07-15 22:43:21.727608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.922 qpair failed and we were unable to recover it. 00:26:57.922 [2024-07-15 22:43:21.727906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.922 [2024-07-15 22:43:21.727920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.922 qpair failed and we were unable to recover it. 00:26:57.922 [2024-07-15 22:43:21.728167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.922 [2024-07-15 22:43:21.728181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.922 qpair failed and we were unable to recover it. 00:26:57.922 [2024-07-15 22:43:21.728394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.922 [2024-07-15 22:43:21.728408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.922 qpair failed and we were unable to recover it. 00:26:57.922 [2024-07-15 22:43:21.728559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.922 [2024-07-15 22:43:21.728572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.922 qpair failed and we were unable to recover it. 
00:26:57.922 [2024-07-15 22:43:21.728834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.922 [2024-07-15 22:43:21.728863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.922 qpair failed and we were unable to recover it. 00:26:57.922 [2024-07-15 22:43:21.729203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.922 [2024-07-15 22:43:21.729240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.922 qpair failed and we were unable to recover it. 00:26:57.922 [2024-07-15 22:43:21.729555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.922 [2024-07-15 22:43:21.729568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.922 qpair failed and we were unable to recover it. 00:26:57.922 [2024-07-15 22:43:21.729766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.922 [2024-07-15 22:43:21.729805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.922 qpair failed and we were unable to recover it. 00:26:57.922 [2024-07-15 22:43:21.730123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.922 [2024-07-15 22:43:21.730152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.922 qpair failed and we were unable to recover it. 00:26:57.922 [2024-07-15 22:43:21.730324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.922 [2024-07-15 22:43:21.730354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.922 qpair failed and we were unable to recover it. 00:26:57.922 [2024-07-15 22:43:21.730658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.922 [2024-07-15 22:43:21.730687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.922 qpair failed and we were unable to recover it. 00:26:57.922 [2024-07-15 22:43:21.731017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.922 [2024-07-15 22:43:21.731047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.922 qpair failed and we were unable to recover it. 00:26:57.922 [2024-07-15 22:43:21.731359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.922 [2024-07-15 22:43:21.731388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.922 qpair failed and we were unable to recover it. 00:26:57.922 [2024-07-15 22:43:21.731577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.922 [2024-07-15 22:43:21.731607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.922 qpair failed and we were unable to recover it. 
00:26:57.922 [2024-07-15 22:43:21.731901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.922 [2024-07-15 22:43:21.731931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.922 qpair failed and we were unable to recover it. 00:26:57.922 [2024-07-15 22:43:21.732155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.922 [2024-07-15 22:43:21.732184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.922 qpair failed and we were unable to recover it. 00:26:57.922 [2024-07-15 22:43:21.732352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.922 [2024-07-15 22:43:21.732382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.922 qpair failed and we were unable to recover it. 00:26:57.922 [2024-07-15 22:43:21.732615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.922 [2024-07-15 22:43:21.732644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.922 qpair failed and we were unable to recover it. 00:26:57.922 [2024-07-15 22:43:21.732884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.922 [2024-07-15 22:43:21.732898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.922 qpair failed and we were unable to recover it. 00:26:57.922 [2024-07-15 22:43:21.733104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.922 [2024-07-15 22:43:21.733117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.922 qpair failed and we were unable to recover it. 00:26:57.922 [2024-07-15 22:43:21.733338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.922 [2024-07-15 22:43:21.733352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.922 qpair failed and we were unable to recover it. 00:26:57.922 [2024-07-15 22:43:21.733498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.922 [2024-07-15 22:43:21.733512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.922 qpair failed and we were unable to recover it. 00:26:57.922 [2024-07-15 22:43:21.733725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.922 [2024-07-15 22:43:21.733754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.922 qpair failed and we were unable to recover it. 00:26:57.922 [2024-07-15 22:43:21.734094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.922 [2024-07-15 22:43:21.734122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.922 qpair failed and we were unable to recover it. 
00:26:57.922 [2024-07-15 22:43:21.734424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.922 [2024-07-15 22:43:21.734454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.922 qpair failed and we were unable to recover it. 00:26:57.922 [2024-07-15 22:43:21.734690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.922 [2024-07-15 22:43:21.734719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.922 qpair failed and we were unable to recover it. 00:26:57.922 [2024-07-15 22:43:21.734987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.922 [2024-07-15 22:43:21.735016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.922 qpair failed and we were unable to recover it. 00:26:57.922 [2024-07-15 22:43:21.735256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.922 [2024-07-15 22:43:21.735286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.922 qpair failed and we were unable to recover it. 00:26:57.922 [2024-07-15 22:43:21.735482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.922 [2024-07-15 22:43:21.735512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.922 qpair failed and we were unable to recover it. 00:26:57.922 [2024-07-15 22:43:21.735768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.923 [2024-07-15 22:43:21.735797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.923 qpair failed and we were unable to recover it. 00:26:57.923 [2024-07-15 22:43:21.736089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.923 [2024-07-15 22:43:21.736118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.923 qpair failed and we were unable to recover it. 00:26:57.923 [2024-07-15 22:43:21.736411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.923 [2024-07-15 22:43:21.736441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.923 qpair failed and we were unable to recover it. 00:26:57.923 [2024-07-15 22:43:21.736669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.923 [2024-07-15 22:43:21.736683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.923 qpair failed and we were unable to recover it. 00:26:57.923 [2024-07-15 22:43:21.736889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.923 [2024-07-15 22:43:21.736902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.923 qpair failed and we were unable to recover it. 
00:26:57.923 [2024-07-15 22:43:21.737194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.923 [2024-07-15 22:43:21.737208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.923 qpair failed and we were unable to recover it. 00:26:57.923 [2024-07-15 22:43:21.737430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.923 [2024-07-15 22:43:21.737444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.923 qpair failed and we were unable to recover it. 00:26:57.923 [2024-07-15 22:43:21.737703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.923 [2024-07-15 22:43:21.737716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.923 qpair failed and we were unable to recover it. 00:26:57.923 [2024-07-15 22:43:21.737963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.923 [2024-07-15 22:43:21.737979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.923 qpair failed and we were unable to recover it. 00:26:57.923 [2024-07-15 22:43:21.738238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.923 [2024-07-15 22:43:21.738252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.923 qpair failed and we were unable to recover it. 00:26:57.923 [2024-07-15 22:43:21.738513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.923 [2024-07-15 22:43:21.738527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.923 qpair failed and we were unable to recover it. 00:26:57.923 [2024-07-15 22:43:21.738731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.923 [2024-07-15 22:43:21.738744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.923 qpair failed and we were unable to recover it. 00:26:57.923 [2024-07-15 22:43:21.739049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.923 [2024-07-15 22:43:21.739062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.923 qpair failed and we were unable to recover it. 00:26:57.923 [2024-07-15 22:43:21.739315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.923 [2024-07-15 22:43:21.739328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.923 qpair failed and we were unable to recover it. 00:26:57.923 [2024-07-15 22:43:21.739526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.923 [2024-07-15 22:43:21.739540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.923 qpair failed and we were unable to recover it. 
00:26:57.923 [2024-07-15 22:43:21.739690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.923 [2024-07-15 22:43:21.739703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.923 qpair failed and we were unable to recover it. 00:26:57.923 [2024-07-15 22:43:21.739912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.923 [2024-07-15 22:43:21.739941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.923 qpair failed and we were unable to recover it. 00:26:57.923 [2024-07-15 22:43:21.740231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.923 [2024-07-15 22:43:21.740262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.923 qpair failed and we were unable to recover it. 00:26:57.923 [2024-07-15 22:43:21.740599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.923 [2024-07-15 22:43:21.740629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.923 qpair failed and we were unable to recover it. 00:26:57.923 [2024-07-15 22:43:21.740849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.923 [2024-07-15 22:43:21.740878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.923 qpair failed and we were unable to recover it. 00:26:57.923 [2024-07-15 22:43:21.741039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.923 [2024-07-15 22:43:21.741068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.923 qpair failed and we were unable to recover it. 00:26:57.923 [2024-07-15 22:43:21.741356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.923 [2024-07-15 22:43:21.741387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.923 qpair failed and we were unable to recover it. 00:26:57.923 [2024-07-15 22:43:21.741751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.923 [2024-07-15 22:43:21.741780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.923 qpair failed and we were unable to recover it. 00:26:57.923 [2024-07-15 22:43:21.742086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.923 [2024-07-15 22:43:21.742115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.923 qpair failed and we were unable to recover it. 00:26:57.923 [2024-07-15 22:43:21.742399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.923 [2024-07-15 22:43:21.742429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.923 qpair failed and we were unable to recover it. 
00:26:57.923 [2024-07-15 22:43:21.742611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.923 [2024-07-15 22:43:21.742649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.923 qpair failed and we were unable to recover it. 00:26:57.923 [2024-07-15 22:43:21.742791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.923 [2024-07-15 22:43:21.742805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.923 qpair failed and we were unable to recover it. 00:26:57.923 [2024-07-15 22:43:21.743061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.923 [2024-07-15 22:43:21.743074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.923 qpair failed and we were unable to recover it. 00:26:57.923 [2024-07-15 22:43:21.743282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.923 [2024-07-15 22:43:21.743313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.923 qpair failed and we were unable to recover it. 00:26:57.923 [2024-07-15 22:43:21.743599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.923 [2024-07-15 22:43:21.743628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.923 qpair failed and we were unable to recover it. 00:26:57.923 [2024-07-15 22:43:21.743941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.923 [2024-07-15 22:43:21.743970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.923 qpair failed and we were unable to recover it. 00:26:57.923 [2024-07-15 22:43:21.744274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.923 [2024-07-15 22:43:21.744304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.923 qpair failed and we were unable to recover it. 00:26:57.923 [2024-07-15 22:43:21.744591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.923 [2024-07-15 22:43:21.744604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.923 qpair failed and we were unable to recover it. 00:26:57.923 [2024-07-15 22:43:21.744965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.923 [2024-07-15 22:43:21.744994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.923 qpair failed and we were unable to recover it. 00:26:57.923 [2024-07-15 22:43:21.745308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.923 [2024-07-15 22:43:21.745338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.923 qpair failed and we were unable to recover it. 
00:26:57.923 [2024-07-15 22:43:21.745618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.923 [2024-07-15 22:43:21.745648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.923 qpair failed and we were unable to recover it. 00:26:57.923 [2024-07-15 22:43:21.745831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.923 [2024-07-15 22:43:21.745860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.923 qpair failed and we were unable to recover it. 00:26:57.923 [2024-07-15 22:43:21.746166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.923 [2024-07-15 22:43:21.746194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.923 qpair failed and we were unable to recover it. 00:26:57.923 [2024-07-15 22:43:21.746510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.923 [2024-07-15 22:43:21.746541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.923 qpair failed and we were unable to recover it. 00:26:57.923 [2024-07-15 22:43:21.746785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.923 [2024-07-15 22:43:21.746814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.924 qpair failed and we were unable to recover it. 00:26:57.924 [2024-07-15 22:43:21.747129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.924 [2024-07-15 22:43:21.747159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.924 qpair failed and we were unable to recover it. 00:26:57.924 [2024-07-15 22:43:21.747421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.924 [2024-07-15 22:43:21.747452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.924 qpair failed and we were unable to recover it. 00:26:57.924 [2024-07-15 22:43:21.747626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.924 [2024-07-15 22:43:21.747655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.924 qpair failed and we were unable to recover it. 00:26:57.924 [2024-07-15 22:43:21.747829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.924 [2024-07-15 22:43:21.747859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.924 qpair failed and we were unable to recover it. 00:26:57.924 [2024-07-15 22:43:21.748197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.924 [2024-07-15 22:43:21.748235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.924 qpair failed and we were unable to recover it. 
00:26:57.924 [2024-07-15 22:43:21.748524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.924 [2024-07-15 22:43:21.748553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.924 qpair failed and we were unable to recover it. 00:26:57.924 [2024-07-15 22:43:21.748808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.924 [2024-07-15 22:43:21.748837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.924 qpair failed and we were unable to recover it. 00:26:57.924 [2024-07-15 22:43:21.749092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.924 [2024-07-15 22:43:21.749105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.924 qpair failed and we were unable to recover it. 00:26:57.924 [2024-07-15 22:43:21.749404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.924 [2024-07-15 22:43:21.749421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.924 qpair failed and we were unable to recover it. 00:26:57.924 [2024-07-15 22:43:21.749623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.924 [2024-07-15 22:43:21.749637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.924 qpair failed and we were unable to recover it. 00:26:57.924 [2024-07-15 22:43:21.749914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.924 [2024-07-15 22:43:21.749927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.924 qpair failed and we were unable to recover it. 00:26:57.924 [2024-07-15 22:43:21.750124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.924 [2024-07-15 22:43:21.750137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.924 qpair failed and we were unable to recover it. 00:26:57.924 [2024-07-15 22:43:21.750347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.924 [2024-07-15 22:43:21.750361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.924 qpair failed and we were unable to recover it. 00:26:57.924 [2024-07-15 22:43:21.750553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.924 [2024-07-15 22:43:21.750567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.924 qpair failed and we were unable to recover it. 00:26:57.924 [2024-07-15 22:43:21.750772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.924 [2024-07-15 22:43:21.750801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:57.924 qpair failed and we were unable to recover it. 
00:26:57.924 [2024-07-15 22:43:21.751048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.924 [2024-07-15 22:43:21.751077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420
00:26:57.924 qpair failed and we were unable to recover it.
00:26:57.924 [... the same three-line failure (posix_sock_create connect() errno = 111, nvme_tcp_qpair_connect_sock connection error, "qpair failed and we were unable to recover it") repeats continuously for tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420, timestamps 22:43:21.751 through 22:43:21.796 ...]
00:26:57.929 [2024-07-15 22:43:21.797051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.929 [2024-07-15 22:43:21.797088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420
00:26:57.929 qpair failed and we were unable to recover it.
00:26:57.929 [... the same failure then repeats for tqpair=0x19bded0 with addr=10.0.0.2, port=4420, timestamps 22:43:21.797 through 22:43:21.804 ...]
00:26:57.929 [2024-07-15 22:43:21.804750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.929 [2024-07-15 22:43:21.804764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:57.929 qpair failed and we were unable to recover it. 00:26:57.929 [2024-07-15 22:43:21.805071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.929 [2024-07-15 22:43:21.805085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:57.929 qpair failed and we were unable to recover it. 00:26:57.929 [2024-07-15 22:43:21.805238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.929 [2024-07-15 22:43:21.805252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:57.929 qpair failed and we were unable to recover it. 00:26:57.929 [2024-07-15 22:43:21.805536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.929 [2024-07-15 22:43:21.805550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:57.929 qpair failed and we were unable to recover it. 00:26:57.929 [2024-07-15 22:43:21.805772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.929 [2024-07-15 22:43:21.805786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:57.929 qpair failed and we were unable to recover it. 00:26:57.929 [2024-07-15 22:43:21.806069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.929 [2024-07-15 22:43:21.806082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:57.929 qpair failed and we were unable to recover it. 00:26:57.929 [2024-07-15 22:43:21.806330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.929 [2024-07-15 22:43:21.806344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:57.929 qpair failed and we were unable to recover it. 00:26:57.929 [2024-07-15 22:43:21.806625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.929 [2024-07-15 22:43:21.806638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:57.929 qpair failed and we were unable to recover it. 00:26:57.929 [2024-07-15 22:43:21.806784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.929 [2024-07-15 22:43:21.806798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:57.929 qpair failed and we were unable to recover it. 00:26:57.929 [2024-07-15 22:43:21.806984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.930 [2024-07-15 22:43:21.806999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:57.930 qpair failed and we were unable to recover it. 
00:26:57.930 [2024-07-15 22:43:21.807273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.930 [2024-07-15 22:43:21.807289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:57.930 qpair failed and we were unable to recover it. 00:26:57.930 [2024-07-15 22:43:21.807533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.930 [2024-07-15 22:43:21.807548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:57.930 qpair failed and we were unable to recover it. 00:26:57.930 [2024-07-15 22:43:21.807764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.930 [2024-07-15 22:43:21.807778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:57.930 qpair failed and we were unable to recover it. 00:26:57.930 [2024-07-15 22:43:21.807990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.930 [2024-07-15 22:43:21.808004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:57.930 qpair failed and we were unable to recover it. 00:26:57.930 [2024-07-15 22:43:21.808288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.930 [2024-07-15 22:43:21.808301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:57.930 qpair failed and we were unable to recover it. 00:26:57.930 [2024-07-15 22:43:21.808603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.930 [2024-07-15 22:43:21.808617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:57.930 qpair failed and we were unable to recover it. 00:26:57.930 [2024-07-15 22:43:21.808753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.930 [2024-07-15 22:43:21.808767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:57.930 qpair failed and we were unable to recover it. 00:26:57.930 [2024-07-15 22:43:21.809113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.930 [2024-07-15 22:43:21.809128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:57.930 qpair failed and we were unable to recover it. 00:26:57.930 [2024-07-15 22:43:21.809417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.930 [2024-07-15 22:43:21.809431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:57.930 qpair failed and we were unable to recover it. 00:26:57.930 [2024-07-15 22:43:21.809622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.930 [2024-07-15 22:43:21.809636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:57.930 qpair failed and we were unable to recover it. 
00:26:57.930 [2024-07-15 22:43:21.809788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.930 [2024-07-15 22:43:21.809802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:57.930 qpair failed and we were unable to recover it. 00:26:57.930 [2024-07-15 22:43:21.809920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.930 [2024-07-15 22:43:21.809934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:57.930 qpair failed and we were unable to recover it. 00:26:57.930 [2024-07-15 22:43:21.810184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.930 [2024-07-15 22:43:21.810198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:57.930 qpair failed and we were unable to recover it. 00:26:57.930 [2024-07-15 22:43:21.810451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.930 [2024-07-15 22:43:21.810465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:57.930 qpair failed and we were unable to recover it. 00:26:57.930 [2024-07-15 22:43:21.810614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.930 [2024-07-15 22:43:21.810630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:57.930 qpair failed and we were unable to recover it. 00:26:57.930 [2024-07-15 22:43:21.810839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.930 [2024-07-15 22:43:21.810852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:57.930 qpair failed and we were unable to recover it. 00:26:57.930 [2024-07-15 22:43:21.810989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.930 [2024-07-15 22:43:21.811003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:57.930 qpair failed and we were unable to recover it. 00:26:57.930 [2024-07-15 22:43:21.811309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.930 [2024-07-15 22:43:21.811323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:57.930 qpair failed and we were unable to recover it. 00:26:57.930 [2024-07-15 22:43:21.811595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.930 [2024-07-15 22:43:21.811609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:57.930 qpair failed and we were unable to recover it. 00:26:57.930 [2024-07-15 22:43:21.811811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.930 [2024-07-15 22:43:21.811825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:57.930 qpair failed and we were unable to recover it. 
00:26:57.930 [2024-07-15 22:43:21.812016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.930 [2024-07-15 22:43:21.812029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:57.930 qpair failed and we were unable to recover it. 00:26:57.930 [2024-07-15 22:43:21.812169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.930 [2024-07-15 22:43:21.812184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:57.930 qpair failed and we were unable to recover it. 00:26:57.930 [2024-07-15 22:43:21.812410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.930 [2024-07-15 22:43:21.812424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:57.930 qpair failed and we were unable to recover it. 00:26:57.930 [2024-07-15 22:43:21.812609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.930 [2024-07-15 22:43:21.812623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:57.930 qpair failed and we were unable to recover it. 00:26:57.930 [2024-07-15 22:43:21.812885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.930 [2024-07-15 22:43:21.812899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:57.930 qpair failed and we were unable to recover it. 00:26:57.930 [2024-07-15 22:43:21.813104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.930 [2024-07-15 22:43:21.813118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:57.930 qpair failed and we were unable to recover it. 00:26:57.930 [2024-07-15 22:43:21.813372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.930 [2024-07-15 22:43:21.813386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:57.930 qpair failed and we were unable to recover it. 00:26:57.930 [2024-07-15 22:43:21.813582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.930 [2024-07-15 22:43:21.813596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:57.930 qpair failed and we were unable to recover it. 00:26:57.930 [2024-07-15 22:43:21.813745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.930 [2024-07-15 22:43:21.813759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:57.930 qpair failed and we were unable to recover it. 00:26:57.930 [2024-07-15 22:43:21.813986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.930 [2024-07-15 22:43:21.813999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:57.930 qpair failed and we were unable to recover it. 
00:26:57.930 [2024-07-15 22:43:21.814260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.930 [2024-07-15 22:43:21.814274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:57.930 qpair failed and we were unable to recover it. 00:26:57.930 [2024-07-15 22:43:21.814407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.930 [2024-07-15 22:43:21.814421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:57.930 qpair failed and we were unable to recover it. 00:26:57.930 [2024-07-15 22:43:21.814621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.930 [2024-07-15 22:43:21.814636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:57.930 qpair failed and we were unable to recover it. 00:26:57.930 [2024-07-15 22:43:21.814838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.930 [2024-07-15 22:43:21.814852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:57.930 qpair failed and we were unable to recover it. 00:26:57.930 [2024-07-15 22:43:21.815124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.930 [2024-07-15 22:43:21.815137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:57.930 qpair failed and we were unable to recover it. 00:26:57.930 [2024-07-15 22:43:21.815331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.930 [2024-07-15 22:43:21.815346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:57.930 qpair failed and we were unable to recover it. 00:26:57.930 [2024-07-15 22:43:21.815543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.930 [2024-07-15 22:43:21.815557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:57.930 qpair failed and we were unable to recover it. 00:26:57.930 [2024-07-15 22:43:21.815738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.930 [2024-07-15 22:43:21.815752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:57.930 qpair failed and we were unable to recover it. 00:26:57.930 [2024-07-15 22:43:21.816035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.930 [2024-07-15 22:43:21.816049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:57.930 qpair failed and we were unable to recover it. 00:26:57.930 [2024-07-15 22:43:21.816186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.930 [2024-07-15 22:43:21.816200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:57.930 qpair failed and we were unable to recover it. 
00:26:57.930 [2024-07-15 22:43:21.816431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.931 [2024-07-15 22:43:21.816446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:57.931 qpair failed and we were unable to recover it. 00:26:57.931 [2024-07-15 22:43:21.816649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.931 [2024-07-15 22:43:21.816666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:57.931 qpair failed and we were unable to recover it. 00:26:57.931 [2024-07-15 22:43:21.816848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.931 [2024-07-15 22:43:21.816862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:57.931 qpair failed and we were unable to recover it. 00:26:57.931 [2024-07-15 22:43:21.817135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.931 [2024-07-15 22:43:21.817149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:57.931 qpair failed and we were unable to recover it. 00:26:57.931 [2024-07-15 22:43:21.817352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.931 [2024-07-15 22:43:21.817366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:57.931 qpair failed and we were unable to recover it. 00:26:57.931 [2024-07-15 22:43:21.817568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.931 [2024-07-15 22:43:21.817581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:57.931 qpair failed and we were unable to recover it. 00:26:57.931 [2024-07-15 22:43:21.817748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.931 [2024-07-15 22:43:21.817762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:57.931 qpair failed and we were unable to recover it. 00:26:57.931 [2024-07-15 22:43:21.817954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.931 [2024-07-15 22:43:21.817968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:57.931 qpair failed and we were unable to recover it. 00:26:57.931 [2024-07-15 22:43:21.818216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.931 [2024-07-15 22:43:21.818234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:57.931 qpair failed and we were unable to recover it. 00:26:57.931 [2024-07-15 22:43:21.818452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.931 [2024-07-15 22:43:21.818466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:57.931 qpair failed and we were unable to recover it. 
00:26:57.931 [2024-07-15 22:43:21.818662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.931 [2024-07-15 22:43:21.818676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:57.931 qpair failed and we were unable to recover it. 00:26:57.931 [2024-07-15 22:43:21.818865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.931 [2024-07-15 22:43:21.818879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:57.931 qpair failed and we were unable to recover it. 00:26:57.931 [2024-07-15 22:43:21.819060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.931 [2024-07-15 22:43:21.819074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:57.931 qpair failed and we were unable to recover it. 00:26:57.931 [2024-07-15 22:43:21.819298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.931 [2024-07-15 22:43:21.819312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:57.931 qpair failed and we were unable to recover it. 00:26:57.931 [2024-07-15 22:43:21.819465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.931 [2024-07-15 22:43:21.819479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:57.931 qpair failed and we were unable to recover it. 00:26:57.931 [2024-07-15 22:43:21.819728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.931 [2024-07-15 22:43:21.819742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:57.931 qpair failed and we were unable to recover it. 00:26:57.931 [2024-07-15 22:43:21.819940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.931 [2024-07-15 22:43:21.819954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:57.931 qpair failed and we were unable to recover it. 00:26:57.931 [2024-07-15 22:43:21.820145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.931 [2024-07-15 22:43:21.820159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:57.931 qpair failed and we were unable to recover it. 00:26:57.931 [2024-07-15 22:43:21.820460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.931 [2024-07-15 22:43:21.820474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:57.931 qpair failed and we were unable to recover it. 00:26:57.931 [2024-07-15 22:43:21.820674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.931 [2024-07-15 22:43:21.820688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:57.931 qpair failed and we were unable to recover it. 
00:26:57.931 [2024-07-15 22:43:21.820993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.931 [2024-07-15 22:43:21.821007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:57.931 qpair failed and we were unable to recover it. 00:26:57.931 [2024-07-15 22:43:21.821271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.931 [2024-07-15 22:43:21.821285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:57.931 qpair failed and we were unable to recover it. 00:26:57.931 [2024-07-15 22:43:21.821426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.931 [2024-07-15 22:43:21.821440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:57.931 qpair failed and we were unable to recover it. 00:26:57.931 [2024-07-15 22:43:21.821667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.931 [2024-07-15 22:43:21.821680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:57.931 qpair failed and we were unable to recover it. 00:26:57.931 [2024-07-15 22:43:21.821883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.931 [2024-07-15 22:43:21.821897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:57.931 qpair failed and we were unable to recover it. 00:26:57.931 [2024-07-15 22:43:21.822194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.931 [2024-07-15 22:43:21.822208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:57.931 qpair failed and we were unable to recover it. 00:26:57.931 [2024-07-15 22:43:21.822468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.931 [2024-07-15 22:43:21.822483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:57.931 qpair failed and we were unable to recover it. 00:26:57.931 [2024-07-15 22:43:21.822680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.931 [2024-07-15 22:43:21.822694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:57.931 qpair failed and we were unable to recover it. 00:26:57.931 [2024-07-15 22:43:21.822892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.931 [2024-07-15 22:43:21.822905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:57.931 qpair failed and we were unable to recover it. 00:26:57.931 [2024-07-15 22:43:21.823181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.931 [2024-07-15 22:43:21.823195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:57.931 qpair failed and we were unable to recover it. 
00:26:57.931 [2024-07-15 22:43:21.823400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.931 [2024-07-15 22:43:21.823415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:57.931 qpair failed and we were unable to recover it. 00:26:57.931 [2024-07-15 22:43:21.823620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.931 [2024-07-15 22:43:21.823634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:57.931 qpair failed and we were unable to recover it. 00:26:57.931 [2024-07-15 22:43:21.823933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.931 [2024-07-15 22:43:21.823948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:57.931 qpair failed and we were unable to recover it. 00:26:57.931 [2024-07-15 22:43:21.824208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.931 [2024-07-15 22:43:21.824222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:57.931 qpair failed and we were unable to recover it. 00:26:57.931 [2024-07-15 22:43:21.824485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.931 [2024-07-15 22:43:21.824500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:57.931 qpair failed and we were unable to recover it. 00:26:57.931 [2024-07-15 22:43:21.824716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.931 [2024-07-15 22:43:21.824730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:57.931 qpair failed and we were unable to recover it. 00:26:57.931 [2024-07-15 22:43:21.825011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.931 [2024-07-15 22:43:21.825024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:57.931 qpair failed and we were unable to recover it. 00:26:57.931 [2024-07-15 22:43:21.825229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.931 [2024-07-15 22:43:21.825243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:57.931 qpair failed and we were unable to recover it. 00:26:57.931 [2024-07-15 22:43:21.825448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.931 [2024-07-15 22:43:21.825462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:57.931 qpair failed and we were unable to recover it. 00:26:57.931 [2024-07-15 22:43:21.825688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.931 [2024-07-15 22:43:21.825702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:57.931 qpair failed and we were unable to recover it. 
00:26:57.931 [2024-07-15 22:43:21.825839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.931 [2024-07-15 22:43:21.825853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:57.931 qpair failed and we were unable to recover it. 00:26:57.931 [2024-07-15 22:43:21.826096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.932 [2024-07-15 22:43:21.826110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:57.932 qpair failed and we were unable to recover it. 00:26:57.932 [2024-07-15 22:43:21.826323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.932 [2024-07-15 22:43:21.826349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.932 qpair failed and we were unable to recover it. 00:26:57.932 [2024-07-15 22:43:21.826606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.932 [2024-07-15 22:43:21.826617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.932 qpair failed and we were unable to recover it. 00:26:57.932 [2024-07-15 22:43:21.826800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.932 [2024-07-15 22:43:21.826811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.932 qpair failed and we were unable to recover it. 00:26:57.932 [2024-07-15 22:43:21.827010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.932 [2024-07-15 22:43:21.827020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.932 qpair failed and we were unable to recover it. 00:26:57.932 [2024-07-15 22:43:21.827303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.932 [2024-07-15 22:43:21.827313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.932 qpair failed and we were unable to recover it. 00:26:57.932 [2024-07-15 22:43:21.827554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.932 [2024-07-15 22:43:21.827564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.932 qpair failed and we were unable to recover it. 00:26:57.932 [2024-07-15 22:43:21.827838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.932 [2024-07-15 22:43:21.827849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.932 qpair failed and we were unable to recover it. 00:26:57.932 [2024-07-15 22:43:21.828029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.932 [2024-07-15 22:43:21.828038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.932 qpair failed and we were unable to recover it. 
00:26:57.932 [2024-07-15 22:43:21.828231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.932 [2024-07-15 22:43:21.828242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.932 qpair failed and we were unable to recover it. 00:26:57.932 [2024-07-15 22:43:21.828440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.932 [2024-07-15 22:43:21.828450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.932 qpair failed and we were unable to recover it. 00:26:57.932 [2024-07-15 22:43:21.828587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.932 [2024-07-15 22:43:21.828597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.932 qpair failed and we were unable to recover it. 00:26:57.932 [2024-07-15 22:43:21.828771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.932 [2024-07-15 22:43:21.828781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.932 qpair failed and we were unable to recover it. 00:26:57.932 [2024-07-15 22:43:21.829010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.932 [2024-07-15 22:43:21.829020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.932 qpair failed and we were unable to recover it. 00:26:57.932 [2024-07-15 22:43:21.829194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.932 [2024-07-15 22:43:21.829209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.932 qpair failed and we were unable to recover it. 00:26:57.932 [2024-07-15 22:43:21.829428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.932 [2024-07-15 22:43:21.829438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.932 qpair failed and we were unable to recover it. 00:26:57.932 [2024-07-15 22:43:21.829634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.932 [2024-07-15 22:43:21.829644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.932 qpair failed and we were unable to recover it. 00:26:57.932 [2024-07-15 22:43:21.829822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.932 [2024-07-15 22:43:21.829832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.932 qpair failed and we were unable to recover it. 00:26:57.932 [2024-07-15 22:43:21.830088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.932 [2024-07-15 22:43:21.830099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.932 qpair failed and we were unable to recover it. 
00:26:57.932 [2024-07-15 22:43:21.830314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.932 [2024-07-15 22:43:21.830324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.932 qpair failed and we were unable to recover it. 00:26:57.932 [2024-07-15 22:43:21.830456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.932 [2024-07-15 22:43:21.830466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.932 qpair failed and we were unable to recover it. 00:26:57.932 [2024-07-15 22:43:21.830600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.932 [2024-07-15 22:43:21.830609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.932 qpair failed and we were unable to recover it. 00:26:57.932 [2024-07-15 22:43:21.830801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.932 [2024-07-15 22:43:21.830811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.932 qpair failed and we were unable to recover it. 00:26:57.932 [2024-07-15 22:43:21.831124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.932 [2024-07-15 22:43:21.831134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.932 qpair failed and we were unable to recover it. 00:26:57.932 [2024-07-15 22:43:21.831376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.932 [2024-07-15 22:43:21.831386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.932 qpair failed and we were unable to recover it. 00:26:57.932 [2024-07-15 22:43:21.831513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.932 [2024-07-15 22:43:21.831523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.932 qpair failed and we were unable to recover it. 00:26:57.932 [2024-07-15 22:43:21.831711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.932 [2024-07-15 22:43:21.831721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.932 qpair failed and we were unable to recover it. 00:26:57.932 [2024-07-15 22:43:21.831932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.932 [2024-07-15 22:43:21.831942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.932 qpair failed and we were unable to recover it. 00:26:57.932 [2024-07-15 22:43:21.832235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.932 [2024-07-15 22:43:21.832245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.932 qpair failed and we were unable to recover it. 
00:26:57.932 [2024-07-15 22:43:21.832438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.932 [2024-07-15 22:43:21.832448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.932 qpair failed and we were unable to recover it. 00:26:57.932 [2024-07-15 22:43:21.832690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.932 [2024-07-15 22:43:21.832700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.932 qpair failed and we were unable to recover it. 00:26:57.932 [2024-07-15 22:43:21.832962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.932 [2024-07-15 22:43:21.832971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.932 qpair failed and we were unable to recover it. 00:26:57.932 [2024-07-15 22:43:21.833234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.932 [2024-07-15 22:43:21.833245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.932 qpair failed and we were unable to recover it. 00:26:57.932 [2024-07-15 22:43:21.833447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.932 [2024-07-15 22:43:21.833457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.932 qpair failed and we were unable to recover it. 00:26:57.932 [2024-07-15 22:43:21.833594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.932 [2024-07-15 22:43:21.833604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.932 qpair failed and we were unable to recover it. 00:26:57.932 [2024-07-15 22:43:21.833801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.933 [2024-07-15 22:43:21.833811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.933 qpair failed and we were unable to recover it. 00:26:57.933 [2024-07-15 22:43:21.833946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.933 [2024-07-15 22:43:21.833955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.933 qpair failed and we were unable to recover it. 00:26:57.933 [2024-07-15 22:43:21.834147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.933 [2024-07-15 22:43:21.834157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.933 qpair failed and we were unable to recover it. 00:26:57.933 [2024-07-15 22:43:21.834360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.933 [2024-07-15 22:43:21.834371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:57.933 qpair failed and we were unable to recover it. 
00:26:57.933 [2024-07-15 22:43:21.834567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:57.933 [2024-07-15 22:43:21.834578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:57.933 qpair failed and we were unable to recover it.
00:26:57.933 [... the same three-line failure sequence (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.) repeats verbatim for every retry from 22:43:21.834723 through 22:43:21.880050, with only the timestamps advancing ...]
00:26:58.212 [2024-07-15 22:43:21.880050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.212 [2024-07-15 22:43:21.880080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:58.212 qpair failed and we were unable to recover it.
00:26:58.212 [2024-07-15 22:43:21.880303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.212 [2024-07-15 22:43:21.880333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.212 qpair failed and we were unable to recover it. 00:26:58.212 [2024-07-15 22:43:21.880613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.212 [2024-07-15 22:43:21.880623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.212 qpair failed and we were unable to recover it. 00:26:58.212 [2024-07-15 22:43:21.880770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.212 [2024-07-15 22:43:21.880780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.212 qpair failed and we were unable to recover it. 00:26:58.212 [2024-07-15 22:43:21.880997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.212 [2024-07-15 22:43:21.881007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.212 qpair failed and we were unable to recover it. 00:26:58.212 [2024-07-15 22:43:21.881130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.212 [2024-07-15 22:43:21.881140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.212 qpair failed and we were unable to recover it. 00:26:58.213 [2024-07-15 22:43:21.881385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.213 [2024-07-15 22:43:21.881396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.213 qpair failed and we were unable to recover it. 00:26:58.213 [2024-07-15 22:43:21.881592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.213 [2024-07-15 22:43:21.881626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.213 qpair failed and we were unable to recover it. 00:26:58.213 [2024-07-15 22:43:21.881792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.213 [2024-07-15 22:43:21.881822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.213 qpair failed and we were unable to recover it. 00:26:58.213 [2024-07-15 22:43:21.882067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.213 [2024-07-15 22:43:21.882096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.213 qpair failed and we were unable to recover it. 00:26:58.213 [2024-07-15 22:43:21.882326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.213 [2024-07-15 22:43:21.882336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.213 qpair failed and we were unable to recover it. 
00:26:58.213 [2024-07-15 22:43:21.882528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.213 [2024-07-15 22:43:21.882538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.213 qpair failed and we were unable to recover it. 00:26:58.213 [2024-07-15 22:43:21.882803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.213 [2024-07-15 22:43:21.882813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.213 qpair failed and we were unable to recover it. 00:26:58.213 [2024-07-15 22:43:21.882941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.213 [2024-07-15 22:43:21.882951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.213 qpair failed and we were unable to recover it. 00:26:58.213 [2024-07-15 22:43:21.883143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.213 [2024-07-15 22:43:21.883176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.213 qpair failed and we were unable to recover it. 00:26:58.213 [2024-07-15 22:43:21.883481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.213 [2024-07-15 22:43:21.883512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.213 qpair failed and we were unable to recover it. 00:26:58.213 [2024-07-15 22:43:21.883749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.213 [2024-07-15 22:43:21.883779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.213 qpair failed and we were unable to recover it. 00:26:58.213 [2024-07-15 22:43:21.884038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.213 [2024-07-15 22:43:21.884068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.213 qpair failed and we were unable to recover it. 00:26:58.213 [2024-07-15 22:43:21.884364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.213 [2024-07-15 22:43:21.884375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.213 qpair failed and we were unable to recover it. 00:26:58.213 [2024-07-15 22:43:21.884520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.213 [2024-07-15 22:43:21.884530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.213 qpair failed and we were unable to recover it. 00:26:58.213 [2024-07-15 22:43:21.884670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.213 [2024-07-15 22:43:21.884680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.213 qpair failed and we were unable to recover it. 
00:26:58.213 [2024-07-15 22:43:21.884969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.213 [2024-07-15 22:43:21.884998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.213 qpair failed and we were unable to recover it. 00:26:58.213 [2024-07-15 22:43:21.885240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.213 [2024-07-15 22:43:21.885271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.213 qpair failed and we were unable to recover it. 00:26:58.213 [2024-07-15 22:43:21.885459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.213 [2024-07-15 22:43:21.885489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.213 qpair failed and we were unable to recover it. 00:26:58.213 [2024-07-15 22:43:21.885676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.213 [2024-07-15 22:43:21.885706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.213 qpair failed and we were unable to recover it. 00:26:58.213 [2024-07-15 22:43:21.885867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.213 [2024-07-15 22:43:21.885877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.213 qpair failed and we were unable to recover it. 00:26:58.213 [2024-07-15 22:43:21.886115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.213 [2024-07-15 22:43:21.886145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.213 qpair failed and we were unable to recover it. 00:26:58.213 [2024-07-15 22:43:21.886440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.213 [2024-07-15 22:43:21.886470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.213 qpair failed and we were unable to recover it. 00:26:58.213 [2024-07-15 22:43:21.886754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.213 [2024-07-15 22:43:21.886784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.213 qpair failed and we were unable to recover it. 00:26:58.213 [2024-07-15 22:43:21.887112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.213 [2024-07-15 22:43:21.887123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.213 qpair failed and we were unable to recover it. 00:26:58.213 [2024-07-15 22:43:21.887369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.213 [2024-07-15 22:43:21.887380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.213 qpair failed and we were unable to recover it. 
00:26:58.213 [2024-07-15 22:43:21.887580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.213 [2024-07-15 22:43:21.887589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.213 qpair failed and we were unable to recover it. 00:26:58.213 [2024-07-15 22:43:21.887858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.213 [2024-07-15 22:43:21.887868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.213 qpair failed and we were unable to recover it. 00:26:58.213 [2024-07-15 22:43:21.888084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.213 [2024-07-15 22:43:21.888096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.213 qpair failed and we were unable to recover it. 00:26:58.213 [2024-07-15 22:43:21.888350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.213 [2024-07-15 22:43:21.888360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.213 qpair failed and we were unable to recover it. 00:26:58.213 [2024-07-15 22:43:21.888552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.213 [2024-07-15 22:43:21.888563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.213 qpair failed and we were unable to recover it. 00:26:58.213 [2024-07-15 22:43:21.888745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.213 [2024-07-15 22:43:21.888755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.213 qpair failed and we were unable to recover it. 00:26:58.213 [2024-07-15 22:43:21.888916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.213 [2024-07-15 22:43:21.888926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.213 qpair failed and we were unable to recover it. 00:26:58.213 [2024-07-15 22:43:21.889113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.213 [2024-07-15 22:43:21.889123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.213 qpair failed and we were unable to recover it. 00:26:58.213 [2024-07-15 22:43:21.889390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.213 [2024-07-15 22:43:21.889401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.213 qpair failed and we were unable to recover it. 00:26:58.213 [2024-07-15 22:43:21.889592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.213 [2024-07-15 22:43:21.889602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.213 qpair failed and we were unable to recover it. 
00:26:58.213 [2024-07-15 22:43:21.889811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.213 [2024-07-15 22:43:21.889822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.213 qpair failed and we were unable to recover it. 00:26:58.213 [2024-07-15 22:43:21.890129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.213 [2024-07-15 22:43:21.890139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.213 qpair failed and we were unable to recover it. 00:26:58.213 [2024-07-15 22:43:21.890280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.213 [2024-07-15 22:43:21.890290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.213 qpair failed and we were unable to recover it. 00:26:58.213 [2024-07-15 22:43:21.890484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.213 [2024-07-15 22:43:21.890493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.213 qpair failed and we were unable to recover it. 00:26:58.213 [2024-07-15 22:43:21.890701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.213 [2024-07-15 22:43:21.890711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.213 qpair failed and we were unable to recover it. 00:26:58.213 [2024-07-15 22:43:21.890893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.213 [2024-07-15 22:43:21.890903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.213 qpair failed and we were unable to recover it. 00:26:58.213 [2024-07-15 22:43:21.891123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.213 [2024-07-15 22:43:21.891136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.213 qpair failed and we were unable to recover it. 00:26:58.213 [2024-07-15 22:43:21.891290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.213 [2024-07-15 22:43:21.891301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.213 qpair failed and we were unable to recover it. 00:26:58.213 [2024-07-15 22:43:21.891501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.213 [2024-07-15 22:43:21.891511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.213 qpair failed and we were unable to recover it. 00:26:58.213 [2024-07-15 22:43:21.891783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.213 [2024-07-15 22:43:21.891794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.213 qpair failed and we were unable to recover it. 
00:26:58.213 [2024-07-15 22:43:21.891934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.213 [2024-07-15 22:43:21.891944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.213 qpair failed and we were unable to recover it. 00:26:58.213 [2024-07-15 22:43:21.892189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.213 [2024-07-15 22:43:21.892199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.213 qpair failed and we were unable to recover it. 00:26:58.213 [2024-07-15 22:43:21.892462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.213 [2024-07-15 22:43:21.892472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.213 qpair failed and we were unable to recover it. 00:26:58.213 [2024-07-15 22:43:21.892655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.213 [2024-07-15 22:43:21.892665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.213 qpair failed and we were unable to recover it. 00:26:58.213 [2024-07-15 22:43:21.892865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.214 [2024-07-15 22:43:21.892875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.214 qpair failed and we were unable to recover it. 00:26:58.214 [2024-07-15 22:43:21.893159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.214 [2024-07-15 22:43:21.893170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.214 qpair failed and we were unable to recover it. 00:26:58.214 [2024-07-15 22:43:21.893365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.214 [2024-07-15 22:43:21.893376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.214 qpair failed and we were unable to recover it. 00:26:58.214 [2024-07-15 22:43:21.893587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.214 [2024-07-15 22:43:21.893598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.214 qpair failed and we were unable to recover it. 00:26:58.214 [2024-07-15 22:43:21.893737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.214 [2024-07-15 22:43:21.893747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.214 qpair failed and we were unable to recover it. 00:26:58.214 [2024-07-15 22:43:21.893936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.214 [2024-07-15 22:43:21.893947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.214 qpair failed and we were unable to recover it. 
00:26:58.214 [2024-07-15 22:43:21.894185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.214 [2024-07-15 22:43:21.894215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.214 qpair failed and we were unable to recover it. 00:26:58.214 [2024-07-15 22:43:21.894533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.214 [2024-07-15 22:43:21.894563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.214 qpair failed and we were unable to recover it. 00:26:58.214 [2024-07-15 22:43:21.894793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.214 [2024-07-15 22:43:21.894823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.214 qpair failed and we were unable to recover it. 00:26:58.214 [2024-07-15 22:43:21.895058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.214 [2024-07-15 22:43:21.895088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.214 qpair failed and we were unable to recover it. 00:26:58.214 [2024-07-15 22:43:21.895272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.214 [2024-07-15 22:43:21.895283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.214 qpair failed and we were unable to recover it. 00:26:58.214 [2024-07-15 22:43:21.895508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.214 [2024-07-15 22:43:21.895538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.214 qpair failed and we were unable to recover it. 00:26:58.214 [2024-07-15 22:43:21.895723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.214 [2024-07-15 22:43:21.895752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.214 qpair failed and we were unable to recover it. 00:26:58.214 [2024-07-15 22:43:21.896275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.214 [2024-07-15 22:43:21.896294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.214 qpair failed and we were unable to recover it. 00:26:58.214 [2024-07-15 22:43:21.896500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.214 [2024-07-15 22:43:21.896511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.214 qpair failed and we were unable to recover it. 00:26:58.214 [2024-07-15 22:43:21.896754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.214 [2024-07-15 22:43:21.896765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.214 qpair failed and we were unable to recover it. 
00:26:58.214 [2024-07-15 22:43:21.896995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.214 [2024-07-15 22:43:21.897008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.214 qpair failed and we were unable to recover it. 00:26:58.214 [2024-07-15 22:43:21.897210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.214 [2024-07-15 22:43:21.897221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.214 qpair failed and we were unable to recover it. 00:26:58.214 [2024-07-15 22:43:21.897522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.214 [2024-07-15 22:43:21.897536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.214 qpair failed and we were unable to recover it. 00:26:58.214 [2024-07-15 22:43:21.897682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.214 [2024-07-15 22:43:21.897692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.214 qpair failed and we were unable to recover it. 00:26:58.214 [2024-07-15 22:43:21.897896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.214 [2024-07-15 22:43:21.897928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.214 qpair failed and we were unable to recover it. 00:26:58.214 [2024-07-15 22:43:21.898162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.214 [2024-07-15 22:43:21.898192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.214 qpair failed and we were unable to recover it. 00:26:58.214 [2024-07-15 22:43:21.898483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.214 [2024-07-15 22:43:21.898515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.214 qpair failed and we were unable to recover it. 00:26:58.214 [2024-07-15 22:43:21.898746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.214 [2024-07-15 22:43:21.898775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.214 qpair failed and we were unable to recover it. 00:26:58.214 [2024-07-15 22:43:21.898949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.214 [2024-07-15 22:43:21.898979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.214 qpair failed and we were unable to recover it. 00:26:58.214 [2024-07-15 22:43:21.899156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.214 [2024-07-15 22:43:21.899186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.214 qpair failed and we were unable to recover it. 
00:26:58.214 [2024-07-15 22:43:21.899478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.214 [2024-07-15 22:43:21.899510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.214 qpair failed and we were unable to recover it. 00:26:58.214 [2024-07-15 22:43:21.899745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.214 [2024-07-15 22:43:21.899774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.214 qpair failed and we were unable to recover it. 00:26:58.214 [2024-07-15 22:43:21.900142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.214 [2024-07-15 22:43:21.900172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.214 qpair failed and we were unable to recover it. 00:26:58.214 [2024-07-15 22:43:21.900503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.214 [2024-07-15 22:43:21.900534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.214 qpair failed and we were unable to recover it. 00:26:58.214 [2024-07-15 22:43:21.900764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.214 [2024-07-15 22:43:21.900794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.214 qpair failed and we were unable to recover it. 00:26:58.214 [2024-07-15 22:43:21.901164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.214 [2024-07-15 22:43:21.901194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.214 qpair failed and we were unable to recover it. 00:26:58.214 [2024-07-15 22:43:21.901567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.214 [2024-07-15 22:43:21.901599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.214 qpair failed and we were unable to recover it. 00:26:58.214 [2024-07-15 22:43:21.901786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.214 [2024-07-15 22:43:21.901815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.214 qpair failed and we were unable to recover it. 00:26:58.214 [2024-07-15 22:43:21.902058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.214 [2024-07-15 22:43:21.902068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.214 qpair failed and we were unable to recover it. 00:26:58.214 [2024-07-15 22:43:21.902284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.214 [2024-07-15 22:43:21.902294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.214 qpair failed and we were unable to recover it. 
00:26:58.214 [2024-07-15 22:43:21.902482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.214 [2024-07-15 22:43:21.902493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.214 qpair failed and we were unable to recover it. 00:26:58.214 [2024-07-15 22:43:21.902614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.214 [2024-07-15 22:43:21.902624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.214 qpair failed and we were unable to recover it. 00:26:58.214 [2024-07-15 22:43:21.902864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.214 [2024-07-15 22:43:21.902875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.214 qpair failed and we were unable to recover it. 00:26:58.214 [2024-07-15 22:43:21.903024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.214 [2024-07-15 22:43:21.903035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.214 qpair failed and we were unable to recover it. 00:26:58.214 [2024-07-15 22:43:21.903286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.214 [2024-07-15 22:43:21.903316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.214 qpair failed and we were unable to recover it. 00:26:58.214 [2024-07-15 22:43:21.903555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.214 [2024-07-15 22:43:21.903585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.214 qpair failed and we were unable to recover it. 00:26:58.214 [2024-07-15 22:43:21.903812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.214 [2024-07-15 22:43:21.903842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.214 qpair failed and we were unable to recover it. 00:26:58.214 [2024-07-15 22:43:21.904070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.214 [2024-07-15 22:43:21.904099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.214 qpair failed and we were unable to recover it. 00:26:58.214 [2024-07-15 22:43:21.904322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.214 [2024-07-15 22:43:21.904352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.214 qpair failed and we were unable to recover it. 
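errno = 111 in the records above is ECONNREFUSED on Linux: every connect() to 10.0.0.2:4420 (the NVMe/TCP well-known port) is being actively refused, meaning the host answered but nothing was listening on that port when the initiator retried. The standalone sketch below is not SPDK's posix_sock_create(); it is a minimal illustration of where the errno in these records comes from, with the address and port taken from the log itself.

/*
 * Minimal sketch, not SPDK source: reproduce the "connect() failed,
 * errno = 111" record. On Linux errno 111 is ECONNREFUSED, returned
 * when the peer (here 10.0.0.2:4420, from the log above) resets the
 * connection attempt because no listener is bound to that port.
 */
#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <unistd.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <sys/socket.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in sa = {
        .sin_family = AF_INET,
        .sin_port   = htons(4420),   /* NVMe/TCP well-known port */
    };
    inet_pton(AF_INET, "10.0.0.2", &sa.sin_addr);

    if (connect(fd, (struct sockaddr *)&sa, sizeof(sa)) != 0) {
        /* While the target is down this prints the same condition the
         * log shows: connect() failed, errno = 111 (Connection refused) */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }

    close(fd);
    return 0;
}

Assuming netcat is available on the test node, `nc -zv 10.0.0.2 4420` would confirm the same condition from the shell, reporting "Connection refused" until a target listener comes back up.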
00:26:58.214 [2024-07-15 22:43:21.904403] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19cc000 (9): Bad file descriptor
00:26:58.214 [2024-07-15 22:43:21.904681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.214 [2024-07-15 22:43:21.904749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420
00:26:58.214 qpair failed and we were unable to recover it.
[... identical connect() failed (errno = 111) / sock connection error / qpair failed records repeat for tqpair=0x19bded0 ...]
00:26:58.214 [2024-07-15 22:43:21.906517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.214 [2024-07-15 22:43:21.906547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420
00:26:58.214 qpair failed and we were unable to recover it.
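The flush failure above reports errno 9, which is EBADF on Linux: by the time nvme_tcp_qpair_process_completions() tried to flush tqpair=0x19cc000, the qpair's socket descriptor had already been torn down. A minimal standalone sketch of that errno (again not SPDK code) follows:

/*
 * Minimal sketch, not SPDK source: errno 9 (EBADF, "Bad file
 * descriptor") is what an I/O call returns when its descriptor was
 * already closed -- consistent with flushing a qpair whose socket
 * was released after the failed connect attempts above.
 */
#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <unistd.h>

int main(void)
{
    int fds[2];
    if (pipe(fds) != 0) {
        perror("pipe");
        return 1;
    }

    close(fds[1]);                  /* descriptor torn down first ... */

    if (write(fds[1], "x", 1) < 0)  /* ... then used anyway */
        printf("write failed, errno = %d (%s)\n",
               errno, strerror(errno));  /* errno = 9 (Bad file descriptor) */

    close(fds[0]);
    return 0;
}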
00:26:58.214 [2024-07-15 22:43:21.906788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.214 [2024-07-15 22:43:21.906821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.214 qpair failed and we were unable to recover it. 00:26:58.214 [2024-07-15 22:43:21.907017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.214 [2024-07-15 22:43:21.907027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.214 qpair failed and we were unable to recover it. 00:26:58.214 [2024-07-15 22:43:21.907145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.214 [2024-07-15 22:43:21.907155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.214 qpair failed and we were unable to recover it. 00:26:58.214 [2024-07-15 22:43:21.907368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.215 [2024-07-15 22:43:21.907379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.215 qpair failed and we were unable to recover it. 00:26:58.215 [2024-07-15 22:43:21.907539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.215 [2024-07-15 22:43:21.907549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.215 qpair failed and we were unable to recover it. 00:26:58.215 [2024-07-15 22:43:21.907734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.215 [2024-07-15 22:43:21.907744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.215 qpair failed and we were unable to recover it. 00:26:58.215 [2024-07-15 22:43:21.908030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.215 [2024-07-15 22:43:21.908040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.215 qpair failed and we were unable to recover it. 00:26:58.215 [2024-07-15 22:43:21.908285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.215 [2024-07-15 22:43:21.908295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.215 qpair failed and we were unable to recover it. 00:26:58.215 [2024-07-15 22:43:21.908435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.215 [2024-07-15 22:43:21.908445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.215 qpair failed and we were unable to recover it. 00:26:58.215 [2024-07-15 22:43:21.908658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.215 [2024-07-15 22:43:21.908668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.215 qpair failed and we were unable to recover it. 
00:26:58.215 [2024-07-15 22:43:21.908858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.215 [2024-07-15 22:43:21.908869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.215 qpair failed and we were unable to recover it. 00:26:58.215 [2024-07-15 22:43:21.909051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.215 [2024-07-15 22:43:21.909060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.215 qpair failed and we were unable to recover it. 00:26:58.215 [2024-07-15 22:43:21.909266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.215 [2024-07-15 22:43:21.909277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.215 qpair failed and we were unable to recover it. 00:26:58.215 [2024-07-15 22:43:21.909416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.215 [2024-07-15 22:43:21.909426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.215 qpair failed and we were unable to recover it. 00:26:58.215 [2024-07-15 22:43:21.909623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.215 [2024-07-15 22:43:21.909633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.215 qpair failed and we were unable to recover it. 00:26:58.215 [2024-07-15 22:43:21.909780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.215 [2024-07-15 22:43:21.909790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.215 qpair failed and we were unable to recover it. 00:26:58.215 [2024-07-15 22:43:21.910091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.215 [2024-07-15 22:43:21.910101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.215 qpair failed and we were unable to recover it. 00:26:58.215 [2024-07-15 22:43:21.910419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.215 [2024-07-15 22:43:21.910432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.215 qpair failed and we were unable to recover it. 00:26:58.215 [2024-07-15 22:43:21.910608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.215 [2024-07-15 22:43:21.910618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.215 qpair failed and we were unable to recover it. 00:26:58.215 [2024-07-15 22:43:21.910745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.215 [2024-07-15 22:43:21.910755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.215 qpair failed and we were unable to recover it. 
00:26:58.215 [2024-07-15 22:43:21.911049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.215 [2024-07-15 22:43:21.911059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.215 qpair failed and we were unable to recover it. 00:26:58.215 [2024-07-15 22:43:21.911253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.215 [2024-07-15 22:43:21.911264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.215 qpair failed and we were unable to recover it. 00:26:58.215 [2024-07-15 22:43:21.911452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.215 [2024-07-15 22:43:21.911462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.215 qpair failed and we were unable to recover it. 00:26:58.215 [2024-07-15 22:43:21.911593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.215 [2024-07-15 22:43:21.911603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.215 qpair failed and we were unable to recover it. 00:26:58.215 [2024-07-15 22:43:21.911723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.215 [2024-07-15 22:43:21.911732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.215 qpair failed and we were unable to recover it. 00:26:58.215 [2024-07-15 22:43:21.911995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.215 [2024-07-15 22:43:21.912005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.215 qpair failed and we were unable to recover it. 00:26:58.215 [2024-07-15 22:43:21.912200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.215 [2024-07-15 22:43:21.912210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.215 qpair failed and we were unable to recover it. 00:26:58.215 [2024-07-15 22:43:21.912358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.215 [2024-07-15 22:43:21.912369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.215 qpair failed and we were unable to recover it. 00:26:58.215 [2024-07-15 22:43:21.912561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.215 [2024-07-15 22:43:21.912571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.215 qpair failed and we were unable to recover it. 00:26:58.215 [2024-07-15 22:43:21.912757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.215 [2024-07-15 22:43:21.912767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.215 qpair failed and we were unable to recover it. 
00:26:58.215 [2024-07-15 22:43:21.916639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.215 [2024-07-15 22:43:21.916707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420
00:26:58.215 qpair failed and we were unable to recover it.
00:26:58.216 [2024-07-15 22:43:21.922573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.216 [2024-07-15 22:43:21.922641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:58.216 qpair failed and we were unable to recover it.
00:26:58.218 [2024-07-15 22:43:21.967102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.218 [2024-07-15 22:43:21.967115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.218 qpair failed and we were unable to recover it. 00:26:58.218 [2024-07-15 22:43:21.967332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.218 [2024-07-15 22:43:21.967345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.218 qpair failed and we were unable to recover it. 00:26:58.218 [2024-07-15 22:43:21.967643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.218 [2024-07-15 22:43:21.967656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.218 qpair failed and we were unable to recover it. 00:26:58.218 [2024-07-15 22:43:21.967955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.218 [2024-07-15 22:43:21.967968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.218 qpair failed and we were unable to recover it. 00:26:58.218 [2024-07-15 22:43:21.968185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.218 [2024-07-15 22:43:21.968198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.218 qpair failed and we were unable to recover it. 00:26:58.218 [2024-07-15 22:43:21.968406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.218 [2024-07-15 22:43:21.968420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.218 qpair failed and we were unable to recover it. 00:26:58.218 [2024-07-15 22:43:21.968737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.219 [2024-07-15 22:43:21.968751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.219 qpair failed and we were unable to recover it. 00:26:58.219 [2024-07-15 22:43:21.969052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.219 [2024-07-15 22:43:21.969082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.219 qpair failed and we were unable to recover it. 00:26:58.219 [2024-07-15 22:43:21.969285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.219 [2024-07-15 22:43:21.969321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.219 qpair failed and we were unable to recover it. 00:26:58.219 [2024-07-15 22:43:21.969557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.219 [2024-07-15 22:43:21.969571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.219 qpair failed and we were unable to recover it. 
00:26:58.219 [2024-07-15 22:43:21.969773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.219 [2024-07-15 22:43:21.969786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.219 qpair failed and we were unable to recover it. 00:26:58.219 [2024-07-15 22:43:21.970064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.219 [2024-07-15 22:43:21.970077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.219 qpair failed and we were unable to recover it. 00:26:58.219 [2024-07-15 22:43:21.970326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.219 [2024-07-15 22:43:21.970340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.219 qpair failed and we were unable to recover it. 00:26:58.219 [2024-07-15 22:43:21.970589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.219 [2024-07-15 22:43:21.970603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.219 qpair failed and we were unable to recover it. 00:26:58.219 [2024-07-15 22:43:21.970872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.219 [2024-07-15 22:43:21.970886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.219 qpair failed and we were unable to recover it. 00:26:58.219 [2024-07-15 22:43:21.971151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.219 [2024-07-15 22:43:21.971164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.219 qpair failed and we were unable to recover it. 00:26:58.219 [2024-07-15 22:43:21.971442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.219 [2024-07-15 22:43:21.971456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.219 qpair failed and we were unable to recover it. 00:26:58.219 [2024-07-15 22:43:21.971666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.219 [2024-07-15 22:43:21.971680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.219 qpair failed and we were unable to recover it. 00:26:58.219 [2024-07-15 22:43:21.971948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.219 [2024-07-15 22:43:21.971961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.219 qpair failed and we were unable to recover it. 00:26:58.219 [2024-07-15 22:43:21.972245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.219 [2024-07-15 22:43:21.972260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.219 qpair failed and we were unable to recover it. 
00:26:58.219 [2024-07-15 22:43:21.972508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.219 [2024-07-15 22:43:21.972522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.219 qpair failed and we were unable to recover it. 00:26:58.219 [2024-07-15 22:43:21.972703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.219 [2024-07-15 22:43:21.972716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.219 qpair failed and we were unable to recover it. 00:26:58.219 [2024-07-15 22:43:21.972946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.219 [2024-07-15 22:43:21.972959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.219 qpair failed and we were unable to recover it. 00:26:58.219 [2024-07-15 22:43:21.973142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.219 [2024-07-15 22:43:21.973155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.219 qpair failed and we were unable to recover it. 00:26:58.219 [2024-07-15 22:43:21.973392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.219 [2024-07-15 22:43:21.973410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.219 qpair failed and we were unable to recover it. 00:26:58.219 [2024-07-15 22:43:21.973625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.219 [2024-07-15 22:43:21.973655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.219 qpair failed and we were unable to recover it. 00:26:58.219 [2024-07-15 22:43:21.973997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.219 [2024-07-15 22:43:21.974026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.219 qpair failed and we were unable to recover it. 00:26:58.219 [2024-07-15 22:43:21.974261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.219 [2024-07-15 22:43:21.974291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.219 qpair failed and we were unable to recover it. 00:26:58.219 [2024-07-15 22:43:21.974530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.219 [2024-07-15 22:43:21.974559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.219 qpair failed and we were unable to recover it. 00:26:58.219 [2024-07-15 22:43:21.974811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.219 [2024-07-15 22:43:21.974840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.219 qpair failed and we were unable to recover it. 
00:26:58.219 [2024-07-15 22:43:21.975147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.219 [2024-07-15 22:43:21.975176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.219 qpair failed and we were unable to recover it. 00:26:58.219 [2024-07-15 22:43:21.975347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.219 [2024-07-15 22:43:21.975361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.219 qpair failed and we were unable to recover it. 00:26:58.219 [2024-07-15 22:43:21.975556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.219 [2024-07-15 22:43:21.975586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.219 qpair failed and we were unable to recover it. 00:26:58.219 [2024-07-15 22:43:21.975771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.219 [2024-07-15 22:43:21.975800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.219 qpair failed and we were unable to recover it. 00:26:58.219 [2024-07-15 22:43:21.976147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.219 [2024-07-15 22:43:21.976160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.219 qpair failed and we were unable to recover it. 00:26:58.219 [2024-07-15 22:43:21.976361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.219 [2024-07-15 22:43:21.976375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.219 qpair failed and we were unable to recover it. 00:26:58.219 [2024-07-15 22:43:21.976558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.219 [2024-07-15 22:43:21.976572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.219 qpair failed and we were unable to recover it. 00:26:58.219 [2024-07-15 22:43:21.976830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.219 [2024-07-15 22:43:21.976859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.219 qpair failed and we were unable to recover it. 00:26:58.219 [2024-07-15 22:43:21.977166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.219 [2024-07-15 22:43:21.977196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.219 qpair failed and we were unable to recover it. 00:26:58.219 [2024-07-15 22:43:21.977506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.219 [2024-07-15 22:43:21.977538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.219 qpair failed and we were unable to recover it. 
00:26:58.219 [2024-07-15 22:43:21.977846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.219 [2024-07-15 22:43:21.977875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.219 qpair failed and we were unable to recover it. 00:26:58.219 [2024-07-15 22:43:21.978180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.219 [2024-07-15 22:43:21.978209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.219 qpair failed and we were unable to recover it. 00:26:58.219 [2024-07-15 22:43:21.978474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.219 [2024-07-15 22:43:21.978505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.219 qpair failed and we were unable to recover it. 00:26:58.219 [2024-07-15 22:43:21.978723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.219 [2024-07-15 22:43:21.978753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.219 qpair failed and we were unable to recover it. 00:26:58.219 [2024-07-15 22:43:21.979047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.219 [2024-07-15 22:43:21.979077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.219 qpair failed and we were unable to recover it. 00:26:58.219 [2024-07-15 22:43:21.979406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.219 [2024-07-15 22:43:21.979437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.219 qpair failed and we were unable to recover it. 00:26:58.219 [2024-07-15 22:43:21.979770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.219 [2024-07-15 22:43:21.979800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.219 qpair failed and we were unable to recover it. 00:26:58.219 [2024-07-15 22:43:21.980034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.219 [2024-07-15 22:43:21.980064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.219 qpair failed and we were unable to recover it. 00:26:58.219 [2024-07-15 22:43:21.980283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.219 [2024-07-15 22:43:21.980314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.219 qpair failed and we were unable to recover it. 00:26:58.219 [2024-07-15 22:43:21.980530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.219 [2024-07-15 22:43:21.980544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.219 qpair failed and we were unable to recover it. 
00:26:58.219 [2024-07-15 22:43:21.980792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.219 [2024-07-15 22:43:21.980805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.219 qpair failed and we were unable to recover it. 00:26:58.219 [2024-07-15 22:43:21.981080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.219 [2024-07-15 22:43:21.981093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.219 qpair failed and we were unable to recover it. 00:26:58.219 [2024-07-15 22:43:21.981279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.219 [2024-07-15 22:43:21.981294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.219 qpair failed and we were unable to recover it. 00:26:58.219 [2024-07-15 22:43:21.981603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.219 [2024-07-15 22:43:21.981633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.219 qpair failed and we were unable to recover it. 00:26:58.219 [2024-07-15 22:43:21.981969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.219 [2024-07-15 22:43:21.981998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.219 qpair failed and we were unable to recover it. 00:26:58.219 [2024-07-15 22:43:21.982257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.219 [2024-07-15 22:43:21.982287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.219 qpair failed and we were unable to recover it. 00:26:58.220 [2024-07-15 22:43:21.982619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.220 [2024-07-15 22:43:21.982649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.220 qpair failed and we were unable to recover it. 00:26:58.220 [2024-07-15 22:43:21.982866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.220 [2024-07-15 22:43:21.982896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.220 qpair failed and we were unable to recover it. 00:26:58.220 [2024-07-15 22:43:21.983073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.220 [2024-07-15 22:43:21.983103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.220 qpair failed and we were unable to recover it. 00:26:58.220 [2024-07-15 22:43:21.983275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.220 [2024-07-15 22:43:21.983289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.220 qpair failed and we were unable to recover it. 
00:26:58.220 [2024-07-15 22:43:21.983419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.220 [2024-07-15 22:43:21.983433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.220 qpair failed and we were unable to recover it. 00:26:58.220 [2024-07-15 22:43:21.983655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.220 [2024-07-15 22:43:21.983669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.220 qpair failed and we were unable to recover it. 00:26:58.220 [2024-07-15 22:43:21.983865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.220 [2024-07-15 22:43:21.983879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.220 qpair failed and we were unable to recover it. 00:26:58.220 [2024-07-15 22:43:21.984111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.220 [2024-07-15 22:43:21.984125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.220 qpair failed and we were unable to recover it. 00:26:58.220 [2024-07-15 22:43:21.984308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.220 [2024-07-15 22:43:21.984325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.220 qpair failed and we were unable to recover it. 00:26:58.220 [2024-07-15 22:43:21.984573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.220 [2024-07-15 22:43:21.984603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.220 qpair failed and we were unable to recover it. 00:26:58.220 [2024-07-15 22:43:21.984817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.220 [2024-07-15 22:43:21.984847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.220 qpair failed and we were unable to recover it. 00:26:58.220 [2024-07-15 22:43:21.985157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.220 [2024-07-15 22:43:21.985187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.220 qpair failed and we were unable to recover it. 00:26:58.220 [2024-07-15 22:43:21.985431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.220 [2024-07-15 22:43:21.985445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.220 qpair failed and we were unable to recover it. 00:26:58.220 [2024-07-15 22:43:21.985579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.220 [2024-07-15 22:43:21.985592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.220 qpair failed and we were unable to recover it. 
00:26:58.220 [2024-07-15 22:43:21.985743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.220 [2024-07-15 22:43:21.985756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.220 qpair failed and we were unable to recover it. 00:26:58.220 [2024-07-15 22:43:21.986036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.220 [2024-07-15 22:43:21.986065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.220 qpair failed and we were unable to recover it. 00:26:58.220 [2024-07-15 22:43:21.986305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.220 [2024-07-15 22:43:21.986335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.220 qpair failed and we were unable to recover it. 00:26:58.220 [2024-07-15 22:43:21.986635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.220 [2024-07-15 22:43:21.986665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.220 qpair failed and we were unable to recover it. 00:26:58.220 [2024-07-15 22:43:21.986947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.220 [2024-07-15 22:43:21.986961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.220 qpair failed and we were unable to recover it. 00:26:58.220 [2024-07-15 22:43:21.987149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.220 [2024-07-15 22:43:21.987164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.220 qpair failed and we were unable to recover it. 00:26:58.220 [2024-07-15 22:43:21.987355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.220 [2024-07-15 22:43:21.987370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.220 qpair failed and we were unable to recover it. 00:26:58.220 [2024-07-15 22:43:21.987646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.220 [2024-07-15 22:43:21.987676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.220 qpair failed and we were unable to recover it. 00:26:58.220 [2024-07-15 22:43:21.987917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.220 [2024-07-15 22:43:21.987947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.220 qpair failed and we were unable to recover it. 00:26:58.220 [2024-07-15 22:43:21.988244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.220 [2024-07-15 22:43:21.988275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.220 qpair failed and we were unable to recover it. 
00:26:58.220 [2024-07-15 22:43:21.988575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.220 [2024-07-15 22:43:21.988605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.220 qpair failed and we were unable to recover it. 00:26:58.220 [2024-07-15 22:43:21.988968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.220 [2024-07-15 22:43:21.988997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.220 qpair failed and we were unable to recover it. 00:26:58.220 [2024-07-15 22:43:21.989247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.220 [2024-07-15 22:43:21.989261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.220 qpair failed and we were unable to recover it. 00:26:58.220 [2024-07-15 22:43:21.989459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.220 [2024-07-15 22:43:21.989473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.220 qpair failed and we were unable to recover it. 00:26:58.220 [2024-07-15 22:43:21.989610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.220 [2024-07-15 22:43:21.989623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.220 qpair failed and we were unable to recover it. 00:26:58.220 [2024-07-15 22:43:21.989929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.220 [2024-07-15 22:43:21.989959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.220 qpair failed and we were unable to recover it. 00:26:58.220 [2024-07-15 22:43:21.990243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.220 [2024-07-15 22:43:21.990273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.220 qpair failed and we were unable to recover it. 00:26:58.220 [2024-07-15 22:43:21.990518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.220 [2024-07-15 22:43:21.990548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.220 qpair failed and we were unable to recover it. 00:26:58.220 [2024-07-15 22:43:21.990801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.220 [2024-07-15 22:43:21.990832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.220 qpair failed and we were unable to recover it. 00:26:58.220 [2024-07-15 22:43:21.991140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.220 [2024-07-15 22:43:21.991169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.220 qpair failed and we were unable to recover it. 
00:26:58.220 [2024-07-15 22:43:21.991414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.220 [2024-07-15 22:43:21.991446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.220 qpair failed and we were unable to recover it. 00:26:58.220 [2024-07-15 22:43:21.991779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.220 [2024-07-15 22:43:21.991810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.220 qpair failed and we were unable to recover it. 00:26:58.220 [2024-07-15 22:43:21.992096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.220 [2024-07-15 22:43:21.992126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.220 qpair failed and we were unable to recover it. 00:26:58.220 [2024-07-15 22:43:21.992405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.220 [2024-07-15 22:43:21.992420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.220 qpair failed and we were unable to recover it. 00:26:58.220 [2024-07-15 22:43:21.992607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.220 [2024-07-15 22:43:21.992621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.220 qpair failed and we were unable to recover it. 00:26:58.220 [2024-07-15 22:43:21.992831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.220 [2024-07-15 22:43:21.992845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.220 qpair failed and we were unable to recover it. 00:26:58.220 [2024-07-15 22:43:21.993048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.220 [2024-07-15 22:43:21.993061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.220 qpair failed and we were unable to recover it. 00:26:58.220 [2024-07-15 22:43:21.993298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.220 [2024-07-15 22:43:21.993313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.220 qpair failed and we were unable to recover it. 00:26:58.220 [2024-07-15 22:43:21.993591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.220 [2024-07-15 22:43:21.993604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.220 qpair failed and we were unable to recover it. 00:26:58.220 [2024-07-15 22:43:21.993797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.220 [2024-07-15 22:43:21.993810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.220 qpair failed and we were unable to recover it. 
00:26:58.220 [2024-07-15 22:43:21.994038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.220 [2024-07-15 22:43:21.994051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.220 qpair failed and we were unable to recover it. 00:26:58.220 [2024-07-15 22:43:21.994274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.220 [2024-07-15 22:43:21.994288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.220 qpair failed and we were unable to recover it. 00:26:58.220 [2024-07-15 22:43:21.994485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.220 [2024-07-15 22:43:21.994498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.220 qpair failed and we were unable to recover it. 00:26:58.220 [2024-07-15 22:43:21.994698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.220 [2024-07-15 22:43:21.994711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.220 qpair failed and we were unable to recover it. 00:26:58.220 [2024-07-15 22:43:21.994967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.220 [2024-07-15 22:43:21.995002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.220 qpair failed and we were unable to recover it. 00:26:58.220 [2024-07-15 22:43:21.995308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.220 [2024-07-15 22:43:21.995322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.220 qpair failed and we were unable to recover it. 00:26:58.220 [2024-07-15 22:43:21.995542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.220 [2024-07-15 22:43:21.995556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.220 qpair failed and we were unable to recover it. 00:26:58.220 [2024-07-15 22:43:21.995779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.220 [2024-07-15 22:43:21.995792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.220 qpair failed and we were unable to recover it. 00:26:58.220 [2024-07-15 22:43:21.995993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.220 [2024-07-15 22:43:21.996007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.220 qpair failed and we were unable to recover it. 00:26:58.220 [2024-07-15 22:43:21.996156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.220 [2024-07-15 22:43:21.996169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.220 qpair failed and we were unable to recover it. 
00:26:58.220 [2024-07-15 22:43:21.996466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.220 [2024-07-15 22:43:21.996497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.220 qpair failed and we were unable to recover it. 00:26:58.220 [2024-07-15 22:43:21.996740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.221 [2024-07-15 22:43:21.996769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.221 qpair failed and we were unable to recover it. 00:26:58.221 [2024-07-15 22:43:21.997018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.221 [2024-07-15 22:43:21.997047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.221 qpair failed and we were unable to recover it. 00:26:58.221 [2024-07-15 22:43:21.997295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.221 [2024-07-15 22:43:21.997326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.221 qpair failed and we were unable to recover it. 00:26:58.221 [2024-07-15 22:43:21.997618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.221 [2024-07-15 22:43:21.997647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.221 qpair failed and we were unable to recover it. 00:26:58.221 [2024-07-15 22:43:21.997892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.221 [2024-07-15 22:43:21.997922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.221 qpair failed and we were unable to recover it. 00:26:58.221 [2024-07-15 22:43:21.998236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.221 [2024-07-15 22:43:21.998266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.221 qpair failed and we were unable to recover it. 00:26:58.221 [2024-07-15 22:43:21.998561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.221 [2024-07-15 22:43:21.998590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.221 qpair failed and we were unable to recover it. 00:26:58.221 [2024-07-15 22:43:21.998906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.221 [2024-07-15 22:43:21.998936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.221 qpair failed and we were unable to recover it. 00:26:58.221 [2024-07-15 22:43:21.999162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.221 [2024-07-15 22:43:21.999192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.221 qpair failed and we were unable to recover it. 
00:26:58.221 [2024-07-15 22:43:21.999770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.221 [2024-07-15 22:43:21.999791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.221 qpair failed and we were unable to recover it. 00:26:58.221 [2024-07-15 22:43:22.000078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.221 [2024-07-15 22:43:22.000092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.221 qpair failed and we were unable to recover it. 00:26:58.221 [2024-07-15 22:43:22.000282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.221 [2024-07-15 22:43:22.000297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.221 qpair failed and we were unable to recover it. 00:26:58.221 [2024-07-15 22:43:22.000501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.221 [2024-07-15 22:43:22.000515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.221 qpair failed and we were unable to recover it. 00:26:58.221 [2024-07-15 22:43:22.000700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.221 [2024-07-15 22:43:22.000714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.221 qpair failed and we were unable to recover it. 00:26:58.221 [2024-07-15 22:43:22.000921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.221 [2024-07-15 22:43:22.000934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.221 qpair failed and we were unable to recover it. 00:26:58.221 [2024-07-15 22:43:22.001185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.221 [2024-07-15 22:43:22.001199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.221 qpair failed and we were unable to recover it. 00:26:58.221 [2024-07-15 22:43:22.001417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.221 [2024-07-15 22:43:22.001431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.221 qpair failed and we were unable to recover it. 00:26:58.221 [2024-07-15 22:43:22.001623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.221 [2024-07-15 22:43:22.001637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.221 qpair failed and we were unable to recover it. 00:26:58.221 [2024-07-15 22:43:22.001790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.221 [2024-07-15 22:43:22.001803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.221 qpair failed and we were unable to recover it. 
00:26:58.221 [2024-07-15 22:43:22.002108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.221 [2024-07-15 22:43:22.002122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.221 qpair failed and we were unable to recover it. 00:26:58.221 [2024-07-15 22:43:22.002320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.221 [2024-07-15 22:43:22.002334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.221 qpair failed and we were unable to recover it. 00:26:58.221 [2024-07-15 22:43:22.002586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.221 [2024-07-15 22:43:22.002599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.221 qpair failed and we were unable to recover it. 00:26:58.221 [2024-07-15 22:43:22.002919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.221 [2024-07-15 22:43:22.002933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.221 qpair failed and we were unable to recover it. 00:26:58.221 [2024-07-15 22:43:22.003154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.221 [2024-07-15 22:43:22.003167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.221 qpair failed and we were unable to recover it. 00:26:58.221 [2024-07-15 22:43:22.003390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.221 [2024-07-15 22:43:22.003404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.221 qpair failed and we were unable to recover it. 00:26:58.221 [2024-07-15 22:43:22.003605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.221 [2024-07-15 22:43:22.003619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.221 qpair failed and we were unable to recover it. 00:26:58.221 [2024-07-15 22:43:22.003831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.221 [2024-07-15 22:43:22.003845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.221 qpair failed and we were unable to recover it. 00:26:58.221 [2024-07-15 22:43:22.004143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.221 [2024-07-15 22:43:22.004157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.221 qpair failed and we were unable to recover it. 00:26:58.221 [2024-07-15 22:43:22.004360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.221 [2024-07-15 22:43:22.004375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.221 qpair failed and we were unable to recover it. 
00:26:58.221 [2024-07-15 22:43:22.004656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.221 [2024-07-15 22:43:22.004671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420
00:26:58.221 qpair failed and we were unable to recover it.
00:26:58.221 [... the same connect()-failed / sock-connection-error / qpair-failed triplet repeats 190 more times for tqpair=0x7f4290000b90, identical except for timestamps, through 2024-07-15 22:43:22.055835 ...]
00:26:58.225 [2024-07-15 22:43:22.056186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.225 [2024-07-15 22:43:22.056211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:58.225 qpair failed and we were unable to recover it.
00:26:58.225 [... the triplet then repeats 18 more times for tqpair=0x7f4288000b90, identical except for timestamps, through 2024-07-15 22:43:22.059989 ...]
00:26:58.225 [2024-07-15 22:43:22.060127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.225 [2024-07-15 22:43:22.060138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.225 qpair failed and we were unable to recover it. 00:26:58.225 [2024-07-15 22:43:22.060275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.225 [2024-07-15 22:43:22.060285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.225 qpair failed and we were unable to recover it. 00:26:58.225 [2024-07-15 22:43:22.060467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.225 [2024-07-15 22:43:22.060477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.225 qpair failed and we were unable to recover it. 00:26:58.225 [2024-07-15 22:43:22.060606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.225 [2024-07-15 22:43:22.060617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.225 qpair failed and we were unable to recover it. 00:26:58.225 [2024-07-15 22:43:22.060856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.225 [2024-07-15 22:43:22.060866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.225 qpair failed and we were unable to recover it. 00:26:58.225 [2024-07-15 22:43:22.061054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.225 [2024-07-15 22:43:22.061064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.225 qpair failed and we were unable to recover it. 00:26:58.225 [2024-07-15 22:43:22.061251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.225 [2024-07-15 22:43:22.061262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.225 qpair failed and we were unable to recover it. 00:26:58.225 [2024-07-15 22:43:22.061440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.225 [2024-07-15 22:43:22.061450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.225 qpair failed and we were unable to recover it. 00:26:58.225 [2024-07-15 22:43:22.061653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.225 [2024-07-15 22:43:22.061663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.225 qpair failed and we were unable to recover it. 00:26:58.225 [2024-07-15 22:43:22.061786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.225 [2024-07-15 22:43:22.061796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.225 qpair failed and we were unable to recover it. 
00:26:58.225 [2024-07-15 22:43:22.062063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.225 [2024-07-15 22:43:22.062073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.225 qpair failed and we were unable to recover it. 00:26:58.225 [2024-07-15 22:43:22.062285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.225 [2024-07-15 22:43:22.062295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.225 qpair failed and we were unable to recover it. 00:26:58.225 [2024-07-15 22:43:22.062528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.225 [2024-07-15 22:43:22.062538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.225 qpair failed and we were unable to recover it. 00:26:58.225 [2024-07-15 22:43:22.062649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.225 [2024-07-15 22:43:22.062659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.225 qpair failed and we were unable to recover it. 00:26:58.226 [2024-07-15 22:43:22.062833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.226 [2024-07-15 22:43:22.062843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.226 qpair failed and we were unable to recover it. 00:26:58.226 [2024-07-15 22:43:22.063028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.226 [2024-07-15 22:43:22.063038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.226 qpair failed and we were unable to recover it. 00:26:58.226 [2024-07-15 22:43:22.063210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.226 [2024-07-15 22:43:22.063220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.226 qpair failed and we were unable to recover it. 00:26:58.226 [2024-07-15 22:43:22.063513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.226 [2024-07-15 22:43:22.063523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.226 qpair failed and we were unable to recover it. 00:26:58.226 [2024-07-15 22:43:22.063724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.226 [2024-07-15 22:43:22.063735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.226 qpair failed and we were unable to recover it. 00:26:58.226 [2024-07-15 22:43:22.063855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.226 [2024-07-15 22:43:22.063865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.226 qpair failed and we were unable to recover it. 
00:26:58.226 [2024-07-15 22:43:22.063968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.226 [2024-07-15 22:43:22.063978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.226 qpair failed and we were unable to recover it. 00:26:58.226 [2024-07-15 22:43:22.064193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.226 [2024-07-15 22:43:22.064203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.226 qpair failed and we were unable to recover it. 00:26:58.226 [2024-07-15 22:43:22.064486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.226 [2024-07-15 22:43:22.064497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.226 qpair failed and we were unable to recover it. 00:26:58.226 [2024-07-15 22:43:22.064736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.226 [2024-07-15 22:43:22.064746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.226 qpair failed and we were unable to recover it. 00:26:58.226 [2024-07-15 22:43:22.064863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.226 [2024-07-15 22:43:22.064872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.226 qpair failed and we were unable to recover it. 00:26:58.226 [2024-07-15 22:43:22.065057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.226 [2024-07-15 22:43:22.065067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.226 qpair failed and we were unable to recover it. 00:26:58.226 [2024-07-15 22:43:22.065239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.226 [2024-07-15 22:43:22.065249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.226 qpair failed and we were unable to recover it. 00:26:58.226 [2024-07-15 22:43:22.065491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.226 [2024-07-15 22:43:22.065501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.226 qpair failed and we were unable to recover it. 00:26:58.226 [2024-07-15 22:43:22.065700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.226 [2024-07-15 22:43:22.065710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.226 qpair failed and we were unable to recover it. 00:26:58.226 [2024-07-15 22:43:22.065891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.226 [2024-07-15 22:43:22.065901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.226 qpair failed and we were unable to recover it. 
00:26:58.226 [2024-07-15 22:43:22.066090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.226 [2024-07-15 22:43:22.066100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.226 qpair failed and we were unable to recover it. 00:26:58.226 [2024-07-15 22:43:22.066284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.226 [2024-07-15 22:43:22.066293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.226 qpair failed and we were unable to recover it. 00:26:58.226 [2024-07-15 22:43:22.066468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.226 [2024-07-15 22:43:22.066478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.226 qpair failed and we were unable to recover it. 00:26:58.226 [2024-07-15 22:43:22.066728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.226 [2024-07-15 22:43:22.066738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.226 qpair failed and we were unable to recover it. 00:26:58.226 [2024-07-15 22:43:22.066931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.226 [2024-07-15 22:43:22.066943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.226 qpair failed and we were unable to recover it. 00:26:58.226 [2024-07-15 22:43:22.067060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.226 [2024-07-15 22:43:22.067069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.226 qpair failed and we were unable to recover it. 00:26:58.226 [2024-07-15 22:43:22.067263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.226 [2024-07-15 22:43:22.067273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.226 qpair failed and we were unable to recover it. 00:26:58.226 [2024-07-15 22:43:22.067558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.226 [2024-07-15 22:43:22.067568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.226 qpair failed and we were unable to recover it. 00:26:58.226 [2024-07-15 22:43:22.067676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.226 [2024-07-15 22:43:22.067685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.226 qpair failed and we were unable to recover it. 00:26:58.226 [2024-07-15 22:43:22.067801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.226 [2024-07-15 22:43:22.067811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.226 qpair failed and we were unable to recover it. 
00:26:58.226 [2024-07-15 22:43:22.068022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.226 [2024-07-15 22:43:22.068032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.226 qpair failed and we were unable to recover it. 00:26:58.226 [2024-07-15 22:43:22.068272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.226 [2024-07-15 22:43:22.068283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.226 qpair failed and we were unable to recover it. 00:26:58.226 [2024-07-15 22:43:22.068456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.226 [2024-07-15 22:43:22.068466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.226 qpair failed and we were unable to recover it. 00:26:58.226 [2024-07-15 22:43:22.068592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.226 [2024-07-15 22:43:22.068601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.226 qpair failed and we were unable to recover it. 00:26:58.226 [2024-07-15 22:43:22.068788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.226 [2024-07-15 22:43:22.068798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.226 qpair failed and we were unable to recover it. 00:26:58.226 [2024-07-15 22:43:22.069042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.226 [2024-07-15 22:43:22.069052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.226 qpair failed and we were unable to recover it. 00:26:58.226 [2024-07-15 22:43:22.069165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.226 [2024-07-15 22:43:22.069175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.226 qpair failed and we were unable to recover it. 00:26:58.226 [2024-07-15 22:43:22.069295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.226 [2024-07-15 22:43:22.069305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.226 qpair failed and we were unable to recover it. 00:26:58.226 [2024-07-15 22:43:22.069500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.226 [2024-07-15 22:43:22.069510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.226 qpair failed and we were unable to recover it. 00:26:58.226 [2024-07-15 22:43:22.069751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.226 [2024-07-15 22:43:22.069761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.226 qpair failed and we were unable to recover it. 
00:26:58.226 [2024-07-15 22:43:22.069937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.226 [2024-07-15 22:43:22.069947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.226 qpair failed and we were unable to recover it. 00:26:58.226 [2024-07-15 22:43:22.070054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.226 [2024-07-15 22:43:22.070064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.226 qpair failed and we were unable to recover it. 00:26:58.226 [2024-07-15 22:43:22.070281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.226 [2024-07-15 22:43:22.070291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.226 qpair failed and we were unable to recover it. 00:26:58.226 [2024-07-15 22:43:22.070415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.226 [2024-07-15 22:43:22.070424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.226 qpair failed and we were unable to recover it. 00:26:58.226 [2024-07-15 22:43:22.070598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.226 [2024-07-15 22:43:22.070608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.226 qpair failed and we were unable to recover it. 00:26:58.226 [2024-07-15 22:43:22.070742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.226 [2024-07-15 22:43:22.070752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.226 qpair failed and we were unable to recover it. 00:26:58.226 [2024-07-15 22:43:22.070927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.226 [2024-07-15 22:43:22.070937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.226 qpair failed and we were unable to recover it. 00:26:58.226 [2024-07-15 22:43:22.071108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.226 [2024-07-15 22:43:22.071118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.226 qpair failed and we were unable to recover it. 00:26:58.226 [2024-07-15 22:43:22.071244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.226 [2024-07-15 22:43:22.071254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.226 qpair failed and we were unable to recover it. 00:26:58.226 [2024-07-15 22:43:22.071516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.226 [2024-07-15 22:43:22.071525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.226 qpair failed and we were unable to recover it. 
00:26:58.226 [2024-07-15 22:43:22.071700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.226 [2024-07-15 22:43:22.071710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.226 qpair failed and we were unable to recover it. 00:26:58.226 [2024-07-15 22:43:22.071902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.226 [2024-07-15 22:43:22.071912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.226 qpair failed and we were unable to recover it. 00:26:58.226 [2024-07-15 22:43:22.072028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.226 [2024-07-15 22:43:22.072039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.226 qpair failed and we were unable to recover it. 00:26:58.226 [2024-07-15 22:43:22.072156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.226 [2024-07-15 22:43:22.072165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.226 qpair failed and we were unable to recover it. 00:26:58.226 [2024-07-15 22:43:22.072290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.226 [2024-07-15 22:43:22.072300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.226 qpair failed and we were unable to recover it. 00:26:58.226 [2024-07-15 22:43:22.072482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.226 [2024-07-15 22:43:22.072492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.227 qpair failed and we were unable to recover it. 00:26:58.227 [2024-07-15 22:43:22.072704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.227 [2024-07-15 22:43:22.072714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.227 qpair failed and we were unable to recover it. 00:26:58.227 [2024-07-15 22:43:22.072911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.227 [2024-07-15 22:43:22.072920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.227 qpair failed and we were unable to recover it. 00:26:58.227 [2024-07-15 22:43:22.073025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.227 [2024-07-15 22:43:22.073035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.227 qpair failed and we were unable to recover it. 00:26:58.227 [2024-07-15 22:43:22.073277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.227 [2024-07-15 22:43:22.073287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.227 qpair failed and we were unable to recover it. 
00:26:58.227 [2024-07-15 22:43:22.073477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.227 [2024-07-15 22:43:22.073486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.227 qpair failed and we were unable to recover it. 00:26:58.227 [2024-07-15 22:43:22.073689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.227 [2024-07-15 22:43:22.073699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.227 qpair failed and we were unable to recover it. 00:26:58.227 [2024-07-15 22:43:22.073840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.227 [2024-07-15 22:43:22.073850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.227 qpair failed and we were unable to recover it. 00:26:58.227 [2024-07-15 22:43:22.074027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.227 [2024-07-15 22:43:22.074037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.227 qpair failed and we were unable to recover it. 00:26:58.227 [2024-07-15 22:43:22.074213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.227 [2024-07-15 22:43:22.074228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.227 qpair failed and we were unable to recover it. 00:26:58.227 [2024-07-15 22:43:22.074430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.227 [2024-07-15 22:43:22.074440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.227 qpair failed and we were unable to recover it. 00:26:58.227 [2024-07-15 22:43:22.074550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.227 [2024-07-15 22:43:22.074560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.227 qpair failed and we were unable to recover it. 00:26:58.227 [2024-07-15 22:43:22.074738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.227 [2024-07-15 22:43:22.074748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.227 qpair failed and we were unable to recover it. 00:26:58.227 [2024-07-15 22:43:22.074928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.227 [2024-07-15 22:43:22.074938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.227 qpair failed and we were unable to recover it. 00:26:58.227 [2024-07-15 22:43:22.075181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.227 [2024-07-15 22:43:22.075191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.227 qpair failed and we were unable to recover it. 
00:26:58.227 [2024-07-15 22:43:22.075313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.227 [2024-07-15 22:43:22.075324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.227 qpair failed and we were unable to recover it. 00:26:58.227 [2024-07-15 22:43:22.075565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.227 [2024-07-15 22:43:22.075575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.227 qpair failed and we were unable to recover it. 00:26:58.227 [2024-07-15 22:43:22.075690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.227 [2024-07-15 22:43:22.075700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.227 qpair failed and we were unable to recover it. 00:26:58.227 [2024-07-15 22:43:22.075874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.227 [2024-07-15 22:43:22.075884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.227 qpair failed and we were unable to recover it. 00:26:58.227 [2024-07-15 22:43:22.076029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.227 [2024-07-15 22:43:22.076038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.227 qpair failed and we were unable to recover it. 00:26:58.227 [2024-07-15 22:43:22.076158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.227 [2024-07-15 22:43:22.076168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.227 qpair failed and we were unable to recover it. 00:26:58.227 [2024-07-15 22:43:22.076419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.227 [2024-07-15 22:43:22.076430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.227 qpair failed and we were unable to recover it. 00:26:58.227 [2024-07-15 22:43:22.076615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.227 [2024-07-15 22:43:22.076625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.227 qpair failed and we were unable to recover it. 00:26:58.227 [2024-07-15 22:43:22.076760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.227 [2024-07-15 22:43:22.076771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.227 qpair failed and we were unable to recover it. 00:26:58.227 [2024-07-15 22:43:22.076962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.227 [2024-07-15 22:43:22.076972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.227 qpair failed and we were unable to recover it. 
00:26:58.227 [2024-07-15 22:43:22.077060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.227 [2024-07-15 22:43:22.077069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.227 qpair failed and we were unable to recover it. 00:26:58.227 [2024-07-15 22:43:22.077281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.227 [2024-07-15 22:43:22.077292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.227 qpair failed and we were unable to recover it. 00:26:58.227 [2024-07-15 22:43:22.077415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.227 [2024-07-15 22:43:22.077425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.227 qpair failed and we were unable to recover it. 00:26:58.227 [2024-07-15 22:43:22.077550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.227 [2024-07-15 22:43:22.077559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.227 qpair failed and we were unable to recover it. 00:26:58.227 [2024-07-15 22:43:22.077820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.227 [2024-07-15 22:43:22.077830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.227 qpair failed and we were unable to recover it. 00:26:58.227 [2024-07-15 22:43:22.077971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.227 [2024-07-15 22:43:22.077981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.227 qpair failed and we were unable to recover it. 00:26:58.227 [2024-07-15 22:43:22.078098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.227 [2024-07-15 22:43:22.078108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.227 qpair failed and we were unable to recover it. 00:26:58.227 [2024-07-15 22:43:22.078233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.227 [2024-07-15 22:43:22.078244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.227 qpair failed and we were unable to recover it. 00:26:58.227 [2024-07-15 22:43:22.078440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.227 [2024-07-15 22:43:22.078450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.227 qpair failed and we were unable to recover it. 00:26:58.227 [2024-07-15 22:43:22.078710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.227 [2024-07-15 22:43:22.078720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.227 qpair failed and we were unable to recover it. 
00:26:58.227 [2024-07-15 22:43:22.078964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.227 [2024-07-15 22:43:22.078974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.227 qpair failed and we were unable to recover it. 00:26:58.227 [2024-07-15 22:43:22.079178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.227 [2024-07-15 22:43:22.079188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.227 qpair failed and we were unable to recover it. 00:26:58.227 [2024-07-15 22:43:22.079395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.227 [2024-07-15 22:43:22.079406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.227 qpair failed and we were unable to recover it. 00:26:58.227 [2024-07-15 22:43:22.079654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.227 [2024-07-15 22:43:22.079664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.227 qpair failed and we were unable to recover it. 00:26:58.227 [2024-07-15 22:43:22.079775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.227 [2024-07-15 22:43:22.079785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.227 qpair failed and we were unable to recover it. 00:26:58.227 [2024-07-15 22:43:22.080001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.227 [2024-07-15 22:43:22.080011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.227 qpair failed and we were unable to recover it. 00:26:58.227 [2024-07-15 22:43:22.080185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.227 [2024-07-15 22:43:22.080195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.227 qpair failed and we were unable to recover it. 00:26:58.227 [2024-07-15 22:43:22.080457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.227 [2024-07-15 22:43:22.080468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.227 qpair failed and we were unable to recover it. 00:26:58.227 [2024-07-15 22:43:22.080671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.227 [2024-07-15 22:43:22.080681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.227 qpair failed and we were unable to recover it. 00:26:58.227 [2024-07-15 22:43:22.080898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.227 [2024-07-15 22:43:22.080907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.227 qpair failed and we were unable to recover it. 
00:26:58.227 [2024-07-15 22:43:22.081098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.227 [2024-07-15 22:43:22.081108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.227 qpair failed and we were unable to recover it. 00:26:58.227 [2024-07-15 22:43:22.081297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.227 [2024-07-15 22:43:22.081307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.227 qpair failed and we were unable to recover it. 00:26:58.227 [2024-07-15 22:43:22.081496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.227 [2024-07-15 22:43:22.081506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.227 qpair failed and we were unable to recover it. 00:26:58.227 [2024-07-15 22:43:22.081694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.227 [2024-07-15 22:43:22.081704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.227 qpair failed and we were unable to recover it. 00:26:58.227 [2024-07-15 22:43:22.081918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.227 [2024-07-15 22:43:22.081930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.227 qpair failed and we were unable to recover it. 00:26:58.227 [2024-07-15 22:43:22.082072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.227 [2024-07-15 22:43:22.082081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.227 qpair failed and we were unable to recover it. 00:26:58.227 [2024-07-15 22:43:22.082282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.227 [2024-07-15 22:43:22.082292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.227 qpair failed and we were unable to recover it. 00:26:58.227 [2024-07-15 22:43:22.082476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.227 [2024-07-15 22:43:22.082486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.227 qpair failed and we were unable to recover it. 00:26:58.227 [2024-07-15 22:43:22.082775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.227 [2024-07-15 22:43:22.082784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.227 qpair failed and we were unable to recover it. 00:26:58.227 [2024-07-15 22:43:22.083041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.227 [2024-07-15 22:43:22.083051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.227 qpair failed and we were unable to recover it. 
00:26:58.227 [2024-07-15 22:43:22.083320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.227 [2024-07-15 22:43:22.083330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.227 qpair failed and we were unable to recover it. 00:26:58.227 [2024-07-15 22:43:22.083567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.227 [2024-07-15 22:43:22.083576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.227 qpair failed and we were unable to recover it. 00:26:58.227 [2024-07-15 22:43:22.083770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.227 [2024-07-15 22:43:22.083780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.227 qpair failed and we were unable to recover it. 00:26:58.228 [2024-07-15 22:43:22.083901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.228 [2024-07-15 22:43:22.083911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.228 qpair failed and we were unable to recover it. 00:26:58.228 [2024-07-15 22:43:22.084113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.228 [2024-07-15 22:43:22.084123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.228 qpair failed and we were unable to recover it. 00:26:58.228 [2024-07-15 22:43:22.084319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.228 [2024-07-15 22:43:22.084329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.228 qpair failed and we were unable to recover it. 00:26:58.228 [2024-07-15 22:43:22.084508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.228 [2024-07-15 22:43:22.084518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.228 qpair failed and we were unable to recover it. 00:26:58.228 [2024-07-15 22:43:22.084700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.228 [2024-07-15 22:43:22.084710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.228 qpair failed and we were unable to recover it. 00:26:58.228 [2024-07-15 22:43:22.084827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.228 [2024-07-15 22:43:22.084838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.228 qpair failed and we were unable to recover it. 00:26:58.228 [2024-07-15 22:43:22.085029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.228 [2024-07-15 22:43:22.085039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.228 qpair failed and we were unable to recover it. 
00:26:58.228 [2024-07-15 22:43:22.085282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.228 [2024-07-15 22:43:22.085293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.228 qpair failed and we were unable to recover it. 00:26:58.228 [2024-07-15 22:43:22.085451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.228 [2024-07-15 22:43:22.085461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.228 qpair failed and we were unable to recover it. 00:26:58.228 [2024-07-15 22:43:22.085668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.228 [2024-07-15 22:43:22.085679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.228 qpair failed and we were unable to recover it. 00:26:58.228 [2024-07-15 22:43:22.085801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.228 [2024-07-15 22:43:22.085810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.228 qpair failed and we were unable to recover it. 00:26:58.228 [2024-07-15 22:43:22.085989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.228 [2024-07-15 22:43:22.085999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.228 qpair failed and we were unable to recover it. 00:26:58.228 [2024-07-15 22:43:22.086264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.228 [2024-07-15 22:43:22.086275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.228 qpair failed and we were unable to recover it. 00:26:58.228 [2024-07-15 22:43:22.086473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.228 [2024-07-15 22:43:22.086482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.228 qpair failed and we were unable to recover it. 00:26:58.228 [2024-07-15 22:43:22.086721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.228 [2024-07-15 22:43:22.086731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.228 qpair failed and we were unable to recover it. 00:26:58.228 [2024-07-15 22:43:22.086862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.228 [2024-07-15 22:43:22.086872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.228 qpair failed and we were unable to recover it. 00:26:58.228 [2024-07-15 22:43:22.087010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.228 [2024-07-15 22:43:22.087020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.228 qpair failed and we were unable to recover it. 
00:26:58.228 [2024-07-15 22:43:22.087194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.228 [2024-07-15 22:43:22.087204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.228 qpair failed and we were unable to recover it. 00:26:58.228 [2024-07-15 22:43:22.087365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.228 [2024-07-15 22:43:22.087400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:58.228 qpair failed and we were unable to recover it. 00:26:58.228 [2024-07-15 22:43:22.087612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.228 [2024-07-15 22:43:22.087644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420 00:26:58.228 qpair failed and we were unable to recover it. 00:26:58.228 [2024-07-15 22:43:22.087873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.228 [2024-07-15 22:43:22.087890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.228 qpair failed and we were unable to recover it. 00:26:58.228 [2024-07-15 22:43:22.088075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.228 [2024-07-15 22:43:22.088089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.228 qpair failed and we were unable to recover it. 00:26:58.228 [2024-07-15 22:43:22.088289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.228 [2024-07-15 22:43:22.088303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.228 qpair failed and we were unable to recover it. 00:26:58.228 [2024-07-15 22:43:22.088501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.228 [2024-07-15 22:43:22.088514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.228 qpair failed and we were unable to recover it. 00:26:58.228 [2024-07-15 22:43:22.088782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.228 [2024-07-15 22:43:22.088796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.228 qpair failed and we were unable to recover it. 00:26:58.228 [2024-07-15 22:43:22.088919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.228 [2024-07-15 22:43:22.088933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.228 qpair failed and we were unable to recover it. 00:26:58.228 [2024-07-15 22:43:22.089118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.228 [2024-07-15 22:43:22.089131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.228 qpair failed and we were unable to recover it. 
00:26:58.231 [2024-07-15 22:43:22.127688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.231 [2024-07-15 22:43:22.127717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.231 qpair failed and we were unable to recover it. 00:26:58.231 [2024-07-15 22:43:22.127904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.231 [2024-07-15 22:43:22.127933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.231 qpair failed and we were unable to recover it. 00:26:58.231 [2024-07-15 22:43:22.128249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.231 [2024-07-15 22:43:22.128281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.231 qpair failed and we were unable to recover it. 00:26:58.231 [2024-07-15 22:43:22.128518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.231 [2024-07-15 22:43:22.128547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.231 qpair failed and we were unable to recover it. 00:26:58.231 [2024-07-15 22:43:22.128757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.231 [2024-07-15 22:43:22.128771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.231 qpair failed and we were unable to recover it. 00:26:58.231 [2024-07-15 22:43:22.128897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.231 [2024-07-15 22:43:22.128911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.231 qpair failed and we were unable to recover it. 00:26:58.231 [2024-07-15 22:43:22.129134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.231 [2024-07-15 22:43:22.129147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.231 qpair failed and we were unable to recover it. 00:26:58.231 [2024-07-15 22:43:22.129346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.231 [2024-07-15 22:43:22.129359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.231 qpair failed and we were unable to recover it. 00:26:58.231 [2024-07-15 22:43:22.129570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.231 [2024-07-15 22:43:22.129583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.231 qpair failed and we were unable to recover it. 00:26:58.231 [2024-07-15 22:43:22.129830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.231 [2024-07-15 22:43:22.129843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.231 qpair failed and we were unable to recover it. 
00:26:58.231 [2024-07-15 22:43:22.130035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.231 [2024-07-15 22:43:22.130048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.231 qpair failed and we were unable to recover it. 00:26:58.231 [2024-07-15 22:43:22.130236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.231 [2024-07-15 22:43:22.130252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.231 qpair failed and we were unable to recover it. 00:26:58.231 [2024-07-15 22:43:22.130462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.231 [2024-07-15 22:43:22.130491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.231 qpair failed and we were unable to recover it. 00:26:58.231 [2024-07-15 22:43:22.130748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.231 [2024-07-15 22:43:22.130776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.231 qpair failed and we were unable to recover it. 00:26:58.231 [2024-07-15 22:43:22.131035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.231 [2024-07-15 22:43:22.131049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.231 qpair failed and we were unable to recover it. 00:26:58.231 [2024-07-15 22:43:22.131241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.231 [2024-07-15 22:43:22.131268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.231 qpair failed and we were unable to recover it. 00:26:58.231 [2024-07-15 22:43:22.131538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.231 [2024-07-15 22:43:22.131551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.231 qpair failed and we were unable to recover it. 00:26:58.231 [2024-07-15 22:43:22.131689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.231 [2024-07-15 22:43:22.131702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.231 qpair failed and we were unable to recover it. 00:26:58.231 [2024-07-15 22:43:22.131950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.231 [2024-07-15 22:43:22.131963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.231 qpair failed and we were unable to recover it. 00:26:58.232 [2024-07-15 22:43:22.132109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.232 [2024-07-15 22:43:22.132122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.232 qpair failed and we were unable to recover it. 
00:26:58.232 [2024-07-15 22:43:22.132326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.232 [2024-07-15 22:43:22.132339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.232 qpair failed and we were unable to recover it. 00:26:58.232 [2024-07-15 22:43:22.132471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.232 [2024-07-15 22:43:22.132484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.232 qpair failed and we were unable to recover it. 00:26:58.232 [2024-07-15 22:43:22.132699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.232 [2024-07-15 22:43:22.132728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.232 qpair failed and we were unable to recover it. 00:26:58.232 [2024-07-15 22:43:22.132978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.232 [2024-07-15 22:43:22.133007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.232 qpair failed and we were unable to recover it. 00:26:58.232 [2024-07-15 22:43:22.133241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.232 [2024-07-15 22:43:22.133272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.232 qpair failed and we were unable to recover it. 00:26:58.232 [2024-07-15 22:43:22.133509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.232 [2024-07-15 22:43:22.133538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.232 qpair failed and we were unable to recover it. 00:26:58.232 [2024-07-15 22:43:22.133763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.232 [2024-07-15 22:43:22.133792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.232 qpair failed and we were unable to recover it. 00:26:58.232 [2024-07-15 22:43:22.134017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.232 [2024-07-15 22:43:22.134030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.232 qpair failed and we were unable to recover it. 00:26:58.232 [2024-07-15 22:43:22.134232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.232 [2024-07-15 22:43:22.134245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.232 qpair failed and we were unable to recover it. 00:26:58.232 [2024-07-15 22:43:22.134428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.232 [2024-07-15 22:43:22.134441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.232 qpair failed and we were unable to recover it. 
00:26:58.232 [2024-07-15 22:43:22.134712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.232 [2024-07-15 22:43:22.134726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.232 qpair failed and we were unable to recover it. 00:26:58.232 [2024-07-15 22:43:22.134992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.232 [2024-07-15 22:43:22.135021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.232 qpair failed and we were unable to recover it. 00:26:58.232 [2024-07-15 22:43:22.135191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.232 [2024-07-15 22:43:22.135220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.232 qpair failed and we were unable to recover it. 00:26:58.232 [2024-07-15 22:43:22.135517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.232 [2024-07-15 22:43:22.135546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.232 qpair failed and we were unable to recover it. 00:26:58.232 [2024-07-15 22:43:22.135799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.232 [2024-07-15 22:43:22.135828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.232 qpair failed and we were unable to recover it. 00:26:58.232 [2024-07-15 22:43:22.135944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.232 [2024-07-15 22:43:22.135973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.232 qpair failed and we were unable to recover it. 00:26:58.232 [2024-07-15 22:43:22.136124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.232 [2024-07-15 22:43:22.136153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.232 qpair failed and we were unable to recover it. 00:26:58.232 [2024-07-15 22:43:22.136324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.232 [2024-07-15 22:43:22.136370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.232 qpair failed and we were unable to recover it. 00:26:58.232 [2024-07-15 22:43:22.136568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.232 [2024-07-15 22:43:22.136597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.232 qpair failed and we were unable to recover it. 00:26:58.232 [2024-07-15 22:43:22.136780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.232 [2024-07-15 22:43:22.136809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.232 qpair failed and we were unable to recover it. 
00:26:58.232 [2024-07-15 22:43:22.136995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.232 [2024-07-15 22:43:22.137025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.232 qpair failed and we were unable to recover it. 00:26:58.232 [2024-07-15 22:43:22.137310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.232 [2024-07-15 22:43:22.137340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.232 qpair failed and we were unable to recover it. 00:26:58.232 [2024-07-15 22:43:22.137622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.232 [2024-07-15 22:43:22.137636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.232 qpair failed and we were unable to recover it. 00:26:58.232 [2024-07-15 22:43:22.137842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.232 [2024-07-15 22:43:22.137855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.232 qpair failed and we were unable to recover it. 00:26:58.232 [2024-07-15 22:43:22.137997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.232 [2024-07-15 22:43:22.138010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.232 qpair failed and we were unable to recover it. 00:26:58.232 [2024-07-15 22:43:22.138099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.232 [2024-07-15 22:43:22.138112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.232 qpair failed and we were unable to recover it. 00:26:58.232 [2024-07-15 22:43:22.138315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.232 [2024-07-15 22:43:22.138328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.232 qpair failed and we were unable to recover it. 00:26:58.232 [2024-07-15 22:43:22.138478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.232 [2024-07-15 22:43:22.138491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.232 qpair failed and we were unable to recover it. 00:26:58.232 [2024-07-15 22:43:22.138742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.232 [2024-07-15 22:43:22.138755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.232 qpair failed and we were unable to recover it. 00:26:58.232 [2024-07-15 22:43:22.138894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.232 [2024-07-15 22:43:22.138907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.232 qpair failed and we were unable to recover it. 
00:26:58.232 [2024-07-15 22:43:22.139158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.232 [2024-07-15 22:43:22.139187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.232 qpair failed and we were unable to recover it. 00:26:58.232 [2024-07-15 22:43:22.139523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.232 [2024-07-15 22:43:22.139558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.232 qpair failed and we were unable to recover it. 00:26:58.232 [2024-07-15 22:43:22.139767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.232 [2024-07-15 22:43:22.139796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.232 qpair failed and we were unable to recover it. 00:26:58.232 [2024-07-15 22:43:22.139960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.232 [2024-07-15 22:43:22.139989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.232 qpair failed and we were unable to recover it. 00:26:58.232 [2024-07-15 22:43:22.140210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.232 [2024-07-15 22:43:22.140259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.232 qpair failed and we were unable to recover it. 00:26:58.232 [2024-07-15 22:43:22.140560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.232 [2024-07-15 22:43:22.140590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.232 qpair failed and we were unable to recover it. 00:26:58.232 [2024-07-15 22:43:22.140874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.232 [2024-07-15 22:43:22.140903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.232 qpair failed and we were unable to recover it. 00:26:58.232 [2024-07-15 22:43:22.141137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.232 [2024-07-15 22:43:22.141166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.232 qpair failed and we were unable to recover it. 00:26:58.232 [2024-07-15 22:43:22.141448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.232 [2024-07-15 22:43:22.141479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.232 qpair failed and we were unable to recover it. 00:26:58.232 [2024-07-15 22:43:22.141601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.232 [2024-07-15 22:43:22.141615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.232 qpair failed and we were unable to recover it. 
00:26:58.232 [2024-07-15 22:43:22.141813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.232 [2024-07-15 22:43:22.141827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.232 qpair failed and we were unable to recover it. 00:26:58.232 [2024-07-15 22:43:22.141957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.232 [2024-07-15 22:43:22.141971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.232 qpair failed and we were unable to recover it. 00:26:58.232 [2024-07-15 22:43:22.142244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.232 [2024-07-15 22:43:22.142258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.232 qpair failed and we were unable to recover it. 00:26:58.232 [2024-07-15 22:43:22.142395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.232 [2024-07-15 22:43:22.142409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.232 qpair failed and we were unable to recover it. 00:26:58.233 [2024-07-15 22:43:22.142542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.233 [2024-07-15 22:43:22.142556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.233 qpair failed and we were unable to recover it. 00:26:58.233 [2024-07-15 22:43:22.142811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.233 [2024-07-15 22:43:22.142840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.233 qpair failed and we were unable to recover it. 00:26:58.233 [2024-07-15 22:43:22.143112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.233 [2024-07-15 22:43:22.143141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.233 qpair failed and we were unable to recover it. 00:26:58.233 [2024-07-15 22:43:22.143395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.233 [2024-07-15 22:43:22.143425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.233 qpair failed and we were unable to recover it. 00:26:58.233 [2024-07-15 22:43:22.143679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.233 [2024-07-15 22:43:22.143692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.233 qpair failed and we were unable to recover it. 00:26:58.233 [2024-07-15 22:43:22.143887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.233 [2024-07-15 22:43:22.143900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.233 qpair failed and we were unable to recover it. 
00:26:58.233 [2024-07-15 22:43:22.144153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.233 [2024-07-15 22:43:22.144181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.233 qpair failed and we were unable to recover it. 00:26:58.233 [2024-07-15 22:43:22.144424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.233 [2024-07-15 22:43:22.144457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.233 qpair failed and we were unable to recover it. 00:26:58.233 [2024-07-15 22:43:22.144696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.233 [2024-07-15 22:43:22.144725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.233 qpair failed and we were unable to recover it. 00:26:58.233 [2024-07-15 22:43:22.144951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.233 [2024-07-15 22:43:22.144980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.233 qpair failed and we were unable to recover it. 00:26:58.233 [2024-07-15 22:43:22.145244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.233 [2024-07-15 22:43:22.145275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.233 qpair failed and we were unable to recover it. 00:26:58.233 [2024-07-15 22:43:22.145505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.233 [2024-07-15 22:43:22.145534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.233 qpair failed and we were unable to recover it. 00:26:58.233 [2024-07-15 22:43:22.145698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.233 [2024-07-15 22:43:22.145727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.233 qpair failed and we were unable to recover it. 00:26:58.233 [2024-07-15 22:43:22.146007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.233 [2024-07-15 22:43:22.146020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.233 qpair failed and we were unable to recover it. 00:26:58.233 [2024-07-15 22:43:22.146247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.233 [2024-07-15 22:43:22.146268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.233 qpair failed and we were unable to recover it. 00:26:58.233 [2024-07-15 22:43:22.146521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.233 [2024-07-15 22:43:22.146534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.233 qpair failed and we were unable to recover it. 
00:26:58.233 [2024-07-15 22:43:22.146733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.233 [2024-07-15 22:43:22.146746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.233 qpair failed and we were unable to recover it. 00:26:58.233 [2024-07-15 22:43:22.147013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.233 [2024-07-15 22:43:22.147026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.233 qpair failed and we were unable to recover it. 00:26:58.233 [2024-07-15 22:43:22.147212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.233 [2024-07-15 22:43:22.147229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.233 qpair failed and we were unable to recover it. 00:26:58.233 [2024-07-15 22:43:22.147393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.233 [2024-07-15 22:43:22.147407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.233 qpair failed and we were unable to recover it. 00:26:58.233 [2024-07-15 22:43:22.147609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.233 [2024-07-15 22:43:22.147622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.233 qpair failed and we were unable to recover it. 00:26:58.233 [2024-07-15 22:43:22.147848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.233 [2024-07-15 22:43:22.147861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.233 qpair failed and we were unable to recover it. 00:26:58.233 [2024-07-15 22:43:22.148081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.233 [2024-07-15 22:43:22.148095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.233 qpair failed and we were unable to recover it. 00:26:58.233 [2024-07-15 22:43:22.148282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.233 [2024-07-15 22:43:22.148297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.233 qpair failed and we were unable to recover it. 00:26:58.233 [2024-07-15 22:43:22.148436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.233 [2024-07-15 22:43:22.148449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.233 qpair failed and we were unable to recover it. 00:26:58.233 [2024-07-15 22:43:22.148597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.233 [2024-07-15 22:43:22.148610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.233 qpair failed and we were unable to recover it. 
00:26:58.233 [2024-07-15 22:43:22.148799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.233 [2024-07-15 22:43:22.148812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.233 qpair failed and we were unable to recover it. 00:26:58.233 [2024-07-15 22:43:22.149025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.233 [2024-07-15 22:43:22.149059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.233 qpair failed and we were unable to recover it. 00:26:58.233 [2024-07-15 22:43:22.149238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.233 [2024-07-15 22:43:22.149269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.233 qpair failed and we were unable to recover it. 00:26:58.233 [2024-07-15 22:43:22.149506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.233 [2024-07-15 22:43:22.149535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.233 qpair failed and we were unable to recover it. 00:26:58.233 [2024-07-15 22:43:22.149692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.233 [2024-07-15 22:43:22.149705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.233 qpair failed and we were unable to recover it. 00:26:58.233 [2024-07-15 22:43:22.149885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.233 [2024-07-15 22:43:22.149898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.233 qpair failed and we were unable to recover it. 00:26:58.233 [2024-07-15 22:43:22.150092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.233 [2024-07-15 22:43:22.150106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.233 qpair failed and we were unable to recover it. 00:26:58.233 [2024-07-15 22:43:22.150358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.233 [2024-07-15 22:43:22.150372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.233 qpair failed and we were unable to recover it. 00:26:58.233 [2024-07-15 22:43:22.150587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.233 [2024-07-15 22:43:22.150601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.233 qpair failed and we were unable to recover it. 00:26:58.233 [2024-07-15 22:43:22.150736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.233 [2024-07-15 22:43:22.150749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.233 qpair failed and we were unable to recover it. 
00:26:58.233 [2024-07-15 22:43:22.151001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.233 [2024-07-15 22:43:22.151030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.233 qpair failed and we were unable to recover it. 00:26:58.233 [2024-07-15 22:43:22.151262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.233 [2024-07-15 22:43:22.151293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.233 qpair failed and we were unable to recover it. 00:26:58.233 [2024-07-15 22:43:22.151509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.233 [2024-07-15 22:43:22.151537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.233 qpair failed and we were unable to recover it. 00:26:58.233 [2024-07-15 22:43:22.151790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.233 [2024-07-15 22:43:22.151803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.233 qpair failed and we were unable to recover it. 00:26:58.233 [2024-07-15 22:43:22.151999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.233 [2024-07-15 22:43:22.152013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.233 qpair failed and we were unable to recover it. 00:26:58.233 [2024-07-15 22:43:22.152290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.233 [2024-07-15 22:43:22.152304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.233 qpair failed and we were unable to recover it. 00:26:58.233 [2024-07-15 22:43:22.152501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.233 [2024-07-15 22:43:22.152515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.233 qpair failed and we were unable to recover it. 00:26:58.233 [2024-07-15 22:43:22.152716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.233 [2024-07-15 22:43:22.152729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.233 qpair failed and we were unable to recover it. 00:26:58.233 [2024-07-15 22:43:22.152876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.233 [2024-07-15 22:43:22.152889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.233 qpair failed and we were unable to recover it. 00:26:58.233 [2024-07-15 22:43:22.153160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.233 [2024-07-15 22:43:22.153189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.233 qpair failed and we were unable to recover it. 
00:26:58.233 [2024-07-15 22:43:22.155505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.233 [2024-07-15 22:43:22.155540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420
00:26:58.233 qpair failed and we were unable to recover it.
00:26:58.233 [2024-07-15 22:43:22.155784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.233 [2024-07-15 22:43:22.155818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:58.233 qpair failed and we were unable to recover it.
00:26:58.233 [2024-07-15 22:43:22.155991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.234 [2024-07-15 22:43:22.156018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:58.234 qpair failed and we were unable to recover it.
00:26:58.234 [2024-07-15 22:43:22.156148 - 22:43:22.165553] the same three-line failure repeats for tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:58.515 [2024-07-15 22:43:22.165856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.515 [2024-07-15 22:43:22.165924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420
00:26:58.515 qpair failed and we were unable to recover it.
00:26:58.516 [2024-07-15 22:43:22.166091 - 22:43:22.171279] the same three-line failure repeats for tqpair=0x19bded0 with addr=10.0.0.2, port=4420
00:26:58.518 [posix.c:1038:posix_sock_create connect() failed, errno = 111 / nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock sock connection error repeated for 121 further attempts on tqpair=0x19bded0 between 22:43:22.166091 and 22:43:22.196294, addr=10.0.0.2, port=4420; each attempt ended with "qpair failed and we were unable to recover it."]
00:26:58.518 [2024-07-15 22:43:22.196582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.518 [2024-07-15 22:43:22.196650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420
00:26:58.518 qpair failed and we were unable to recover it.
00:26:58.518 [2024-07-15 22:43:22.196967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.518 [2024-07-15 22:43:22.196993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:58.519 qpair failed and we were unable to recover it.
00:26:58.520 [the same pair repeated for 68 further attempts on tqpair=0x7f4288000b90 between 22:43:22.197269 and 22:43:22.212667, addr=10.0.0.2, port=4420; each attempt ended with "qpair failed and we were unable to recover it."]
00:26:58.520 [2024-07-15 22:43:22.212840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.520 [2024-07-15 22:43:22.212850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.520 qpair failed and we were unable to recover it. 00:26:58.520 [2024-07-15 22:43:22.213039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.520 [2024-07-15 22:43:22.213049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.520 qpair failed and we were unable to recover it. 00:26:58.520 [2024-07-15 22:43:22.213233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.520 [2024-07-15 22:43:22.213243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.520 qpair failed and we were unable to recover it. 00:26:58.520 [2024-07-15 22:43:22.213530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.520 [2024-07-15 22:43:22.213560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.520 qpair failed and we were unable to recover it. 00:26:58.520 [2024-07-15 22:43:22.213804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.520 [2024-07-15 22:43:22.213834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.520 qpair failed and we were unable to recover it. 00:26:58.520 [2024-07-15 22:43:22.214120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.520 [2024-07-15 22:43:22.214150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.520 qpair failed and we were unable to recover it. 00:26:58.520 [2024-07-15 22:43:22.214451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.520 [2024-07-15 22:43:22.214482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.520 qpair failed and we were unable to recover it. 00:26:58.520 [2024-07-15 22:43:22.214785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.521 [2024-07-15 22:43:22.214814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.521 qpair failed and we were unable to recover it. 00:26:58.521 [2024-07-15 22:43:22.215098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.521 [2024-07-15 22:43:22.215128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.521 qpair failed and we were unable to recover it. 00:26:58.521 [2024-07-15 22:43:22.215401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.521 [2024-07-15 22:43:22.215432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.521 qpair failed and we were unable to recover it. 
00:26:58.521 [2024-07-15 22:43:22.215667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.521 [2024-07-15 22:43:22.215703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.521 qpair failed and we were unable to recover it. 00:26:58.521 [2024-07-15 22:43:22.215947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.521 [2024-07-15 22:43:22.215977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.521 qpair failed and we were unable to recover it. 00:26:58.521 [2024-07-15 22:43:22.216264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.521 [2024-07-15 22:43:22.216294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.521 qpair failed and we were unable to recover it. 00:26:58.521 [2024-07-15 22:43:22.216526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.521 [2024-07-15 22:43:22.216555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.521 qpair failed and we were unable to recover it. 00:26:58.521 [2024-07-15 22:43:22.216856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.521 [2024-07-15 22:43:22.216886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.521 qpair failed and we were unable to recover it. 00:26:58.521 [2024-07-15 22:43:22.217062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.521 [2024-07-15 22:43:22.217092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.521 qpair failed and we were unable to recover it. 00:26:58.521 [2024-07-15 22:43:22.217409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.521 [2024-07-15 22:43:22.217440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.521 qpair failed and we were unable to recover it. 00:26:58.521 [2024-07-15 22:43:22.217677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.521 [2024-07-15 22:43:22.217706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.521 qpair failed and we were unable to recover it. 00:26:58.521 [2024-07-15 22:43:22.217873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.521 [2024-07-15 22:43:22.217902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.521 qpair failed and we were unable to recover it. 00:26:58.521 [2024-07-15 22:43:22.218199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.521 [2024-07-15 22:43:22.218209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.521 qpair failed and we were unable to recover it. 
00:26:58.521 [2024-07-15 22:43:22.218476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.521 [2024-07-15 22:43:22.218486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.521 qpair failed and we were unable to recover it. 00:26:58.521 [2024-07-15 22:43:22.218680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.521 [2024-07-15 22:43:22.218689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.521 qpair failed and we were unable to recover it. 00:26:58.521 [2024-07-15 22:43:22.218952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.521 [2024-07-15 22:43:22.218962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.521 qpair failed and we were unable to recover it. 00:26:58.521 [2024-07-15 22:43:22.219202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.521 [2024-07-15 22:43:22.219212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.521 qpair failed and we were unable to recover it. 00:26:58.521 [2024-07-15 22:43:22.219423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.521 [2024-07-15 22:43:22.219434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.521 qpair failed and we were unable to recover it. 00:26:58.521 [2024-07-15 22:43:22.219630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.521 [2024-07-15 22:43:22.219640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.521 qpair failed and we were unable to recover it. 00:26:58.521 [2024-07-15 22:43:22.219851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.521 [2024-07-15 22:43:22.219861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.521 qpair failed and we were unable to recover it. 00:26:58.521 [2024-07-15 22:43:22.219997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.521 [2024-07-15 22:43:22.220007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.521 qpair failed and we were unable to recover it. 00:26:58.521 [2024-07-15 22:43:22.220126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.521 [2024-07-15 22:43:22.220136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.521 qpair failed and we were unable to recover it. 00:26:58.521 [2024-07-15 22:43:22.220262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.521 [2024-07-15 22:43:22.220272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.521 qpair failed and we were unable to recover it. 
00:26:58.521 [2024-07-15 22:43:22.220538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.521 [2024-07-15 22:43:22.220548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.521 qpair failed and we were unable to recover it. 00:26:58.521 [2024-07-15 22:43:22.220674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.521 [2024-07-15 22:43:22.220684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.521 qpair failed and we were unable to recover it. 00:26:58.521 [2024-07-15 22:43:22.220874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.521 [2024-07-15 22:43:22.220884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.521 qpair failed and we were unable to recover it. 00:26:58.521 [2024-07-15 22:43:22.221064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.521 [2024-07-15 22:43:22.221093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.521 qpair failed and we were unable to recover it. 00:26:58.521 [2024-07-15 22:43:22.221258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.521 [2024-07-15 22:43:22.221289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.521 qpair failed and we were unable to recover it. 00:26:58.521 [2024-07-15 22:43:22.221508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.521 [2024-07-15 22:43:22.221538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.521 qpair failed and we were unable to recover it. 00:26:58.521 [2024-07-15 22:43:22.221777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.521 [2024-07-15 22:43:22.221786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.521 qpair failed and we were unable to recover it. 00:26:58.521 [2024-07-15 22:43:22.221972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.521 [2024-07-15 22:43:22.221982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.521 qpair failed and we were unable to recover it. 00:26:58.521 [2024-07-15 22:43:22.222154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.521 [2024-07-15 22:43:22.222164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.521 qpair failed and we were unable to recover it. 00:26:58.521 [2024-07-15 22:43:22.222353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.521 [2024-07-15 22:43:22.222363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.521 qpair failed and we were unable to recover it. 
00:26:58.521 [2024-07-15 22:43:22.222553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.521 [2024-07-15 22:43:22.222563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.521 qpair failed and we were unable to recover it. 00:26:58.521 [2024-07-15 22:43:22.222764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.521 [2024-07-15 22:43:22.222794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.521 qpair failed and we were unable to recover it. 00:26:58.521 [2024-07-15 22:43:22.222946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.521 [2024-07-15 22:43:22.222975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.521 qpair failed and we were unable to recover it. 00:26:58.521 [2024-07-15 22:43:22.223233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.521 [2024-07-15 22:43:22.223263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.521 qpair failed and we were unable to recover it. 00:26:58.521 [2024-07-15 22:43:22.223490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.521 [2024-07-15 22:43:22.223520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.521 qpair failed and we were unable to recover it. 00:26:58.521 [2024-07-15 22:43:22.223755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.521 [2024-07-15 22:43:22.223785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.521 qpair failed and we were unable to recover it. 00:26:58.521 [2024-07-15 22:43:22.224068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.521 [2024-07-15 22:43:22.224078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.521 qpair failed and we were unable to recover it. 00:26:58.521 [2024-07-15 22:43:22.224365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.521 [2024-07-15 22:43:22.224396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.522 qpair failed and we were unable to recover it. 00:26:58.522 [2024-07-15 22:43:22.224627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.522 [2024-07-15 22:43:22.224657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.522 qpair failed and we were unable to recover it. 00:26:58.522 [2024-07-15 22:43:22.224913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.522 [2024-07-15 22:43:22.224942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.522 qpair failed and we were unable to recover it. 
00:26:58.522 [2024-07-15 22:43:22.225113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.522 [2024-07-15 22:43:22.225148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.522 qpair failed and we were unable to recover it. 00:26:58.522 [2024-07-15 22:43:22.225401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.522 [2024-07-15 22:43:22.225431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.522 qpair failed and we were unable to recover it. 00:26:58.522 [2024-07-15 22:43:22.225737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.522 [2024-07-15 22:43:22.225767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.522 qpair failed and we were unable to recover it. 00:26:58.522 [2024-07-15 22:43:22.226067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.522 [2024-07-15 22:43:22.226096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.522 qpair failed and we were unable to recover it. 00:26:58.522 [2024-07-15 22:43:22.226262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.522 [2024-07-15 22:43:22.226292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.522 qpair failed and we were unable to recover it. 00:26:58.522 [2024-07-15 22:43:22.226603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.522 [2024-07-15 22:43:22.226633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.522 qpair failed and we were unable to recover it. 00:26:58.522 [2024-07-15 22:43:22.226929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.522 [2024-07-15 22:43:22.226959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.522 qpair failed and we were unable to recover it. 00:26:58.522 [2024-07-15 22:43:22.227137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.522 [2024-07-15 22:43:22.227166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.522 qpair failed and we were unable to recover it. 00:26:58.522 [2024-07-15 22:43:22.227387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.522 [2024-07-15 22:43:22.227417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.522 qpair failed and we were unable to recover it. 00:26:58.522 [2024-07-15 22:43:22.227580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.522 [2024-07-15 22:43:22.227610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.522 qpair failed and we were unable to recover it. 
00:26:58.522 [2024-07-15 22:43:22.227915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.522 [2024-07-15 22:43:22.227945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.522 qpair failed and we were unable to recover it. 00:26:58.522 [2024-07-15 22:43:22.228111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.522 [2024-07-15 22:43:22.228140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.522 qpair failed and we were unable to recover it. 00:26:58.522 [2024-07-15 22:43:22.228398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.522 [2024-07-15 22:43:22.228429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.522 qpair failed and we were unable to recover it. 00:26:58.522 [2024-07-15 22:43:22.228608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.522 [2024-07-15 22:43:22.228639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.522 qpair failed and we were unable to recover it. 00:26:58.522 [2024-07-15 22:43:22.228818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.522 [2024-07-15 22:43:22.228847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.522 qpair failed and we were unable to recover it. 00:26:58.522 [2024-07-15 22:43:22.228971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.522 [2024-07-15 22:43:22.229001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.522 qpair failed and we were unable to recover it. 00:26:58.522 [2024-07-15 22:43:22.229234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.522 [2024-07-15 22:43:22.229265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.522 qpair failed and we were unable to recover it. 00:26:58.522 [2024-07-15 22:43:22.229429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.522 [2024-07-15 22:43:22.229459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.522 qpair failed and we were unable to recover it. 00:26:58.522 [2024-07-15 22:43:22.229690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.522 [2024-07-15 22:43:22.229720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.522 qpair failed and we were unable to recover it. 00:26:58.522 [2024-07-15 22:43:22.229939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.522 [2024-07-15 22:43:22.229969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.522 qpair failed and we were unable to recover it. 
00:26:58.522 [2024-07-15 22:43:22.230202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.522 [2024-07-15 22:43:22.230239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.522 qpair failed and we were unable to recover it. 00:26:58.522 [2024-07-15 22:43:22.230549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.522 [2024-07-15 22:43:22.230579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.522 qpair failed and we were unable to recover it. 00:26:58.522 [2024-07-15 22:43:22.230890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.522 [2024-07-15 22:43:22.230919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.522 qpair failed and we were unable to recover it. 00:26:58.522 [2024-07-15 22:43:22.231223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.522 [2024-07-15 22:43:22.231260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.522 qpair failed and we were unable to recover it. 00:26:58.522 [2024-07-15 22:43:22.231440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.522 [2024-07-15 22:43:22.231470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.522 qpair failed and we were unable to recover it. 00:26:58.522 [2024-07-15 22:43:22.231703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.522 [2024-07-15 22:43:22.231733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.522 qpair failed and we were unable to recover it. 00:26:58.522 [2024-07-15 22:43:22.232068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.522 [2024-07-15 22:43:22.232097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.522 qpair failed and we were unable to recover it. 00:26:58.522 [2024-07-15 22:43:22.232323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.522 [2024-07-15 22:43:22.232354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.522 qpair failed and we were unable to recover it. 00:26:58.522 [2024-07-15 22:43:22.232606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.522 [2024-07-15 22:43:22.232636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.522 qpair failed and we were unable to recover it. 00:26:58.522 [2024-07-15 22:43:22.232862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.522 [2024-07-15 22:43:22.232892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.522 qpair failed and we were unable to recover it. 
00:26:58.522 [2024-07-15 22:43:22.233055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.522 [2024-07-15 22:43:22.233085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.522 qpair failed and we were unable to recover it. 00:26:58.522 [2024-07-15 22:43:22.233271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.522 [2024-07-15 22:43:22.233302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.522 qpair failed and we were unable to recover it. 00:26:58.522 [2024-07-15 22:43:22.233587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.522 [2024-07-15 22:43:22.233616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.522 qpair failed and we were unable to recover it. 00:26:58.522 [2024-07-15 22:43:22.233777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.522 [2024-07-15 22:43:22.233807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.522 qpair failed and we were unable to recover it. 00:26:58.522 [2024-07-15 22:43:22.234091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.522 [2024-07-15 22:43:22.234120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.522 qpair failed and we were unable to recover it. 00:26:58.522 [2024-07-15 22:43:22.234409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.522 [2024-07-15 22:43:22.234440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.522 qpair failed and we were unable to recover it. 00:26:58.522 [2024-07-15 22:43:22.234685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.522 [2024-07-15 22:43:22.234715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.522 qpair failed and we were unable to recover it. 00:26:58.522 [2024-07-15 22:43:22.234892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.522 [2024-07-15 22:43:22.234921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.522 qpair failed and we were unable to recover it. 00:26:58.523 [2024-07-15 22:43:22.235140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.523 [2024-07-15 22:43:22.235149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.523 qpair failed and we were unable to recover it. 00:26:58.523 [2024-07-15 22:43:22.235407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.523 [2024-07-15 22:43:22.235417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.523 qpair failed and we were unable to recover it. 
00:26:58.523 [2024-07-15 22:43:22.235598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.523 [2024-07-15 22:43:22.235611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.523 qpair failed and we were unable to recover it. 00:26:58.523 [2024-07-15 22:43:22.235799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.523 [2024-07-15 22:43:22.235809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.523 qpair failed and we were unable to recover it. 00:26:58.523 [2024-07-15 22:43:22.236000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.523 [2024-07-15 22:43:22.236011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.523 qpair failed and we were unable to recover it. 00:26:58.523 [2024-07-15 22:43:22.236148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.523 [2024-07-15 22:43:22.236178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.523 qpair failed and we were unable to recover it. 00:26:58.523 [2024-07-15 22:43:22.236445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.523 [2024-07-15 22:43:22.236476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.523 qpair failed and we were unable to recover it. 00:26:58.523 [2024-07-15 22:43:22.236761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.523 [2024-07-15 22:43:22.236791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.523 qpair failed and we were unable to recover it. 00:26:58.523 [2024-07-15 22:43:22.237037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.523 [2024-07-15 22:43:22.237046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.523 qpair failed and we were unable to recover it. 00:26:58.523 [2024-07-15 22:43:22.237117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.523 [2024-07-15 22:43:22.237126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.523 qpair failed and we were unable to recover it. 00:26:58.523 [2024-07-15 22:43:22.237368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.523 [2024-07-15 22:43:22.237378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.523 qpair failed and we were unable to recover it. 00:26:58.523 [2024-07-15 22:43:22.237554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.523 [2024-07-15 22:43:22.237565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.523 qpair failed and we were unable to recover it. 
00:26:58.523 [2024-07-15 22:43:22.237810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.523 [2024-07-15 22:43:22.237839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.523 qpair failed and we were unable to recover it. 00:26:58.523 [2024-07-15 22:43:22.238010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.523 [2024-07-15 22:43:22.238040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.523 qpair failed and we were unable to recover it. 00:26:58.523 [2024-07-15 22:43:22.238264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.523 [2024-07-15 22:43:22.238294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.523 qpair failed and we were unable to recover it. 00:26:58.523 [2024-07-15 22:43:22.238529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.523 [2024-07-15 22:43:22.238559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.523 qpair failed and we were unable to recover it. 00:26:58.523 [2024-07-15 22:43:22.238884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.523 [2024-07-15 22:43:22.238914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.523 qpair failed and we were unable to recover it. 00:26:58.523 [2024-07-15 22:43:22.239137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.523 [2024-07-15 22:43:22.239147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.523 qpair failed and we were unable to recover it. 00:26:58.523 [2024-07-15 22:43:22.239339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.523 [2024-07-15 22:43:22.239350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.523 qpair failed and we were unable to recover it. 00:26:58.523 [2024-07-15 22:43:22.239461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.523 [2024-07-15 22:43:22.239471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.523 qpair failed and we were unable to recover it. 00:26:58.523 [2024-07-15 22:43:22.239670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.523 [2024-07-15 22:43:22.239680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.523 qpair failed and we were unable to recover it. 00:26:58.523 [2024-07-15 22:43:22.239854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.523 [2024-07-15 22:43:22.239864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.523 qpair failed and we were unable to recover it. 
00:26:58.523 [2024-07-15 22:43:22.240102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.523 [2024-07-15 22:43:22.240112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.523 qpair failed and we were unable to recover it. 00:26:58.523 [2024-07-15 22:43:22.240316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.523 [2024-07-15 22:43:22.240327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.523 qpair failed and we were unable to recover it. 00:26:58.523 [2024-07-15 22:43:22.240523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.523 [2024-07-15 22:43:22.240554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.523 qpair failed and we were unable to recover it. 00:26:58.523 [2024-07-15 22:43:22.240775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.523 [2024-07-15 22:43:22.240805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.523 qpair failed and we were unable to recover it. 00:26:58.523 [2024-07-15 22:43:22.240970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.523 [2024-07-15 22:43:22.241000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.523 qpair failed and we were unable to recover it. 00:26:58.523 [2024-07-15 22:43:22.241228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.523 [2024-07-15 22:43:22.241238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.523 qpair failed and we were unable to recover it. 00:26:58.523 [2024-07-15 22:43:22.241350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.523 [2024-07-15 22:43:22.241360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.523 qpair failed and we were unable to recover it. 00:26:58.523 [2024-07-15 22:43:22.241544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.523 [2024-07-15 22:43:22.241555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.523 qpair failed and we were unable to recover it. 00:26:58.523 [2024-07-15 22:43:22.241745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.523 [2024-07-15 22:43:22.241755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.523 qpair failed and we were unable to recover it. 00:26:58.523 [2024-07-15 22:43:22.241956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.523 [2024-07-15 22:43:22.241966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.523 qpair failed and we were unable to recover it. 
00:26:58.523 [2024-07-15 22:43:22.242095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.523 [2024-07-15 22:43:22.242115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.523 qpair failed and we were unable to recover it. 00:26:58.523 [2024-07-15 22:43:22.242364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.523 [2024-07-15 22:43:22.242395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.523 qpair failed and we were unable to recover it. 00:26:58.523 [2024-07-15 22:43:22.242644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.523 [2024-07-15 22:43:22.242673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.523 qpair failed and we were unable to recover it. 00:26:58.523 [2024-07-15 22:43:22.242893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.523 [2024-07-15 22:43:22.242922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.523 qpair failed and we were unable to recover it. 00:26:58.523 [2024-07-15 22:43:22.243083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.523 [2024-07-15 22:43:22.243112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.523 qpair failed and we were unable to recover it. 00:26:58.523 [2024-07-15 22:43:22.243413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.523 [2024-07-15 22:43:22.243444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.523 qpair failed and we were unable to recover it. 00:26:58.523 [2024-07-15 22:43:22.243661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.523 [2024-07-15 22:43:22.243690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.523 qpair failed and we were unable to recover it. 00:26:58.523 [2024-07-15 22:43:22.243912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.523 [2024-07-15 22:43:22.243942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.523 qpair failed and we were unable to recover it. 00:26:58.523 [2024-07-15 22:43:22.244175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.523 [2024-07-15 22:43:22.244185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.524 qpair failed and we were unable to recover it. 00:26:58.524 [2024-07-15 22:43:22.244316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.524 [2024-07-15 22:43:22.244326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.524 qpair failed and we were unable to recover it. 
00:26:58.524 [2024-07-15 22:43:22.244573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.524 [2024-07-15 22:43:22.244608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:58.524 qpair failed and we were unable to recover it.
00:26:58.527 [... the same three-line failure (connect() errno 111 -> sock connection error on tqpair=0x7f4288000b90, 10.0.0.2:4420 -> "qpair failed and we were unable to recover it.") repeats continuously from 22:43:22.244 through 22:43:22.271 ...]
00:26:58.527 [2024-07-15 22:43:22.271393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.527 [2024-07-15 22:43:22.271428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:58.527 qpair failed and we were unable to recover it.
00:26:58.527 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 160351 Killed "${NVMF_APP[@]}" "$@"
00:26:58.527 22:43:22 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2
00:26:58.527 [... connect() errno 111 retries continue on tqpair=0x7f4288000b90 (10.0.0.2:4420) from 22:43:22.271 through 22:43:22.273 while the killed target is reinitialized ...]
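The errno = 111 flooding this stretch of the log is ECONNREFUSED: the test has just killed the target app (PID 160351, the "Killed" line above), so nothing is listening on 10.0.0.2:4420 and every reconnect the initiator attempts is refused until disconnect_init brings a new target up. A minimal sketch of the same failure mode, assuming bash's /dev/tcp support and a local port with no listener (127.0.0.1 and port 4420 here are illustrative, not the test's netns addresses):

    #!/usr/bin/env bash
    # Connecting where nothing listens fails at once with errno 111
    # (ECONNREFUSED) - the same error posix_sock_create logs above.
    # /dev/tcp is a bash built-in pseudo-device, not a real file.
    if (exec 3<>"/dev/tcp/127.0.0.1/4420") 2>/dev/null; then
        echo "unexpected: something is listening on 4420"
    else
        echo "connect() refused (errno 111): no listener on port 4420"
    fi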
00:26:58.527 22:43:22 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0
00:26:58.527 22:43:22 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:26:58.527 22:43:22 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable
00:26:58.527 22:43:22 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:26:58.527 [... connect() errno 111 retries continue on 10.0.0.2:4420 from 22:43:22.273 through 22:43:22.275 ...]
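nvmfappstart -m 0xF0 restarts the target with a hexadecimal core mask in which each set bit selects one CPU core, so 0xF0 should pin the app to cores 4-7. A quick, self-contained way to decode such a mask (the 8-bit loop bound is just wide enough for this value):

    #!/usr/bin/env bash
    # Decode a core mask: bit N set means core N is selected.
    mask=0xF0
    for ((core = 0; core < 8; core++)); do
        if (( (mask >> core) & 1 )); then
            echo "core $core selected"
        fi
    done
    # 0xF0 = 11110000 binary, so this prints cores 4, 5, 6 and 7.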
00:26:58.527 [... connect() errno 111 / sock connection error retries continue uninterrupted on tqpair=0x7f4288000b90 (10.0.0.2:4420) from 22:43:22.275 through 22:43:22.280, each attempt ending with "qpair failed and we were unable to recover it." ...]
00:26:58.528 22:43:22 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=161198
00:26:58.528 22:43:22 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 161198
00:26:58.528 22:43:22 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
00:26:58.528 22:43:22 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@829 -- # '[' -z 161198 ']'
00:26:58.528 [... connect() errno 111 retries continue on 10.0.0.2:4420 around 22:43:22.280-22:43:22.281 ...]
00:26:58.528 22:43:22 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:26:58.528 22:43:22 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100
00:26:58.528 22:43:22 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:26:58.528 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:26:58.528 22:43:22 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable
00:26:58.528 [... connect() errno 111 retries continue on 10.0.0.2:4420 around 22:43:22.282 ...]
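waitforlisten 161198 polls until the freshly started target creates its RPC socket at /var/tmp/spdk.sock, giving up after max_retries attempts; only then does the test resume issuing commands. A simplified sketch of that pattern, assuming readiness can be approximated by the socket file appearing (the real helper also verifies the PID is still alive and that the RPC endpoint answers):

    #!/usr/bin/env bash
    # Poll for the target's RPC UNIX domain socket, mirroring the
    # rpc_addr/max_retries values traced in the log above.
    rpc_addr=/var/tmp/spdk.sock
    max_retries=100
    for ((i = 0; i < max_retries; i++)); do
        if [ -S "$rpc_addr" ]; then
            echo "target is up and listening on $rpc_addr"
            exit 0
        fi
        sleep 0.1
    done
    echo "timed out waiting for $rpc_addr" >&2
    exit 1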
00:26:58.528 22:43:22 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:58.528 [2024-07-15 22:43:22.283282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.528 [2024-07-15 22:43:22.283297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.528 qpair failed and we were unable to recover it. 00:26:58.528 [2024-07-15 22:43:22.283423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.528 [2024-07-15 22:43:22.283434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.529 qpair failed and we were unable to recover it. 00:26:58.529 [2024-07-15 22:43:22.283566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.529 [2024-07-15 22:43:22.283576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.529 qpair failed and we were unable to recover it. 00:26:58.529 [2024-07-15 22:43:22.283666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.529 [2024-07-15 22:43:22.283675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.529 qpair failed and we were unable to recover it. 00:26:58.529 [2024-07-15 22:43:22.283795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.529 [2024-07-15 22:43:22.283804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.529 qpair failed and we were unable to recover it. 00:26:58.529 [2024-07-15 22:43:22.283934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.529 [2024-07-15 22:43:22.283945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.529 qpair failed and we were unable to recover it. 00:26:58.529 [2024-07-15 22:43:22.284059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.529 [2024-07-15 22:43:22.284068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.529 qpair failed and we were unable to recover it. 00:26:58.529 [2024-07-15 22:43:22.284286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.529 [2024-07-15 22:43:22.284296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.529 qpair failed and we were unable to recover it. 00:26:58.529 [2024-07-15 22:43:22.284440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.529 [2024-07-15 22:43:22.284450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.529 qpair failed and we were unable to recover it. 
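The xtrace lines above show the tc2 phase bringing up a fresh target: nvmf_tgt is launched inside the cvl_0_0_ns_spdk network namespace, its pid is captured as nvmfpid=161198, and waitforlisten polls until the new process answers on the RPC socket /var/tmp/spdk.sock, bounded by max_retries=100. A minimal sketch of that polling pattern, assuming a plain socket-existence check rather than the real helper's RPC probe:

    rpc_addr=/var/tmp/spdk.sock
    max_retries=100
    # Sketch only: wait for the UNIX-domain socket to appear. The actual
    # waitforlisten in autotest_common.sh additionally confirms that the
    # RPC server responds before returning.
    for ((i = 0; i < max_retries; i++)); do
        [ -S "$rpc_addr" ] && break
        sleep 0.1
    done
    [ -S "$rpc_addr" ] || echo "timed out waiting for $rpc_addr" >&2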
00:26:58.529 [... connect() failed, errno = 111 / sock connection error of tqpair=0x7f4288000b90 / qpair failed and we were unable to recover it. -- sequence continues at 22:43:22.284580 through 22:43:22.289324 ...]
00:26:58.529 [2024-07-15 22:43:22.289503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.529 [2024-07-15 22:43:22.289537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420
00:26:58.529 qpair failed and we were unable to recover it.
00:26:58.529 [... the same failure sequence repeats for tqpair=0x7f4290000b90 at 22:43:22.289687 through 22:43:22.292782 ...]
00:26:58.530 [... and again for tqpair=0x7f4288000b90 at 22:43:22.292907 through 22:43:22.315817 ...]
00:26:58.533 [2024-07-15 22:43:22.315925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.533 [2024-07-15 22:43:22.315935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.533 qpair failed and we were unable to recover it. 00:26:58.533 [2024-07-15 22:43:22.316066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.533 [2024-07-15 22:43:22.316077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.533 qpair failed and we were unable to recover it. 00:26:58.533 [2024-07-15 22:43:22.316200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.533 [2024-07-15 22:43:22.316209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.533 qpair failed and we were unable to recover it. 00:26:58.533 [2024-07-15 22:43:22.316384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.533 [2024-07-15 22:43:22.316395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.533 qpair failed and we were unable to recover it. 00:26:58.533 [2024-07-15 22:43:22.316521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.533 [2024-07-15 22:43:22.316530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.533 qpair failed and we were unable to recover it. 00:26:58.533 [2024-07-15 22:43:22.316658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.533 [2024-07-15 22:43:22.316668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.533 qpair failed and we were unable to recover it. 00:26:58.533 [2024-07-15 22:43:22.316854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.533 [2024-07-15 22:43:22.316864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.533 qpair failed and we were unable to recover it. 00:26:58.533 [2024-07-15 22:43:22.317103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.533 [2024-07-15 22:43:22.317113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.533 qpair failed and we were unable to recover it. 00:26:58.533 [2024-07-15 22:43:22.317227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.533 [2024-07-15 22:43:22.317238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.533 qpair failed and we were unable to recover it. 00:26:58.533 [2024-07-15 22:43:22.317415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.534 [2024-07-15 22:43:22.317427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.534 qpair failed and we were unable to recover it. 
00:26:58.534 [2024-07-15 22:43:22.317633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.534 [2024-07-15 22:43:22.317642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.534 qpair failed and we were unable to recover it. 00:26:58.534 [2024-07-15 22:43:22.317767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.534 [2024-07-15 22:43:22.317777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.534 qpair failed and we were unable to recover it. 00:26:58.534 [2024-07-15 22:43:22.317996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.534 [2024-07-15 22:43:22.318006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.534 qpair failed and we were unable to recover it. 00:26:58.534 [2024-07-15 22:43:22.318212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.534 [2024-07-15 22:43:22.318222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.534 qpair failed and we were unable to recover it. 00:26:58.534 [2024-07-15 22:43:22.318348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.534 [2024-07-15 22:43:22.318359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.534 qpair failed and we were unable to recover it. 00:26:58.534 [2024-07-15 22:43:22.318573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.534 [2024-07-15 22:43:22.318583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.534 qpair failed and we were unable to recover it. 00:26:58.534 [2024-07-15 22:43:22.318720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.534 [2024-07-15 22:43:22.318730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.534 qpair failed and we were unable to recover it. 00:26:58.534 [2024-07-15 22:43:22.318841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.534 [2024-07-15 22:43:22.318851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.534 qpair failed and we were unable to recover it. 00:26:58.534 [2024-07-15 22:43:22.318974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.534 [2024-07-15 22:43:22.318984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.534 qpair failed and we were unable to recover it. 00:26:58.534 [2024-07-15 22:43:22.319120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.534 [2024-07-15 22:43:22.319130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.534 qpair failed and we were unable to recover it. 
00:26:58.534 [2024-07-15 22:43:22.319259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.534 [2024-07-15 22:43:22.319270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.534 qpair failed and we were unable to recover it. 00:26:58.534 [2024-07-15 22:43:22.319413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.534 [2024-07-15 22:43:22.319423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.534 qpair failed and we were unable to recover it. 00:26:58.534 [2024-07-15 22:43:22.319525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.534 [2024-07-15 22:43:22.319534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.534 qpair failed and we were unable to recover it. 00:26:58.534 [2024-07-15 22:43:22.319804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.534 [2024-07-15 22:43:22.319813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.534 qpair failed and we were unable to recover it. 00:26:58.534 [2024-07-15 22:43:22.319990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.534 [2024-07-15 22:43:22.320001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.534 qpair failed and we were unable to recover it. 00:26:58.534 [2024-07-15 22:43:22.320121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.534 [2024-07-15 22:43:22.320131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.534 qpair failed and we were unable to recover it. 00:26:58.534 [2024-07-15 22:43:22.320324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.534 [2024-07-15 22:43:22.320334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.534 qpair failed and we were unable to recover it. 00:26:58.534 [2024-07-15 22:43:22.320519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.534 [2024-07-15 22:43:22.320529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.534 qpair failed and we were unable to recover it. 00:26:58.534 [2024-07-15 22:43:22.320808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.534 [2024-07-15 22:43:22.320818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.534 qpair failed and we were unable to recover it. 00:26:58.534 [2024-07-15 22:43:22.320953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.534 [2024-07-15 22:43:22.320963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.534 qpair failed and we were unable to recover it. 
00:26:58.534 [2024-07-15 22:43:22.321138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.534 [2024-07-15 22:43:22.321148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.534 qpair failed and we were unable to recover it. 00:26:58.534 [2024-07-15 22:43:22.321408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.534 [2024-07-15 22:43:22.321418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.534 qpair failed and we were unable to recover it. 00:26:58.534 [2024-07-15 22:43:22.321600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.534 [2024-07-15 22:43:22.321610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.534 qpair failed and we were unable to recover it. 00:26:58.534 [2024-07-15 22:43:22.321784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.534 [2024-07-15 22:43:22.321793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.534 qpair failed and we were unable to recover it. 00:26:58.534 [2024-07-15 22:43:22.321913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.534 [2024-07-15 22:43:22.321922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.534 qpair failed and we were unable to recover it. 00:26:58.534 [2024-07-15 22:43:22.322106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.534 [2024-07-15 22:43:22.322116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.534 qpair failed and we were unable to recover it. 00:26:58.534 [2024-07-15 22:43:22.322243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.534 [2024-07-15 22:43:22.322253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.534 qpair failed and we were unable to recover it. 00:26:58.534 [2024-07-15 22:43:22.322433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.534 [2024-07-15 22:43:22.322443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.534 qpair failed and we were unable to recover it. 00:26:58.534 [2024-07-15 22:43:22.322629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.534 [2024-07-15 22:43:22.322639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.534 qpair failed and we were unable to recover it. 00:26:58.534 [2024-07-15 22:43:22.322713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.534 [2024-07-15 22:43:22.322723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.534 qpair failed and we were unable to recover it. 
00:26:58.534 [2024-07-15 22:43:22.322848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.534 [2024-07-15 22:43:22.322858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.534 qpair failed and we were unable to recover it. 00:26:58.534 [2024-07-15 22:43:22.323110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.534 [2024-07-15 22:43:22.323120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.534 qpair failed and we were unable to recover it. 00:26:58.534 [2024-07-15 22:43:22.323296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.534 [2024-07-15 22:43:22.323306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.534 qpair failed and we were unable to recover it. 00:26:58.534 [2024-07-15 22:43:22.323502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.534 [2024-07-15 22:43:22.323513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.534 qpair failed and we were unable to recover it. 00:26:58.534 [2024-07-15 22:43:22.323761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.534 [2024-07-15 22:43:22.323771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.534 qpair failed and we were unable to recover it. 00:26:58.534 [2024-07-15 22:43:22.324014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.534 [2024-07-15 22:43:22.324023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.534 qpair failed and we were unable to recover it. 00:26:58.534 [2024-07-15 22:43:22.324217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.534 [2024-07-15 22:43:22.324237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.534 qpair failed and we were unable to recover it. 00:26:58.534 [2024-07-15 22:43:22.324380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.534 [2024-07-15 22:43:22.324389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.534 qpair failed and we were unable to recover it. 00:26:58.534 [2024-07-15 22:43:22.324641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.534 [2024-07-15 22:43:22.324651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.534 qpair failed and we were unable to recover it. 00:26:58.534 [2024-07-15 22:43:22.324783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.534 [2024-07-15 22:43:22.324794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.534 qpair failed and we were unable to recover it. 
00:26:58.535 [2024-07-15 22:43:22.324973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.535 [2024-07-15 22:43:22.324983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.535 qpair failed and we were unable to recover it. 00:26:58.535 [2024-07-15 22:43:22.325106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.535 [2024-07-15 22:43:22.325115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.535 qpair failed and we were unable to recover it. 00:26:58.535 [2024-07-15 22:43:22.325304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.535 [2024-07-15 22:43:22.325314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.535 qpair failed and we were unable to recover it. 00:26:58.535 [2024-07-15 22:43:22.325589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.535 [2024-07-15 22:43:22.325598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.535 qpair failed and we were unable to recover it. 00:26:58.535 [2024-07-15 22:43:22.325682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.535 [2024-07-15 22:43:22.325697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.535 qpair failed and we were unable to recover it. 00:26:58.535 [2024-07-15 22:43:22.325814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.535 [2024-07-15 22:43:22.325824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.535 qpair failed and we were unable to recover it. 00:26:58.535 [2024-07-15 22:43:22.325912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.535 [2024-07-15 22:43:22.325922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.535 qpair failed and we were unable to recover it. 00:26:58.535 [2024-07-15 22:43:22.326119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.535 [2024-07-15 22:43:22.326129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.535 qpair failed and we were unable to recover it. 00:26:58.535 [2024-07-15 22:43:22.326274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.535 [2024-07-15 22:43:22.326285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.535 qpair failed and we were unable to recover it. 00:26:58.535 [2024-07-15 22:43:22.326462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.535 [2024-07-15 22:43:22.326473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.535 qpair failed and we were unable to recover it. 
00:26:58.535 [2024-07-15 22:43:22.326559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.535 [2024-07-15 22:43:22.326568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.535 qpair failed and we were unable to recover it. 00:26:58.535 [2024-07-15 22:43:22.326708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.535 [2024-07-15 22:43:22.326718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.535 qpair failed and we were unable to recover it. 00:26:58.535 [2024-07-15 22:43:22.326893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.535 [2024-07-15 22:43:22.326903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.535 qpair failed and we were unable to recover it. 00:26:58.535 [2024-07-15 22:43:22.327104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.535 [2024-07-15 22:43:22.327114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.535 qpair failed and we were unable to recover it. 00:26:58.535 [2024-07-15 22:43:22.327286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.535 [2024-07-15 22:43:22.327297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.535 qpair failed and we were unable to recover it. 00:26:58.535 [2024-07-15 22:43:22.327458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.535 [2024-07-15 22:43:22.327468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.535 qpair failed and we were unable to recover it. 00:26:58.535 [2024-07-15 22:43:22.327605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.535 [2024-07-15 22:43:22.327615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.535 qpair failed and we were unable to recover it. 00:26:58.535 [2024-07-15 22:43:22.327737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.535 [2024-07-15 22:43:22.327747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.535 qpair failed and we were unable to recover it. 00:26:58.535 [2024-07-15 22:43:22.327939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.535 [2024-07-15 22:43:22.327949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.535 qpair failed and we were unable to recover it. 00:26:58.535 [2024-07-15 22:43:22.328138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.535 [2024-07-15 22:43:22.328148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.535 qpair failed and we were unable to recover it. 
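The three-record pattern above — posix_sock_create() reporting a connect() failure with errno = 111, nvme_tcp_qpair_connect_sock() logging the socket error for the qpair, and the qpair being declared unrecoverable — repeats essentially unchanged from 22:43:22.309 through 22:43:22.347 as the host keeps retrying 10.0.0.2:4420; only the microsecond timestamps and, occasionally, the tqpair address (a different qpair context being allocated for each retry) vary. On Linux, errno 111 is ECONNREFUSED: the target address is reachable, but nothing is accepting on the NVMe/TCP port, so each TCP handshake is rejected. The following is a minimal standalone sketch — an assumed illustration, not SPDK source; only the address and port are taken from the log — that produces the same errno when no listener is up:

    /* Sketch: reproduce errno 111 (ECONNREFUSED) as seen by
     * posix_sock_create() when no NVMe/TCP target is listening. */
    #include <stdio.h>
    #include <string.h>
    #include <errno.h>
    #include <unistd.h>
    #include <netinet/in.h>
    #include <arpa/inet.h>
    #include <sys/socket.h>

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0) {
            perror("socket");
            return 1;
        }

        struct sockaddr_in addr = {0};
        addr.sin_family = AF_INET;
        addr.sin_port = htons(4420);                 /* NVMe/TCP port from the log */
        inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
            /* With the host reachable but the port closed, the SYN is
             * answered with a RST and errno is 111 (ECONNREFUSED). */
            printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
        }

        close(fd);
        return 0;
    }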
00:26:58.535 [2024-07-15 22:43:22.328240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.535 [2024-07-15 22:43:22.328251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:58.535 qpair failed and we were unable to recover it.
00:26:58.535 [2024-07-15 22:43:22.328308] Starting SPDK v24.09-pre git sha1 f8598a71f / DPDK 24.03.0 initialization...
00:26:58.535 [2024-07-15 22:43:22.328355] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:26:58.535 [2024-07-15 22:43:22.328372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.535 [2024-07-15 22:43:22.328385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:58.535 qpair failed and we were unable to recover it.
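Interleaved with the connection retries, the "Starting SPDK ... / DPDK 24.03.0 initialization" record shows an nvmf process bringing up DPDK's Environment Abstraction Layer with a 0xF0 core mask (cores 4-7) and a spdk0 file prefix so it can coexist with other SPDK processes on the node. As a hedged sketch of how argv-style EAL flags like these are consumed — the flag list is copied (abridged) from the log line above, and this is not the actual SPDK bootstrap code; rte_eal_init() also needs DPDK and hugepages configured to get past init at runtime:

    /* Sketch: hand EAL parameters to DPDK the argv way. */
    #include <stdio.h>
    #include <rte_eal.h>

    int main(void)
    {
        char *eal_argv[] = {
            "nvmf", "-c", "0xF0", "--no-telemetry",
            "--log-level=lib.eal:6",          /* remaining --log-level flags from the log omitted */
            "--base-virtaddr=0x200000000000",
            "--match-allocations",
            "--file-prefix=spdk0",
            "--proc-type=auto",
        };
        int eal_argc = sizeof(eal_argv) / sizeof(eal_argv[0]);

        /* rte_eal_init() parses the EAL flags and returns the number of
         * consumed arguments, or a negative value on failure. */
        int consumed = rte_eal_init(eal_argc, eal_argv);
        if (consumed < 0) {
            fprintf(stderr, "EAL init failed\n");
            return 1;
        }
        printf("EAL up, %d args consumed\n", consumed);
        return 0;
    }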
00:26:58.535 [2024-07-15 22:43:22.331633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.535 [2024-07-15 22:43:22.331668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:58.535 qpair failed and we were unable to recover it.
00:26:58.536 [2024-07-15 22:43:22.336115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.536 [2024-07-15 22:43:22.336129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:58.536 qpair failed and we were unable to recover it.
00:26:58.536 [2024-07-15 22:43:22.339260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.536 [2024-07-15 22:43:22.339279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420
00:26:58.536 qpair failed and we were unable to recover it.
00:26:58.537 [2024-07-15 22:43:22.340863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.537 [2024-07-15 22:43:22.340876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:58.537 qpair failed and we were unable to recover it.
00:26:58.539 [same sequence repeated 90 more times for tqpair=0x7f4288000b90, 22:43:22.341624 - 22:43:22.357522]
00:26:58.539 [same sequence repeated 2 more times for tqpair=0x7f4288000b90, 22:43:22.357652 - 22:43:22.357921]
00:26:58.539 EAL: No free 2048 kB hugepages reported on node 1
00:26:58.539 [same sequence repeated 8 more times for tqpair=0x7f4288000b90, 22:43:22.358194 - 22:43:22.359517]
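The interleaved EAL message comes from DPDK's environment abstraction layer: no free 2048 kB hugepages were available on NUMA node 1. A small diagnostic sketch, assuming the standard Linux sysfs layout for per-node hugepage counters, to read that counter directly:

    /* Sketch: print the free 2048 kB hugepage count on NUMA node 1, the
     * resource the EAL message above reports as exhausted. */
    #include <stdio.h>

    int main(void)
    {
        const char *path =
            "/sys/devices/system/node/node1/hugepages/hugepages-2048kB/free_hugepages";
        FILE *f = fopen(path, "r");
        if (!f) {
            perror("fopen");
            return 1;
        }
        unsigned long free_pages = 0;
        if (fscanf(f, "%lu", &free_pages) == 1)
            printf("node1 free 2048kB hugepages: %lu\n", free_pages);
        fclose(f);
        return 0;
    }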
00:26:58.540 [same sequence repeated 49 more times for tqpair=0x7f4288000b90, 22:43:22.359807 - 22:43:22.367690]
00:26:58.541 [2024-07-15 22:43:22.367884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.541 [2024-07-15 22:43:22.367909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420
00:26:58.541 qpair failed and we were unable to recover it.
00:26:58.541 [same sequence repeated 20 more times for tqpair=0x19bded0, 22:43:22.368038 - 22:43:22.371864]
00:26:58.542 [same sequence repeated 30 more times, alternating between tqpair=0x7f4288000b90 and tqpair=0x19bded0, 22:43:22.371955 - 22:43:22.376878]
00:26:58.542 [2024-07-15 22:43:22.377003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.542 [2024-07-15 22:43:22.377013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.542 qpair failed and we were unable to recover it. 00:26:58.542 [2024-07-15 22:43:22.377133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.542 [2024-07-15 22:43:22.377144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.542 qpair failed and we were unable to recover it. 00:26:58.542 [2024-07-15 22:43:22.377258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.542 [2024-07-15 22:43:22.377268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.542 qpair failed and we were unable to recover it. 00:26:58.542 [2024-07-15 22:43:22.377390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.542 [2024-07-15 22:43:22.377400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.542 qpair failed and we were unable to recover it. 00:26:58.542 [2024-07-15 22:43:22.377574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.542 [2024-07-15 22:43:22.377584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.542 qpair failed and we were unable to recover it. 00:26:58.542 [2024-07-15 22:43:22.377773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.542 [2024-07-15 22:43:22.377783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.542 qpair failed and we were unable to recover it. 00:26:58.542 [2024-07-15 22:43:22.377905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.542 [2024-07-15 22:43:22.377915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.542 qpair failed and we were unable to recover it. 00:26:58.542 [2024-07-15 22:43:22.378037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.542 [2024-07-15 22:43:22.378048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.542 qpair failed and we were unable to recover it. 00:26:58.542 [2024-07-15 22:43:22.378173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.542 [2024-07-15 22:43:22.378183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.542 qpair failed and we were unable to recover it. 00:26:58.542 [2024-07-15 22:43:22.378300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.542 [2024-07-15 22:43:22.378310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.542 qpair failed and we were unable to recover it. 
00:26:58.542 [2024-07-15 22:43:22.378425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.542 [2024-07-15 22:43:22.378435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.542 qpair failed and we were unable to recover it. 00:26:58.542 [2024-07-15 22:43:22.378530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.542 [2024-07-15 22:43:22.378540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.542 qpair failed and we were unable to recover it. 00:26:58.542 [2024-07-15 22:43:22.378658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.542 [2024-07-15 22:43:22.378670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.542 qpair failed and we were unable to recover it. 00:26:58.542 [2024-07-15 22:43:22.378853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.542 [2024-07-15 22:43:22.378864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.542 qpair failed and we were unable to recover it. 00:26:58.542 [2024-07-15 22:43:22.378997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.542 [2024-07-15 22:43:22.379007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.542 qpair failed and we were unable to recover it. 00:26:58.542 [2024-07-15 22:43:22.379144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.542 [2024-07-15 22:43:22.379155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.542 qpair failed and we were unable to recover it. 00:26:58.542 [2024-07-15 22:43:22.379311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.542 [2024-07-15 22:43:22.379324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.542 qpair failed and we were unable to recover it. 00:26:58.542 [2024-07-15 22:43:22.379454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.542 [2024-07-15 22:43:22.379463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.542 qpair failed and we were unable to recover it. 00:26:58.542 [2024-07-15 22:43:22.379578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.542 [2024-07-15 22:43:22.379588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.542 qpair failed and we were unable to recover it. 00:26:58.542 [2024-07-15 22:43:22.379674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.542 [2024-07-15 22:43:22.379685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.542 qpair failed and we were unable to recover it. 
00:26:58.542 [2024-07-15 22:43:22.379825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.542 [2024-07-15 22:43:22.379835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.542 qpair failed and we were unable to recover it. 00:26:58.542 [2024-07-15 22:43:22.379957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.542 [2024-07-15 22:43:22.379967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.542 qpair failed and we were unable to recover it. 00:26:58.542 [2024-07-15 22:43:22.380149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.542 [2024-07-15 22:43:22.380159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.542 qpair failed and we were unable to recover it. 00:26:58.542 [2024-07-15 22:43:22.380351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.543 [2024-07-15 22:43:22.380362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.543 qpair failed and we were unable to recover it. 00:26:58.543 [2024-07-15 22:43:22.380490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.543 [2024-07-15 22:43:22.380499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.543 qpair failed and we were unable to recover it. 00:26:58.543 [2024-07-15 22:43:22.380671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.543 [2024-07-15 22:43:22.380682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.543 qpair failed and we were unable to recover it. 00:26:58.543 [2024-07-15 22:43:22.380806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.543 [2024-07-15 22:43:22.380816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.543 qpair failed and we were unable to recover it. 00:26:58.543 [2024-07-15 22:43:22.380927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.543 [2024-07-15 22:43:22.380937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.543 qpair failed and we were unable to recover it. 00:26:58.543 [2024-07-15 22:43:22.381049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.543 [2024-07-15 22:43:22.381059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.543 qpair failed and we were unable to recover it. 00:26:58.543 [2024-07-15 22:43:22.381240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.543 [2024-07-15 22:43:22.381250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.543 qpair failed and we were unable to recover it. 
00:26:58.543 [2024-07-15 22:43:22.381446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.543 [2024-07-15 22:43:22.381457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.543 qpair failed and we were unable to recover it. 00:26:58.543 [2024-07-15 22:43:22.381592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.543 [2024-07-15 22:43:22.381602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.543 qpair failed and we were unable to recover it. 00:26:58.543 [2024-07-15 22:43:22.381726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.543 [2024-07-15 22:43:22.381736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.543 qpair failed and we were unable to recover it. 00:26:58.543 [2024-07-15 22:43:22.381871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.543 [2024-07-15 22:43:22.381880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.543 qpair failed and we were unable to recover it. 00:26:58.543 [2024-07-15 22:43:22.381992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.543 [2024-07-15 22:43:22.382002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.543 qpair failed and we were unable to recover it. 00:26:58.543 [2024-07-15 22:43:22.382241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.543 [2024-07-15 22:43:22.382251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.543 qpair failed and we were unable to recover it. 00:26:58.543 [2024-07-15 22:43:22.382381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.543 [2024-07-15 22:43:22.382391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.543 qpair failed and we were unable to recover it. 00:26:58.543 [2024-07-15 22:43:22.382515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.543 [2024-07-15 22:43:22.382525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.543 qpair failed and we were unable to recover it. 00:26:58.543 [2024-07-15 22:43:22.382685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.543 [2024-07-15 22:43:22.382695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.543 qpair failed and we were unable to recover it. 00:26:58.543 [2024-07-15 22:43:22.382823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.543 [2024-07-15 22:43:22.382832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.543 qpair failed and we were unable to recover it. 
00:26:58.543 [2024-07-15 22:43:22.382951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.543 [2024-07-15 22:43:22.382960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.543 qpair failed and we were unable to recover it. 00:26:58.543 [2024-07-15 22:43:22.383152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.543 [2024-07-15 22:43:22.383162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.543 qpair failed and we were unable to recover it. 00:26:58.543 [2024-07-15 22:43:22.383288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.543 [2024-07-15 22:43:22.383298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.543 qpair failed and we were unable to recover it. 00:26:58.543 [2024-07-15 22:43:22.383424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.543 [2024-07-15 22:43:22.383435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.543 qpair failed and we were unable to recover it. 00:26:58.543 [2024-07-15 22:43:22.383523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.543 [2024-07-15 22:43:22.383533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.543 qpair failed and we were unable to recover it. 00:26:58.543 [2024-07-15 22:43:22.383648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.543 [2024-07-15 22:43:22.383658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.543 qpair failed and we were unable to recover it. 00:26:58.543 [2024-07-15 22:43:22.383795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.543 [2024-07-15 22:43:22.383805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.543 qpair failed and we were unable to recover it. 00:26:58.543 [2024-07-15 22:43:22.383929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.543 [2024-07-15 22:43:22.383939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.543 qpair failed and we were unable to recover it. 00:26:58.543 [2024-07-15 22:43:22.384005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.543 [2024-07-15 22:43:22.384014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.543 qpair failed and we were unable to recover it. 00:26:58.543 [2024-07-15 22:43:22.384192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.543 [2024-07-15 22:43:22.384202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.543 qpair failed and we were unable to recover it. 
00:26:58.543 [2024-07-15 22:43:22.384294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.543 [2024-07-15 22:43:22.384304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.543 qpair failed and we were unable to recover it. 00:26:58.543 [2024-07-15 22:43:22.384430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.543 [2024-07-15 22:43:22.384440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.543 qpair failed and we were unable to recover it. 00:26:58.543 [2024-07-15 22:43:22.384563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.543 [2024-07-15 22:43:22.384573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.543 qpair failed and we were unable to recover it. 00:26:58.543 [2024-07-15 22:43:22.384688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.543 [2024-07-15 22:43:22.384698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.543 qpair failed and we were unable to recover it. 00:26:58.544 [2024-07-15 22:43:22.384879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.544 [2024-07-15 22:43:22.384889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.544 qpair failed and we were unable to recover it. 00:26:58.544 [2024-07-15 22:43:22.385014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.544 [2024-07-15 22:43:22.385024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.544 qpair failed and we were unable to recover it. 00:26:58.544 [2024-07-15 22:43:22.385201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.544 [2024-07-15 22:43:22.385212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.544 qpair failed and we were unable to recover it. 00:26:58.544 [2024-07-15 22:43:22.385340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.544 [2024-07-15 22:43:22.385350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.544 qpair failed and we were unable to recover it. 00:26:58.544 [2024-07-15 22:43:22.385540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.544 [2024-07-15 22:43:22.385550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.544 qpair failed and we were unable to recover it. 00:26:58.544 [2024-07-15 22:43:22.385660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.544 [2024-07-15 22:43:22.385670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.544 qpair failed and we were unable to recover it. 
00:26:58.544 [2024-07-15 22:43:22.385842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.544 [2024-07-15 22:43:22.385852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.544 qpair failed and we were unable to recover it. 00:26:58.544 [2024-07-15 22:43:22.385973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.544 [2024-07-15 22:43:22.385983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.544 qpair failed and we were unable to recover it. 00:26:58.544 [2024-07-15 22:43:22.386160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.544 [2024-07-15 22:43:22.386170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.544 qpair failed and we were unable to recover it. 00:26:58.544 [2024-07-15 22:43:22.386349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.544 [2024-07-15 22:43:22.386359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.544 qpair failed and we were unable to recover it. 00:26:58.544 [2024-07-15 22:43:22.386561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.544 [2024-07-15 22:43:22.386571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.544 qpair failed and we were unable to recover it. 00:26:58.544 [2024-07-15 22:43:22.386746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.544 [2024-07-15 22:43:22.386756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.544 qpair failed and we were unable to recover it. 00:26:58.544 [2024-07-15 22:43:22.386835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.544 [2024-07-15 22:43:22.386845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.544 qpair failed and we were unable to recover it. 00:26:58.544 [2024-07-15 22:43:22.387039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.544 [2024-07-15 22:43:22.387049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.544 qpair failed and we were unable to recover it. 00:26:58.544 [2024-07-15 22:43:22.387198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.544 [2024-07-15 22:43:22.387217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.544 qpair failed and we were unable to recover it. 00:26:58.544 [2024-07-15 22:43:22.387414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.544 [2024-07-15 22:43:22.387424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.544 qpair failed and we were unable to recover it. 
00:26:58.544 [2024-07-15 22:43:22.387518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.544 [2024-07-15 22:43:22.387528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.544 qpair failed and we were unable to recover it. 00:26:58.544 [2024-07-15 22:43:22.387601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.544 [2024-07-15 22:43:22.387610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.544 qpair failed and we were unable to recover it. 00:26:58.544 [2024-07-15 22:43:22.387876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.544 [2024-07-15 22:43:22.387886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.544 qpair failed and we were unable to recover it. 00:26:58.544 [2024-07-15 22:43:22.387993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.544 [2024-07-15 22:43:22.388003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.544 qpair failed and we were unable to recover it. 00:26:58.544 [2024-07-15 22:43:22.388176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.544 [2024-07-15 22:43:22.388186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.544 qpair failed and we were unable to recover it. 00:26:58.544 [2024-07-15 22:43:22.388365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.544 [2024-07-15 22:43:22.388375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.544 qpair failed and we were unable to recover it. 00:26:58.544 [2024-07-15 22:43:22.388496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.544 [2024-07-15 22:43:22.388506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.544 qpair failed and we were unable to recover it. 00:26:58.544 [2024-07-15 22:43:22.388686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.544 [2024-07-15 22:43:22.388696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.544 qpair failed and we were unable to recover it. 00:26:58.544 [2024-07-15 22:43:22.388829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.544 [2024-07-15 22:43:22.388838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.544 qpair failed and we were unable to recover it. 00:26:58.544 [2024-07-15 22:43:22.389019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.544 [2024-07-15 22:43:22.389029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.544 qpair failed and we were unable to recover it. 
00:26:58.544 [2024-07-15 22:43:22.389240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.545 [2024-07-15 22:43:22.389250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.545 qpair failed and we were unable to recover it. 00:26:58.545 [2024-07-15 22:43:22.389367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.545 [2024-07-15 22:43:22.389377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.545 qpair failed and we were unable to recover it. 00:26:58.545 [2024-07-15 22:43:22.389599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.545 [2024-07-15 22:43:22.389610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.545 qpair failed and we were unable to recover it. 00:26:58.545 [2024-07-15 22:43:22.389800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.545 [2024-07-15 22:43:22.389810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.545 qpair failed and we were unable to recover it. 00:26:58.545 [2024-07-15 22:43:22.389985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.545 [2024-07-15 22:43:22.389994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.545 qpair failed and we were unable to recover it. 00:26:58.545 [2024-07-15 22:43:22.390240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.545 [2024-07-15 22:43:22.390250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.545 qpair failed and we were unable to recover it. 00:26:58.545 [2024-07-15 22:43:22.390374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.545 [2024-07-15 22:43:22.390384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.545 qpair failed and we were unable to recover it. 00:26:58.545 [2024-07-15 22:43:22.390558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.545 [2024-07-15 22:43:22.390568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.545 qpair failed and we were unable to recover it. 00:26:58.545 [2024-07-15 22:43:22.390784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.545 [2024-07-15 22:43:22.390795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.545 qpair failed and we were unable to recover it. 00:26:58.545 [2024-07-15 22:43:22.390997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.545 [2024-07-15 22:43:22.391007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.545 qpair failed and we were unable to recover it. 
00:26:58.545 [2024-07-15 22:43:22.391280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.545 [2024-07-15 22:43:22.391290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.545 qpair failed and we were unable to recover it. 00:26:58.545 [2024-07-15 22:43:22.391398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.545 [2024-07-15 22:43:22.391415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.545 qpair failed and we were unable to recover it. 00:26:58.545 [2024-07-15 22:43:22.391590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.545 [2024-07-15 22:43:22.391602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.545 qpair failed and we were unable to recover it. 00:26:58.545 [2024-07-15 22:43:22.391807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.545 [2024-07-15 22:43:22.391817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.545 qpair failed and we were unable to recover it. 00:26:58.545 [2024-07-15 22:43:22.391944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.545 [2024-07-15 22:43:22.391954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.545 qpair failed and we were unable to recover it. 00:26:58.545 [2024-07-15 22:43:22.392138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.545 [2024-07-15 22:43:22.392147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.545 qpair failed and we were unable to recover it. 00:26:58.545 [2024-07-15 22:43:22.392265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.545 [2024-07-15 22:43:22.392278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.545 qpair failed and we were unable to recover it. 00:26:58.545 [2024-07-15 22:43:22.392402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.545 [2024-07-15 22:43:22.392413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.545 qpair failed and we were unable to recover it. 00:26:58.545 [2024-07-15 22:43:22.392600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.545 [2024-07-15 22:43:22.392610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.545 qpair failed and we were unable to recover it. 00:26:58.545 [2024-07-15 22:43:22.392752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.545 [2024-07-15 22:43:22.392762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.545 qpair failed and we were unable to recover it. 
00:26:58.545 [2024-07-15 22:43:22.392875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.545 [2024-07-15 22:43:22.392884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.545 qpair failed and we were unable to recover it. 00:26:58.545 [2024-07-15 22:43:22.392969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.545 [2024-07-15 22:43:22.392979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.545 qpair failed and we were unable to recover it. 00:26:58.545 [2024-07-15 22:43:22.393101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.545 [2024-07-15 22:43:22.393110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.545 qpair failed and we were unable to recover it. 00:26:58.545 [2024-07-15 22:43:22.393238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.545 [2024-07-15 22:43:22.393248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.545 qpair failed and we were unable to recover it. 00:26:58.545 [2024-07-15 22:43:22.393380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.545 [2024-07-15 22:43:22.393390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.545 qpair failed and we were unable to recover it. 00:26:58.545 [2024-07-15 22:43:22.393574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.545 [2024-07-15 22:43:22.393585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.545 qpair failed and we were unable to recover it. 00:26:58.545 [2024-07-15 22:43:22.393775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.545 [2024-07-15 22:43:22.393786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.546 qpair failed and we were unable to recover it. 00:26:58.546 [2024-07-15 22:43:22.393897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.546 [2024-07-15 22:43:22.393906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.546 qpair failed and we were unable to recover it. 00:26:58.546 [2024-07-15 22:43:22.394035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.546 [2024-07-15 22:43:22.394046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.546 qpair failed and we were unable to recover it. 00:26:58.546 [2024-07-15 22:43:22.394159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.546 [2024-07-15 22:43:22.394169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.546 qpair failed and we were unable to recover it. 
00:26:58.546 [2024-07-15 22:43:22.394284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.546 [2024-07-15 22:43:22.394295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.546 qpair failed and we were unable to recover it. 00:26:58.546 [2024-07-15 22:43:22.394422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.546 [2024-07-15 22:43:22.394433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.546 qpair failed and we were unable to recover it. 00:26:58.546 [2024-07-15 22:43:22.394681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.546 [2024-07-15 22:43:22.394692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.546 qpair failed and we were unable to recover it. 00:26:58.546 [2024-07-15 22:43:22.394773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.546 [2024-07-15 22:43:22.394783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.546 qpair failed and we were unable to recover it. 00:26:58.546 [2024-07-15 22:43:22.394908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.546 [2024-07-15 22:43:22.394919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.546 qpair failed and we were unable to recover it. 00:26:58.546 [2024-07-15 22:43:22.395029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.546 [2024-07-15 22:43:22.395039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.546 qpair failed and we were unable to recover it. 00:26:58.546 [2024-07-15 22:43:22.395217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.546 [2024-07-15 22:43:22.395231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.546 qpair failed and we were unable to recover it. 00:26:58.546 [2024-07-15 22:43:22.395478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.546 [2024-07-15 22:43:22.395488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.546 qpair failed and we were unable to recover it. 00:26:58.546 [2024-07-15 22:43:22.395566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.546 [2024-07-15 22:43:22.395576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.546 qpair failed and we were unable to recover it. 00:26:58.546 [2024-07-15 22:43:22.395760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.546 [2024-07-15 22:43:22.395770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.546 qpair failed and we were unable to recover it. 
00:26:58.546 [2024-07-15 22:43:22.395852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.546 [2024-07-15 22:43:22.395862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.546 qpair failed and we were unable to recover it. 00:26:58.546 [2024-07-15 22:43:22.396015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.546 [2024-07-15 22:43:22.396025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.546 qpair failed and we were unable to recover it. 00:26:58.546 [2024-07-15 22:43:22.396136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.546 [2024-07-15 22:43:22.396146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.546 qpair failed and we were unable to recover it. 00:26:58.546 [2024-07-15 22:43:22.396336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.546 [2024-07-15 22:43:22.396346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.546 qpair failed and we were unable to recover it. 00:26:58.546 [2024-07-15 22:43:22.396484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.546 [2024-07-15 22:43:22.396494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.546 qpair failed and we were unable to recover it. 00:26:58.546 [2024-07-15 22:43:22.396671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.546 [2024-07-15 22:43:22.396681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.546 qpair failed and we were unable to recover it. 00:26:58.546 [2024-07-15 22:43:22.396954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.546 [2024-07-15 22:43:22.396965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.546 qpair failed and we were unable to recover it. 00:26:58.546 [2024-07-15 22:43:22.397106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.546 [2024-07-15 22:43:22.397117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.546 qpair failed and we were unable to recover it. 00:26:58.546 [2024-07-15 22:43:22.397262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.546 [2024-07-15 22:43:22.397272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.546 qpair failed and we were unable to recover it. 00:26:58.546 [2024-07-15 22:43:22.397452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.546 [2024-07-15 22:43:22.397463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.546 qpair failed and we were unable to recover it. 
00:26:58.546 [2024-07-15 22:43:22.397658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.546 [2024-07-15 22:43:22.397670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.546 qpair failed and we were unable to recover it. 00:26:58.546 [2024-07-15 22:43:22.397797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.546 [2024-07-15 22:43:22.397807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.546 qpair failed and we were unable to recover it. 00:26:58.546 [2024-07-15 22:43:22.398068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.546 [2024-07-15 22:43:22.398079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.546 qpair failed and we were unable to recover it. 00:26:58.546 [2024-07-15 22:43:22.398276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.547 [2024-07-15 22:43:22.398287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.547 qpair failed and we were unable to recover it. 00:26:58.547 [2024-07-15 22:43:22.398478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.547 [2024-07-15 22:43:22.398488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.547 qpair failed and we were unable to recover it. 00:26:58.547 [2024-07-15 22:43:22.398683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.547 [2024-07-15 22:43:22.398694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.547 qpair failed and we were unable to recover it. 00:26:58.547 [2024-07-15 22:43:22.398851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.547 [2024-07-15 22:43:22.398864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.547 qpair failed and we were unable to recover it. 00:26:58.547 [2024-07-15 22:43:22.398980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.547 [2024-07-15 22:43:22.398991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.547 qpair failed and we were unable to recover it. 00:26:58.547 [2024-07-15 22:43:22.399119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.547 [2024-07-15 22:43:22.399130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.547 qpair failed and we were unable to recover it. 00:26:58.547 [2024-07-15 22:43:22.399222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.547 [2024-07-15 22:43:22.399236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.547 qpair failed and we were unable to recover it. 
00:26:58.547 [2024-07-15 22:43:22.399425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.547 [2024-07-15 22:43:22.399435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.547 qpair failed and we were unable to recover it. 00:26:58.547 [2024-07-15 22:43:22.399610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.547 [2024-07-15 22:43:22.399620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.547 qpair failed and we were unable to recover it. 00:26:58.547 [2024-07-15 22:43:22.399804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.547 [2024-07-15 22:43:22.399815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.547 qpair failed and we were unable to recover it. 00:26:58.547 [2024-07-15 22:43:22.400082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.547 [2024-07-15 22:43:22.400092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.547 qpair failed and we were unable to recover it. 00:26:58.547 [2024-07-15 22:43:22.400307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.547 [2024-07-15 22:43:22.400318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.547 qpair failed and we were unable to recover it. 00:26:58.547 [2024-07-15 22:43:22.400438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.547 [2024-07-15 22:43:22.400449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.547 qpair failed and we were unable to recover it. 00:26:58.547 [2024-07-15 22:43:22.400522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.547 [2024-07-15 22:43:22.400531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.547 qpair failed and we were unable to recover it. 00:26:58.547 [2024-07-15 22:43:22.400652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.547 [2024-07-15 22:43:22.400663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.547 qpair failed and we were unable to recover it. 00:26:58.547 [2024-07-15 22:43:22.400859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.547 [2024-07-15 22:43:22.400870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.547 qpair failed and we were unable to recover it. 
00:26:58.547 [2024-07-15 22:43:22.400989] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4
00:26:58.547 [2024-07-15 22:43:22.401008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.547 [2024-07-15 22:43:22.401020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:58.547 qpair failed and we were unable to recover it.
00:26:58.547 [2024-07-15 22:43:22.401209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.547 [2024-07-15 22:43:22.401219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:58.547 qpair failed and we were unable to recover it.
00:26:58.547 [2024-07-15 22:43:22.401350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.547 [2024-07-15 22:43:22.401361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:58.547 qpair failed and we were unable to recover it.
00:26:58.547 [2024-07-15 22:43:22.401503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.548 [2024-07-15 22:43:22.401513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:58.548 qpair failed and we were unable to recover it.
00:26:58.548 [2024-07-15 22:43:22.401625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.548 [2024-07-15 22:43:22.401635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:58.548 qpair failed and we were unable to recover it.
00:26:58.548 [2024-07-15 22:43:22.401818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.548 [2024-07-15 22:43:22.401829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:58.548 qpair failed and we were unable to recover it.
00:26:58.548 [2024-07-15 22:43:22.402002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.548 [2024-07-15 22:43:22.402012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:58.548 qpair failed and we were unable to recover it.
00:26:58.548 [2024-07-15 22:43:22.402258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.548 [2024-07-15 22:43:22.402268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:58.548 qpair failed and we were unable to recover it.
00:26:58.548 [2024-07-15 22:43:22.402391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.548 [2024-07-15 22:43:22.402401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:58.548 qpair failed and we were unable to recover it.
00:26:58.554 [2024-07-15 22:43:22.433157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.554 [2024-07-15 22:43:22.433167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.554 qpair failed and we were unable to recover it. 00:26:58.554 [2024-07-15 22:43:22.433383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.554 [2024-07-15 22:43:22.433394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.554 qpair failed and we were unable to recover it. 00:26:58.554 [2024-07-15 22:43:22.433615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.554 [2024-07-15 22:43:22.433625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.554 qpair failed and we were unable to recover it. 00:26:58.554 [2024-07-15 22:43:22.433878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.554 [2024-07-15 22:43:22.433889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.554 qpair failed and we were unable to recover it. 00:26:58.554 [2024-07-15 22:43:22.434129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.554 [2024-07-15 22:43:22.434140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.554 qpair failed and we were unable to recover it. 00:26:58.554 [2024-07-15 22:43:22.434274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.554 [2024-07-15 22:43:22.434285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.554 qpair failed and we were unable to recover it. 00:26:58.554 [2024-07-15 22:43:22.434482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.554 [2024-07-15 22:43:22.434493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.554 qpair failed and we were unable to recover it. 00:26:58.554 [2024-07-15 22:43:22.434613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.554 [2024-07-15 22:43:22.434623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.554 qpair failed and we were unable to recover it. 00:26:58.554 [2024-07-15 22:43:22.434837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.554 [2024-07-15 22:43:22.434866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:58.554 qpair failed and we were unable to recover it. 00:26:58.554 [2024-07-15 22:43:22.435005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.554 [2024-07-15 22:43:22.435020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:58.554 qpair failed and we were unable to recover it. 
00:26:58.554 [2024-07-15 22:43:22.435319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.554 [2024-07-15 22:43:22.435335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:58.554 qpair failed and we were unable to recover it. 00:26:58.554 [2024-07-15 22:43:22.435541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.554 [2024-07-15 22:43:22.435555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:58.554 qpair failed and we were unable to recover it. 00:26:58.554 [2024-07-15 22:43:22.435773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.554 [2024-07-15 22:43:22.435787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:58.554 qpair failed and we were unable to recover it. 00:26:58.554 [2024-07-15 22:43:22.435932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.554 [2024-07-15 22:43:22.435946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:58.554 qpair failed and we were unable to recover it. 00:26:58.554 [2024-07-15 22:43:22.436149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.554 [2024-07-15 22:43:22.436163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:58.554 qpair failed and we were unable to recover it. 00:26:58.554 [2024-07-15 22:43:22.436431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.554 [2024-07-15 22:43:22.436446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:58.554 qpair failed and we were unable to recover it. 00:26:58.554 [2024-07-15 22:43:22.436730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.554 [2024-07-15 22:43:22.436744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:58.554 qpair failed and we were unable to recover it. 00:26:58.554 [2024-07-15 22:43:22.436944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.554 [2024-07-15 22:43:22.436960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:58.554 qpair failed and we were unable to recover it. 00:26:58.554 [2024-07-15 22:43:22.437258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.554 [2024-07-15 22:43:22.437275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:58.554 qpair failed and we were unable to recover it. 00:26:58.554 [2024-07-15 22:43:22.437469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.554 [2024-07-15 22:43:22.437487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:58.554 qpair failed and we were unable to recover it. 
00:26:58.554 [2024-07-15 22:43:22.437676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.554 [2024-07-15 22:43:22.437693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:58.554 qpair failed and we were unable to recover it. 00:26:58.554 [2024-07-15 22:43:22.437919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.555 [2024-07-15 22:43:22.437936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:58.555 qpair failed and we were unable to recover it. 00:26:58.555 [2024-07-15 22:43:22.438139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.555 [2024-07-15 22:43:22.438155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:58.555 qpair failed and we were unable to recover it. 00:26:58.555 [2024-07-15 22:43:22.438273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.555 [2024-07-15 22:43:22.438290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:58.555 qpair failed and we were unable to recover it. 00:26:58.555 [2024-07-15 22:43:22.438444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.555 [2024-07-15 22:43:22.438460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:58.555 qpair failed and we were unable to recover it. 00:26:58.555 [2024-07-15 22:43:22.438643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.555 [2024-07-15 22:43:22.438660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:58.555 qpair failed and we were unable to recover it. 00:26:58.555 [2024-07-15 22:43:22.438863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.555 [2024-07-15 22:43:22.438878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:58.555 qpair failed and we were unable to recover it. 00:26:58.555 [2024-07-15 22:43:22.439101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.555 [2024-07-15 22:43:22.439117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:58.555 qpair failed and we were unable to recover it. 00:26:58.555 [2024-07-15 22:43:22.439305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.555 [2024-07-15 22:43:22.439322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:58.555 qpair failed and we were unable to recover it. 00:26:58.555 [2024-07-15 22:43:22.439511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.555 [2024-07-15 22:43:22.439527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:58.555 qpair failed and we were unable to recover it. 
00:26:58.555 [2024-07-15 22:43:22.439725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.555 [2024-07-15 22:43:22.439741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:58.555 qpair failed and we were unable to recover it. 00:26:58.555 [2024-07-15 22:43:22.439939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.555 [2024-07-15 22:43:22.439954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:58.555 qpair failed and we were unable to recover it. 00:26:58.555 [2024-07-15 22:43:22.440233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.555 [2024-07-15 22:43:22.440249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:58.555 qpair failed and we were unable to recover it. 00:26:58.555 [2024-07-15 22:43:22.440547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.555 [2024-07-15 22:43:22.440565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:58.555 qpair failed and we were unable to recover it. 00:26:58.555 [2024-07-15 22:43:22.440703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.555 [2024-07-15 22:43:22.440720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:58.555 qpair failed and we were unable to recover it. 00:26:58.555 [2024-07-15 22:43:22.440860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.555 [2024-07-15 22:43:22.440879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:58.555 qpair failed and we were unable to recover it. 00:26:58.555 [2024-07-15 22:43:22.441070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.555 [2024-07-15 22:43:22.441086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:58.555 qpair failed and we were unable to recover it. 00:26:58.555 [2024-07-15 22:43:22.441223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.555 [2024-07-15 22:43:22.441243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:58.555 qpair failed and we were unable to recover it. 00:26:58.555 [2024-07-15 22:43:22.441441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.555 [2024-07-15 22:43:22.441457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:58.555 qpair failed and we were unable to recover it. 00:26:58.555 [2024-07-15 22:43:22.441643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.555 [2024-07-15 22:43:22.441659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:58.555 qpair failed and we were unable to recover it. 
00:26:58.555 [2024-07-15 22:43:22.441791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.555 [2024-07-15 22:43:22.441805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:58.555 qpair failed and we were unable to recover it. 00:26:58.555 [2024-07-15 22:43:22.442006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.555 [2024-07-15 22:43:22.442023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:58.555 qpair failed and we were unable to recover it. 00:26:58.555 [2024-07-15 22:43:22.442207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.555 [2024-07-15 22:43:22.442223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:58.555 qpair failed and we were unable to recover it. 00:26:58.555 [2024-07-15 22:43:22.442346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.555 [2024-07-15 22:43:22.442361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:58.555 qpair failed and we were unable to recover it. 00:26:58.555 [2024-07-15 22:43:22.442486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.555 [2024-07-15 22:43:22.442501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:58.555 qpair failed and we were unable to recover it. 00:26:58.555 [2024-07-15 22:43:22.442652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.555 [2024-07-15 22:43:22.442668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:58.555 qpair failed and we were unable to recover it. 00:26:58.555 [2024-07-15 22:43:22.442796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.555 [2024-07-15 22:43:22.442812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:58.555 qpair failed and we were unable to recover it. 00:26:58.555 [2024-07-15 22:43:22.443005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.555 [2024-07-15 22:43:22.443021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:58.556 qpair failed and we were unable to recover it. 00:26:58.556 [2024-07-15 22:43:22.443215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.556 [2024-07-15 22:43:22.443236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:58.556 qpair failed and we were unable to recover it. 00:26:58.556 [2024-07-15 22:43:22.443432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.556 [2024-07-15 22:43:22.443450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:58.556 qpair failed and we were unable to recover it. 
00:26:58.556 [2024-07-15 22:43:22.443667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.556 [2024-07-15 22:43:22.443683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:58.556 qpair failed and we were unable to recover it. 00:26:58.556 [2024-07-15 22:43:22.443888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.556 [2024-07-15 22:43:22.443903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:58.556 qpair failed and we were unable to recover it. 00:26:58.556 [2024-07-15 22:43:22.444085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.556 [2024-07-15 22:43:22.444101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:58.556 qpair failed and we were unable to recover it. 00:26:58.556 [2024-07-15 22:43:22.444296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.556 [2024-07-15 22:43:22.444313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:58.556 qpair failed and we were unable to recover it. 00:26:58.556 [2024-07-15 22:43:22.444528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.556 [2024-07-15 22:43:22.444545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:58.556 qpair failed and we were unable to recover it. 00:26:58.556 [2024-07-15 22:43:22.444739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.556 [2024-07-15 22:43:22.444754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:58.556 qpair failed and we were unable to recover it. 00:26:58.556 [2024-07-15 22:43:22.444903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.556 [2024-07-15 22:43:22.444919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:58.556 qpair failed and we were unable to recover it. 00:26:58.556 [2024-07-15 22:43:22.445114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.556 [2024-07-15 22:43:22.445130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:58.556 qpair failed and we were unable to recover it. 00:26:58.556 [2024-07-15 22:43:22.445409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.556 [2024-07-15 22:43:22.445425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:58.556 qpair failed and we were unable to recover it. 00:26:58.556 [2024-07-15 22:43:22.445614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.556 [2024-07-15 22:43:22.445630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:58.556 qpair failed and we were unable to recover it. 
00:26:58.556 [2024-07-15 22:43:22.445926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.556 [2024-07-15 22:43:22.445941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:58.556 qpair failed and we were unable to recover it. 00:26:58.556 [2024-07-15 22:43:22.446142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.556 [2024-07-15 22:43:22.446158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:58.556 qpair failed and we were unable to recover it. 00:26:58.556 [2024-07-15 22:43:22.446358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.556 [2024-07-15 22:43:22.446377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:58.556 qpair failed and we were unable to recover it. 00:26:58.556 [2024-07-15 22:43:22.446581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.556 [2024-07-15 22:43:22.446596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:58.556 qpair failed and we were unable to recover it. 00:26:58.556 [2024-07-15 22:43:22.446811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.556 [2024-07-15 22:43:22.446825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:58.556 qpair failed and we were unable to recover it. 00:26:58.556 [2024-07-15 22:43:22.446954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.556 [2024-07-15 22:43:22.446968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:58.556 qpair failed and we were unable to recover it. 00:26:58.556 [2024-07-15 22:43:22.447170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.556 [2024-07-15 22:43:22.447185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:58.556 qpair failed and we were unable to recover it. 00:26:58.556 [2024-07-15 22:43:22.447303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.556 [2024-07-15 22:43:22.447318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:58.556 qpair failed and we were unable to recover it. 00:26:58.556 [2024-07-15 22:43:22.447515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.556 [2024-07-15 22:43:22.447529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:58.556 qpair failed and we were unable to recover it. 00:26:58.556 [2024-07-15 22:43:22.447660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.556 [2024-07-15 22:43:22.447674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:58.556 qpair failed and we were unable to recover it. 
00:26:58.556 [2024-07-15 22:43:22.447857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.556 [2024-07-15 22:43:22.447871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:58.556 qpair failed and we were unable to recover it. 00:26:58.556 [2024-07-15 22:43:22.448065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.556 [2024-07-15 22:43:22.448079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:58.556 qpair failed and we were unable to recover it. 00:26:58.556 [2024-07-15 22:43:22.448269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.556 [2024-07-15 22:43:22.448284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:58.556 qpair failed and we were unable to recover it. 00:26:58.556 [2024-07-15 22:43:22.448482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.556 [2024-07-15 22:43:22.448496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:58.556 qpair failed and we were unable to recover it. 00:26:58.556 [2024-07-15 22:43:22.448744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.556 [2024-07-15 22:43:22.448758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:58.557 qpair failed and we were unable to recover it. 00:26:58.557 [2024-07-15 22:43:22.448940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.557 [2024-07-15 22:43:22.448955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:58.557 qpair failed and we were unable to recover it. 00:26:58.557 [2024-07-15 22:43:22.449209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.557 [2024-07-15 22:43:22.449223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:58.557 qpair failed and we were unable to recover it. 00:26:58.557 [2024-07-15 22:43:22.449312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.557 [2024-07-15 22:43:22.449326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:58.557 qpair failed and we were unable to recover it. 00:26:58.557 [2024-07-15 22:43:22.449577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.557 [2024-07-15 22:43:22.449592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:58.557 qpair failed and we were unable to recover it. 00:26:58.557 [2024-07-15 22:43:22.449903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.557 [2024-07-15 22:43:22.449917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:58.557 qpair failed and we were unable to recover it. 
00:26:58.557 [2024-07-15 22:43:22.450060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.557 [2024-07-15 22:43:22.450073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:58.557 qpair failed and we were unable to recover it. 00:26:58.557 [2024-07-15 22:43:22.450195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.557 [2024-07-15 22:43:22.450209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:58.557 qpair failed and we were unable to recover it. 00:26:58.557 [2024-07-15 22:43:22.450406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.557 [2024-07-15 22:43:22.450420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:58.557 qpair failed and we were unable to recover it. 00:26:58.557 [2024-07-15 22:43:22.450670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.557 [2024-07-15 22:43:22.450684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:58.557 qpair failed and we were unable to recover it. 00:26:58.557 [2024-07-15 22:43:22.454488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.557 [2024-07-15 22:43:22.454505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:58.557 qpair failed and we were unable to recover it. 00:26:58.557 [2024-07-15 22:43:22.454750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.557 [2024-07-15 22:43:22.454763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:58.557 qpair failed and we were unable to recover it. 00:26:58.557 [2024-07-15 22:43:22.454915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.557 [2024-07-15 22:43:22.454928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:58.557 qpair failed and we were unable to recover it. 00:26:58.557 [2024-07-15 22:43:22.455069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.557 [2024-07-15 22:43:22.455083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:58.557 qpair failed and we were unable to recover it. 00:26:58.557 [2024-07-15 22:43:22.455310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.557 [2024-07-15 22:43:22.455324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:58.557 qpair failed and we were unable to recover it. 00:26:58.557 [2024-07-15 22:43:22.455593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.557 [2024-07-15 22:43:22.455606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:58.557 qpair failed and we were unable to recover it. 
00:26:58.557 [2024-07-15 22:43:22.455757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.557 [2024-07-15 22:43:22.455771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:58.557 qpair failed and we were unable to recover it. 00:26:58.557 [2024-07-15 22:43:22.455975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.557 [2024-07-15 22:43:22.455989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:58.557 qpair failed and we were unable to recover it. 00:26:58.557 [2024-07-15 22:43:22.456112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.557 [2024-07-15 22:43:22.456126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:58.557 qpair failed and we were unable to recover it. 00:26:58.557 [2024-07-15 22:43:22.456279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.557 [2024-07-15 22:43:22.456293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:58.557 qpair failed and we were unable to recover it. 00:26:58.557 [2024-07-15 22:43:22.456582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.557 [2024-07-15 22:43:22.456596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:58.557 qpair failed and we were unable to recover it. 00:26:58.557 [2024-07-15 22:43:22.456779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.557 [2024-07-15 22:43:22.456793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:58.557 qpair failed and we were unable to recover it. 00:26:58.557 [2024-07-15 22:43:22.456943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.557 [2024-07-15 22:43:22.456957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:58.557 qpair failed and we were unable to recover it. 00:26:58.557 [2024-07-15 22:43:22.457154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.557 [2024-07-15 22:43:22.457168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:58.557 qpair failed and we were unable to recover it. 00:26:58.557 [2024-07-15 22:43:22.457349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.557 [2024-07-15 22:43:22.457363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:58.557 qpair failed and we were unable to recover it. 00:26:58.557 [2024-07-15 22:43:22.457590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.557 [2024-07-15 22:43:22.457604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:58.557 qpair failed and we were unable to recover it. 
00:26:58.557 [2024-07-15 22:43:22.457748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.557 [2024-07-15 22:43:22.457762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:58.557 qpair failed and we were unable to recover it. 00:26:58.558 [2024-07-15 22:43:22.457978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.558 [2024-07-15 22:43:22.457992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:58.558 qpair failed and we were unable to recover it. 00:26:58.558 [2024-07-15 22:43:22.458198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.558 [2024-07-15 22:43:22.458215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:58.558 qpair failed and we were unable to recover it. 00:26:58.558 [2024-07-15 22:43:22.458458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.558 [2024-07-15 22:43:22.458491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.558 qpair failed and we were unable to recover it. 00:26:58.558 [2024-07-15 22:43:22.458667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.558 [2024-07-15 22:43:22.458696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.558 qpair failed and we were unable to recover it. 00:26:58.558 [2024-07-15 22:43:22.458855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.558 [2024-07-15 22:43:22.458866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.558 qpair failed and we were unable to recover it. 00:26:58.558 [2024-07-15 22:43:22.459105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.558 [2024-07-15 22:43:22.459116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.558 qpair failed and we were unable to recover it. 00:26:58.558 [2024-07-15 22:43:22.459364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.558 [2024-07-15 22:43:22.459374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.558 qpair failed and we were unable to recover it. 00:26:58.558 [2024-07-15 22:43:22.459453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.558 [2024-07-15 22:43:22.459463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.558 qpair failed and we were unable to recover it. 00:26:58.558 [2024-07-15 22:43:22.459641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.558 [2024-07-15 22:43:22.459651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.558 qpair failed and we were unable to recover it. 
00:26:58.558 [2024-07-15 22:43:22.459745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.558 [2024-07-15 22:43:22.459754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.558 qpair failed and we were unable to recover it. 00:26:58.558 [2024-07-15 22:43:22.459893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.558 [2024-07-15 22:43:22.459903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.558 qpair failed and we were unable to recover it. 00:26:58.558 [2024-07-15 22:43:22.460089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.558 [2024-07-15 22:43:22.460099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.558 qpair failed and we were unable to recover it. 00:26:58.558 [2024-07-15 22:43:22.460299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.558 [2024-07-15 22:43:22.460310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.558 qpair failed and we were unable to recover it. 00:26:58.558 [2024-07-15 22:43:22.460494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.558 [2024-07-15 22:43:22.460504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.558 qpair failed and we were unable to recover it. 00:26:58.558 [2024-07-15 22:43:22.460702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.558 [2024-07-15 22:43:22.460712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.558 qpair failed and we were unable to recover it. 00:26:58.558 [2024-07-15 22:43:22.460914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.558 [2024-07-15 22:43:22.460927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.558 qpair failed and we were unable to recover it. 00:26:58.558 [2024-07-15 22:43:22.461211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.558 [2024-07-15 22:43:22.461221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.558 qpair failed and we were unable to recover it. 00:26:58.558 [2024-07-15 22:43:22.461385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.558 [2024-07-15 22:43:22.461395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.558 qpair failed and we were unable to recover it. 00:26:58.855 [2024-07-15 22:43:22.461624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.855 [2024-07-15 22:43:22.461636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.855 qpair failed and we were unable to recover it. 
00:26:58.855 [2024-07-15 22:43:22.461774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.855 [2024-07-15 22:43:22.461785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.855 qpair failed and we were unable to recover it. 00:26:58.855 [2024-07-15 22:43:22.462053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.855 [2024-07-15 22:43:22.462064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.855 qpair failed and we were unable to recover it. 00:26:58.855 [2024-07-15 22:43:22.462188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.855 [2024-07-15 22:43:22.462198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.855 qpair failed and we were unable to recover it. 00:26:58.855 [2024-07-15 22:43:22.462400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.855 [2024-07-15 22:43:22.462411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.855 qpair failed and we were unable to recover it. 00:26:58.855 [2024-07-15 22:43:22.462602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.855 [2024-07-15 22:43:22.462612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.855 qpair failed and we were unable to recover it. 00:26:58.855 [2024-07-15 22:43:22.462903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.855 [2024-07-15 22:43:22.462913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.855 qpair failed and we were unable to recover it. 00:26:58.855 [2024-07-15 22:43:22.463053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.855 [2024-07-15 22:43:22.463064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.855 qpair failed and we were unable to recover it. 00:26:58.855 [2024-07-15 22:43:22.463229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.855 [2024-07-15 22:43:22.463239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.855 qpair failed and we were unable to recover it. 00:26:58.855 [2024-07-15 22:43:22.463354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.855 [2024-07-15 22:43:22.463364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.855 qpair failed and we were unable to recover it. 00:26:58.855 [2024-07-15 22:43:22.463500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.855 [2024-07-15 22:43:22.463511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.855 qpair failed and we were unable to recover it. 
00:26:58.855 [2024-07-15 22:43:22.463713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.855 [2024-07-15 22:43:22.463723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.855 qpair failed and we were unable to recover it. 00:26:58.855 [2024-07-15 22:43:22.463860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.855 [2024-07-15 22:43:22.463870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.855 qpair failed and we were unable to recover it. 00:26:58.855 [2024-07-15 22:43:22.463996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.855 [2024-07-15 22:43:22.464006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.855 qpair failed and we were unable to recover it. 00:26:58.855 [2024-07-15 22:43:22.464135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.855 [2024-07-15 22:43:22.464145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.855 qpair failed and we were unable to recover it. 00:26:58.855 [2024-07-15 22:43:22.464296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.855 [2024-07-15 22:43:22.464307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.855 qpair failed and we were unable to recover it. 00:26:58.855 [2024-07-15 22:43:22.464450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.855 [2024-07-15 22:43:22.464460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.855 qpair failed and we were unable to recover it. 00:26:58.855 [2024-07-15 22:43:22.464577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.855 [2024-07-15 22:43:22.464588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.855 qpair failed and we were unable to recover it. 00:26:58.855 [2024-07-15 22:43:22.464707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.855 [2024-07-15 22:43:22.464717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.855 qpair failed and we were unable to recover it. 00:26:58.855 [2024-07-15 22:43:22.464849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.855 [2024-07-15 22:43:22.464860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.855 qpair failed and we were unable to recover it. 00:26:58.855 [2024-07-15 22:43:22.465059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.855 [2024-07-15 22:43:22.465069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.855 qpair failed and we were unable to recover it. 
00:26:58.855 [2024-07-15 22:43:22.465257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.855 [2024-07-15 22:43:22.465268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:58.855 qpair failed and we were unable to recover it.
[... same connect() errno=111 / sock-connection-error / "qpair failed" triplet repeats for tqpair=0x7f4288000b90, timestamps 22:43:22.465366 through 22:43:22.474951 ...]
00:26:58.857 [2024-07-15 22:43:22.475073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.857 [2024-07-15 22:43:22.475084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:58.857 qpair failed and we were unable to recover it.
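errno = 111 is ECONNREFUSED on Linux: each TCP connect() to 10.0.0.2:4420 (the standard NVMe-oF/TCP port) is being actively refused, which usually means nothing is listening on the target side yet rather than a routing or firewall problem. A minimal sketch for probing the listener from the initiator host, assuming ordinary Linux tooling; these commands are illustrative and not part of the test scripts:

  $ ss -ltn 'sport = :4420'                      # is anything listening on the NVMe/TCP port?
  $ timeout 1 bash -c '</dev/tcp/10.0.0.2/4420' && echo listening || echo "refused/unreachable (rc=$?)"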
00:26:58.857 [2024-07-15 22:43:22.475276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.857 [2024-07-15 22:43:22.475288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:58.857 qpair failed and we were unable to recover it.
00:26:58.857 [2024-07-15 22:43:22.475493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.857 [2024-07-15 22:43:22.475503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:58.857 qpair failed and we were unable to recover it.
00:26:58.857 [2024-07-15 22:43:22.475636] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:26:58.857 [2024-07-15 22:43:22.475664] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:26:58.857 [2024-07-15 22:43:22.475671] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:26:58.857 [2024-07-15 22:43:22.475677] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:26:58.857 [2024-07-15 22:43:22.475683] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:26:58.857 [2024-07-15 22:43:22.475677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.857 [2024-07-15 22:43:22.475688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:58.857 qpair failed and we were unable to recover it.
00:26:58.857 [2024-07-15 22:43:22.475802] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5
00:26:58.857 [2024-07-15 22:43:22.475873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.857 [2024-07-15 22:43:22.475884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:58.857 qpair failed and we were unable to recover it.
00:26:58.857 [2024-07-15 22:43:22.475910] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6
00:26:58.857 [2024-07-15 22:43:22.476022] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4
00:26:58.857 [2024-07-15 22:43:22.476023] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7
00:26:58.857 [2024-07-15 22:43:22.476086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.857 [2024-07-15 22:43:22.476098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:58.857 qpair failed and we were unable to recover it.
00:26:58.857 [2024-07-15 22:43:22.476273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.857 [2024-07-15 22:43:22.476284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:58.857 qpair failed and we were unable to recover it.
00:26:58.857 [2024-07-15 22:43:22.476407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.857 [2024-07-15 22:43:22.476417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:58.857 qpair failed and we were unable to recover it.
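The app.c NOTICE records above show the nvmf target starting with tracing enabled (tracepoint group mask 0xFFFF) and describe two ways to pull its trace buffer, while the reactor.c records show its event loops starting on cores 4-7. A sketch of both retrieval paths, reusing the shm id and path printed in the log; the -f flag for parsing a copied buffer offline is assumed from SPDK's tracing documentation:

  $ spdk_trace -s nvmf -i 0 > nvmf_trace.txt     # live snapshot while the target is still running
  $ cp /dev/shm/nvmf_trace.0 /tmp/ && spdk_trace -f /tmp/nvmf_trace.0   # offline analysis of the copied buffer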
00:26:58.857 [2024-07-15 22:43:22.476596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.857 [2024-07-15 22:43:22.476607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:58.857 qpair failed and we were unable to recover it.
[... same connect() errno=111 / sock-connection-error / "qpair failed" triplet repeats for tqpair=0x7f4288000b90, timestamps 22:43:22.476874 through 22:43:22.501103 ...]
00:26:58.860 [2024-07-15 22:43:22.501284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.860 [2024-07-15 22:43:22.501295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:58.860 qpair failed and we were unable to recover it.
00:26:58.860 [2024-07-15 22:43:22.501481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.860 [2024-07-15 22:43:22.501491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.860 qpair failed and we were unable to recover it. 00:26:58.860 [2024-07-15 22:43:22.501647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.860 [2024-07-15 22:43:22.501657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.860 qpair failed and we were unable to recover it. 00:26:58.860 [2024-07-15 22:43:22.501852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.860 [2024-07-15 22:43:22.501863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.860 qpair failed and we were unable to recover it. 00:26:58.860 [2024-07-15 22:43:22.502076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.860 [2024-07-15 22:43:22.502106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:58.860 qpair failed and we were unable to recover it. 00:26:58.860 [2024-07-15 22:43:22.502359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.860 [2024-07-15 22:43:22.502376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:58.860 qpair failed and we were unable to recover it. 00:26:58.860 [2024-07-15 22:43:22.502696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.860 [2024-07-15 22:43:22.502711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:58.860 qpair failed and we were unable to recover it. 00:26:58.860 [2024-07-15 22:43:22.502895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.860 [2024-07-15 22:43:22.502908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:58.860 qpair failed and we were unable to recover it. 00:26:58.860 [2024-07-15 22:43:22.503180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.860 [2024-07-15 22:43:22.503194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:58.860 qpair failed and we were unable to recover it. 00:26:58.860 [2024-07-15 22:43:22.503449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.860 [2024-07-15 22:43:22.503464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:58.860 qpair failed and we were unable to recover it. 00:26:58.860 [2024-07-15 22:43:22.503734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.860 [2024-07-15 22:43:22.503747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:58.860 qpair failed and we were unable to recover it. 
00:26:58.860 [2024-07-15 22:43:22.503881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.860 [2024-07-15 22:43:22.503895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:58.860 qpair failed and we were unable to recover it. 00:26:58.861 [2024-07-15 22:43:22.504172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.861 [2024-07-15 22:43:22.504186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:58.861 qpair failed and we were unable to recover it. 00:26:58.861 [2024-07-15 22:43:22.504327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.861 [2024-07-15 22:43:22.504341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:58.861 qpair failed and we were unable to recover it. 00:26:58.861 [2024-07-15 22:43:22.504539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.861 [2024-07-15 22:43:22.504553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:58.861 qpair failed and we were unable to recover it. 00:26:58.861 [2024-07-15 22:43:22.504835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.861 [2024-07-15 22:43:22.504849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:58.861 qpair failed and we were unable to recover it. 00:26:58.861 [2024-07-15 22:43:22.505023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.861 [2024-07-15 22:43:22.505037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:58.861 qpair failed and we were unable to recover it. 00:26:58.861 [2024-07-15 22:43:22.505312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.861 [2024-07-15 22:43:22.505327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:58.861 qpair failed and we were unable to recover it. 00:26:58.861 [2024-07-15 22:43:22.505635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.861 [2024-07-15 22:43:22.505653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:58.861 qpair failed and we were unable to recover it. 00:26:58.861 [2024-07-15 22:43:22.505790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.861 [2024-07-15 22:43:22.505805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:58.861 qpair failed and we were unable to recover it. 00:26:58.861 [2024-07-15 22:43:22.505949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.861 [2024-07-15 22:43:22.505963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:58.861 qpair failed and we were unable to recover it. 
00:26:58.861 [2024-07-15 22:43:22.506192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.861 [2024-07-15 22:43:22.506207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:58.861 qpair failed and we were unable to recover it. 00:26:58.861 [2024-07-15 22:43:22.506399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.861 [2024-07-15 22:43:22.506414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:58.861 qpair failed and we were unable to recover it. 00:26:58.861 [2024-07-15 22:43:22.506634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.861 [2024-07-15 22:43:22.506649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:58.861 qpair failed and we were unable to recover it. 00:26:58.861 [2024-07-15 22:43:22.506935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.861 [2024-07-15 22:43:22.506950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:58.861 qpair failed and we were unable to recover it. 00:26:58.861 [2024-07-15 22:43:22.507100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.861 [2024-07-15 22:43:22.507115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:58.861 qpair failed and we were unable to recover it. 00:26:58.861 [2024-07-15 22:43:22.507308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.861 [2024-07-15 22:43:22.507323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:58.861 qpair failed and we were unable to recover it. 00:26:58.861 [2024-07-15 22:43:22.507469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.861 [2024-07-15 22:43:22.507485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:58.861 qpair failed and we were unable to recover it. 00:26:58.861 [2024-07-15 22:43:22.507614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.861 [2024-07-15 22:43:22.507629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:58.861 qpair failed and we were unable to recover it. 00:26:58.861 [2024-07-15 22:43:22.507760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.861 [2024-07-15 22:43:22.507777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:58.861 qpair failed and we were unable to recover it. 00:26:58.861 [2024-07-15 22:43:22.507969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.861 [2024-07-15 22:43:22.507987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:58.861 qpair failed and we were unable to recover it. 
00:26:58.861 [2024-07-15 22:43:22.508193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.861 [2024-07-15 22:43:22.508215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:58.861 qpair failed and we were unable to recover it. 00:26:58.861 [2024-07-15 22:43:22.508394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.861 [2024-07-15 22:43:22.508413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:58.861 qpair failed and we were unable to recover it. 00:26:58.861 [2024-07-15 22:43:22.508604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.861 [2024-07-15 22:43:22.508623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:58.861 qpair failed and we were unable to recover it. 00:26:58.861 [2024-07-15 22:43:22.508886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.861 [2024-07-15 22:43:22.508910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.861 qpair failed and we were unable to recover it. 00:26:58.861 [2024-07-15 22:43:22.509034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.861 [2024-07-15 22:43:22.509047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.861 qpair failed and we were unable to recover it. 00:26:58.861 [2024-07-15 22:43:22.509179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.861 [2024-07-15 22:43:22.509191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.861 qpair failed and we were unable to recover it. 00:26:58.861 [2024-07-15 22:43:22.509384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.861 [2024-07-15 22:43:22.509395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.861 qpair failed and we were unable to recover it. 00:26:58.861 [2024-07-15 22:43:22.509581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.861 [2024-07-15 22:43:22.509592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.861 qpair failed and we were unable to recover it. 00:26:58.861 [2024-07-15 22:43:22.509810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.861 [2024-07-15 22:43:22.509820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.861 qpair failed and we were unable to recover it. 00:26:58.861 [2024-07-15 22:43:22.510025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.861 [2024-07-15 22:43:22.510034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.861 qpair failed and we were unable to recover it. 
00:26:58.861 [2024-07-15 22:43:22.510157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.861 [2024-07-15 22:43:22.510167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.861 qpair failed and we were unable to recover it. 00:26:58.861 [2024-07-15 22:43:22.510412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.861 [2024-07-15 22:43:22.510425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.861 qpair failed and we were unable to recover it. 00:26:58.861 [2024-07-15 22:43:22.510617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.861 [2024-07-15 22:43:22.510628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.861 qpair failed and we were unable to recover it. 00:26:58.861 [2024-07-15 22:43:22.510763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.861 [2024-07-15 22:43:22.510773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.861 qpair failed and we were unable to recover it. 00:26:58.861 [2024-07-15 22:43:22.510949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.861 [2024-07-15 22:43:22.510959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.861 qpair failed and we were unable to recover it. 00:26:58.861 [2024-07-15 22:43:22.511095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.861 [2024-07-15 22:43:22.511104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.861 qpair failed and we were unable to recover it. 00:26:58.861 [2024-07-15 22:43:22.511300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.861 [2024-07-15 22:43:22.511310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.861 qpair failed and we were unable to recover it. 00:26:58.861 [2024-07-15 22:43:22.511437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.861 [2024-07-15 22:43:22.511448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.861 qpair failed and we were unable to recover it. 00:26:58.861 [2024-07-15 22:43:22.511634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.861 [2024-07-15 22:43:22.511644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.861 qpair failed and we were unable to recover it. 00:26:58.861 [2024-07-15 22:43:22.511820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.861 [2024-07-15 22:43:22.511830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.861 qpair failed and we were unable to recover it. 
00:26:58.861 [2024-07-15 22:43:22.512030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.861 [2024-07-15 22:43:22.512040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.861 qpair failed and we were unable to recover it. 00:26:58.861 [2024-07-15 22:43:22.512220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.861 [2024-07-15 22:43:22.512234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.861 qpair failed and we were unable to recover it. 00:26:58.862 [2024-07-15 22:43:22.512373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.862 [2024-07-15 22:43:22.512383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.862 qpair failed and we were unable to recover it. 00:26:58.862 [2024-07-15 22:43:22.512468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.862 [2024-07-15 22:43:22.512477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.862 qpair failed and we were unable to recover it. 00:26:58.862 [2024-07-15 22:43:22.512606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.862 [2024-07-15 22:43:22.512615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.862 qpair failed and we were unable to recover it. 00:26:58.862 [2024-07-15 22:43:22.512883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.862 [2024-07-15 22:43:22.512893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.862 qpair failed and we were unable to recover it. 00:26:58.862 [2024-07-15 22:43:22.513082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.862 [2024-07-15 22:43:22.513092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.862 qpair failed and we were unable to recover it. 00:26:58.862 [2024-07-15 22:43:22.513365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.862 [2024-07-15 22:43:22.513376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.862 qpair failed and we were unable to recover it. 00:26:58.862 [2024-07-15 22:43:22.513588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.862 [2024-07-15 22:43:22.513597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.862 qpair failed and we were unable to recover it. 00:26:58.862 [2024-07-15 22:43:22.513839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.862 [2024-07-15 22:43:22.513849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.862 qpair failed and we were unable to recover it. 
00:26:58.862 [2024-07-15 22:43:22.514030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.862 [2024-07-15 22:43:22.514039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.862 qpair failed and we were unable to recover it. 00:26:58.862 [2024-07-15 22:43:22.514237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.862 [2024-07-15 22:43:22.514246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.862 qpair failed and we were unable to recover it. 00:26:58.862 [2024-07-15 22:43:22.514438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.862 [2024-07-15 22:43:22.514449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.862 qpair failed and we were unable to recover it. 00:26:58.862 [2024-07-15 22:43:22.514659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.862 [2024-07-15 22:43:22.514668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.862 qpair failed and we were unable to recover it. 00:26:58.862 [2024-07-15 22:43:22.514910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.862 [2024-07-15 22:43:22.514919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.862 qpair failed and we were unable to recover it. 00:26:58.862 [2024-07-15 22:43:22.515131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.862 [2024-07-15 22:43:22.515141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.862 qpair failed and we were unable to recover it. 00:26:58.862 [2024-07-15 22:43:22.515271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.862 [2024-07-15 22:43:22.515281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.862 qpair failed and we were unable to recover it. 00:26:58.862 [2024-07-15 22:43:22.515458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.862 [2024-07-15 22:43:22.515469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.862 qpair failed and we were unable to recover it. 00:26:58.862 [2024-07-15 22:43:22.515669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.862 [2024-07-15 22:43:22.515679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.862 qpair failed and we were unable to recover it. 00:26:58.862 [2024-07-15 22:43:22.515923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.862 [2024-07-15 22:43:22.515935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.862 qpair failed and we were unable to recover it. 
00:26:58.862 [2024-07-15 22:43:22.516154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.862 [2024-07-15 22:43:22.516170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.862 qpair failed and we were unable to recover it. 00:26:58.862 [2024-07-15 22:43:22.516361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.862 [2024-07-15 22:43:22.516373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.862 qpair failed and we were unable to recover it. 00:26:58.862 [2024-07-15 22:43:22.516620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.862 [2024-07-15 22:43:22.516631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.862 qpair failed and we were unable to recover it. 00:26:58.862 [2024-07-15 22:43:22.516806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.862 [2024-07-15 22:43:22.516816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.862 qpair failed and we were unable to recover it. 00:26:58.862 [2024-07-15 22:43:22.516951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.862 [2024-07-15 22:43:22.516961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.862 qpair failed and we were unable to recover it. 00:26:58.862 [2024-07-15 22:43:22.517174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.862 [2024-07-15 22:43:22.517186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.862 qpair failed and we were unable to recover it. 00:26:58.862 [2024-07-15 22:43:22.517318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.862 [2024-07-15 22:43:22.517330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.862 qpair failed and we were unable to recover it. 00:26:58.862 [2024-07-15 22:43:22.517509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.862 [2024-07-15 22:43:22.517521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.862 qpair failed and we were unable to recover it. 00:26:58.862 [2024-07-15 22:43:22.517698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.862 [2024-07-15 22:43:22.517708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.862 qpair failed and we were unable to recover it. 00:26:58.862 [2024-07-15 22:43:22.517802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.862 [2024-07-15 22:43:22.517813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.862 qpair failed and we were unable to recover it. 
00:26:58.862 [2024-07-15 22:43:22.518035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.862 [2024-07-15 22:43:22.518047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.862 qpair failed and we were unable to recover it. 00:26:58.862 [2024-07-15 22:43:22.518166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.862 [2024-07-15 22:43:22.518176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.862 qpair failed and we were unable to recover it. 00:26:58.862 [2024-07-15 22:43:22.518361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.862 [2024-07-15 22:43:22.518373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.862 qpair failed and we were unable to recover it. 00:26:58.862 [2024-07-15 22:43:22.518586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.862 [2024-07-15 22:43:22.518597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.862 qpair failed and we were unable to recover it. 00:26:58.862 [2024-07-15 22:43:22.518711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.862 [2024-07-15 22:43:22.518722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.862 qpair failed and we were unable to recover it. 00:26:58.862 [2024-07-15 22:43:22.518986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.862 [2024-07-15 22:43:22.518996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.862 qpair failed and we were unable to recover it. 00:26:58.862 [2024-07-15 22:43:22.519123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.862 [2024-07-15 22:43:22.519134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.862 qpair failed and we were unable to recover it. 00:26:58.862 [2024-07-15 22:43:22.519374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.862 [2024-07-15 22:43:22.519387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.862 qpair failed and we were unable to recover it. 00:26:58.862 [2024-07-15 22:43:22.519566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.862 [2024-07-15 22:43:22.519576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.862 qpair failed and we were unable to recover it. 00:26:58.862 [2024-07-15 22:43:22.519770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.862 [2024-07-15 22:43:22.519781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.862 qpair failed and we were unable to recover it. 
00:26:58.862 [2024-07-15 22:43:22.520025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.862 [2024-07-15 22:43:22.520036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.862 qpair failed and we were unable to recover it. 00:26:58.862 [2024-07-15 22:43:22.520220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.862 [2024-07-15 22:43:22.520236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.862 qpair failed and we were unable to recover it. 00:26:58.862 [2024-07-15 22:43:22.520427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.862 [2024-07-15 22:43:22.520439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.862 qpair failed and we were unable to recover it. 00:26:58.863 [2024-07-15 22:43:22.520556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.863 [2024-07-15 22:43:22.520566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.863 qpair failed and we were unable to recover it. 00:26:58.863 [2024-07-15 22:43:22.520744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.863 [2024-07-15 22:43:22.520754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.863 qpair failed and we were unable to recover it. 00:26:58.863 [2024-07-15 22:43:22.520885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.863 [2024-07-15 22:43:22.520895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.863 qpair failed and we were unable to recover it. 00:26:58.863 [2024-07-15 22:43:22.521102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.863 [2024-07-15 22:43:22.521113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.863 qpair failed and we were unable to recover it. 00:26:58.863 [2024-07-15 22:43:22.521305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.863 [2024-07-15 22:43:22.521317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.863 qpair failed and we were unable to recover it. 00:26:58.863 [2024-07-15 22:43:22.521565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.863 [2024-07-15 22:43:22.521575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.863 qpair failed and we were unable to recover it. 00:26:58.863 [2024-07-15 22:43:22.521778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.863 [2024-07-15 22:43:22.521788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.863 qpair failed and we were unable to recover it. 
00:26:58.863 [2024-07-15 22:43:22.522046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.863 [2024-07-15 22:43:22.522056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.863 qpair failed and we were unable to recover it. 00:26:58.863 [2024-07-15 22:43:22.522250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.863 [2024-07-15 22:43:22.522262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.863 qpair failed and we were unable to recover it. 00:26:58.863 [2024-07-15 22:43:22.522513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.863 [2024-07-15 22:43:22.522524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.863 qpair failed and we were unable to recover it. 00:26:58.863 [2024-07-15 22:43:22.522650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.863 [2024-07-15 22:43:22.522660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.863 qpair failed and we were unable to recover it. 00:26:58.863 [2024-07-15 22:43:22.522800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.863 [2024-07-15 22:43:22.522811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.863 qpair failed and we were unable to recover it. 00:26:58.863 [2024-07-15 22:43:22.522989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.863 [2024-07-15 22:43:22.522999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.863 qpair failed and we were unable to recover it. 00:26:58.863 [2024-07-15 22:43:22.523178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.863 [2024-07-15 22:43:22.523189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.863 qpair failed and we were unable to recover it. 00:26:58.863 [2024-07-15 22:43:22.523326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.863 [2024-07-15 22:43:22.523337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.863 qpair failed and we were unable to recover it. 00:26:58.863 [2024-07-15 22:43:22.523523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.863 [2024-07-15 22:43:22.523533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.863 qpair failed and we were unable to recover it. 00:26:58.863 [2024-07-15 22:43:22.523717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.863 [2024-07-15 22:43:22.523727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.863 qpair failed and we were unable to recover it. 
00:26:58.863 [2024-07-15 22:43:22.523928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.863 [2024-07-15 22:43:22.523942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.863 qpair failed and we were unable to recover it. 00:26:58.863 [2024-07-15 22:43:22.524073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.863 [2024-07-15 22:43:22.524084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.863 qpair failed and we were unable to recover it. 00:26:58.863 [2024-07-15 22:43:22.524268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.863 [2024-07-15 22:43:22.524278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.863 qpair failed and we were unable to recover it. 00:26:58.863 [2024-07-15 22:43:22.524460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.863 [2024-07-15 22:43:22.524471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.863 qpair failed and we were unable to recover it. 00:26:58.863 [2024-07-15 22:43:22.524601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.863 [2024-07-15 22:43:22.524611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.863 qpair failed and we were unable to recover it. 00:26:58.863 [2024-07-15 22:43:22.524786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.863 [2024-07-15 22:43:22.524796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.863 qpair failed and we were unable to recover it. 00:26:58.863 [2024-07-15 22:43:22.524990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.863 [2024-07-15 22:43:22.525001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.863 qpair failed and we were unable to recover it. 00:26:58.863 [2024-07-15 22:43:22.525118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.863 [2024-07-15 22:43:22.525128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.863 qpair failed and we were unable to recover it. 00:26:58.863 [2024-07-15 22:43:22.525301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.863 [2024-07-15 22:43:22.525312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.863 qpair failed and we were unable to recover it. 00:26:58.863 [2024-07-15 22:43:22.525508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.863 [2024-07-15 22:43:22.525519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.863 qpair failed and we were unable to recover it. 
00:26:58.863 [2024-07-15 22:43:22.525708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.863 [2024-07-15 22:43:22.525718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.863 qpair failed and we were unable to recover it. 00:26:58.863 [2024-07-15 22:43:22.525897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.863 [2024-07-15 22:43:22.525908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.863 qpair failed and we were unable to recover it. 00:26:58.863 [2024-07-15 22:43:22.526096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.863 [2024-07-15 22:43:22.526106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.863 qpair failed and we were unable to recover it. 00:26:58.863 [2024-07-15 22:43:22.526311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.863 [2024-07-15 22:43:22.526322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.863 qpair failed and we were unable to recover it. 00:26:58.863 [2024-07-15 22:43:22.526514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.863 [2024-07-15 22:43:22.526525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.863 qpair failed and we were unable to recover it. 00:26:58.863 [2024-07-15 22:43:22.526657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.863 [2024-07-15 22:43:22.526667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.863 qpair failed and we were unable to recover it. 00:26:58.863 [2024-07-15 22:43:22.526927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.863 [2024-07-15 22:43:22.526937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.863 qpair failed and we were unable to recover it. 00:26:58.863 [2024-07-15 22:43:22.527128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.863 [2024-07-15 22:43:22.527138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.864 qpair failed and we were unable to recover it. 00:26:58.864 [2024-07-15 22:43:22.527310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.864 [2024-07-15 22:43:22.527321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.864 qpair failed and we were unable to recover it. 00:26:58.864 [2024-07-15 22:43:22.527442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.864 [2024-07-15 22:43:22.527452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.864 qpair failed and we were unable to recover it. 
00:26:58.864 [2024-07-15 22:43:22.527557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.864 [2024-07-15 22:43:22.527567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.864 qpair failed and we were unable to recover it. 00:26:58.864 [2024-07-15 22:43:22.527681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.864 [2024-07-15 22:43:22.527691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.864 qpair failed and we were unable to recover it. 00:26:58.864 [2024-07-15 22:43:22.527802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.864 [2024-07-15 22:43:22.527812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.864 qpair failed and we were unable to recover it. 00:26:58.864 [2024-07-15 22:43:22.528001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.864 [2024-07-15 22:43:22.528012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.864 qpair failed and we were unable to recover it. 00:26:58.864 [2024-07-15 22:43:22.528254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.864 [2024-07-15 22:43:22.528266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.864 qpair failed and we were unable to recover it. 00:26:58.864 [2024-07-15 22:43:22.528513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.864 [2024-07-15 22:43:22.528525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.864 qpair failed and we were unable to recover it. 00:26:58.864 [2024-07-15 22:43:22.528698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.864 [2024-07-15 22:43:22.528708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.864 qpair failed and we were unable to recover it. 00:26:58.864 [2024-07-15 22:43:22.528834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.864 [2024-07-15 22:43:22.528845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.864 qpair failed and we were unable to recover it. 00:26:58.864 [2024-07-15 22:43:22.528967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.864 [2024-07-15 22:43:22.528977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.864 qpair failed and we were unable to recover it. 00:26:58.864 [2024-07-15 22:43:22.529131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.864 [2024-07-15 22:43:22.529140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.864 qpair failed and we were unable to recover it. 
00:26:58.864 [2024-07-15 22:43:22.529290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.864 [2024-07-15 22:43:22.529301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:58.864 qpair failed and we were unable to recover it.
00:26:58.864-00:26:58.869 [the three messages above repeat 210 times in total between 22:43:22.529290 and 22:43:22.570666; every connect() to addr=10.0.0.2, port=4420 is refused with errno = 111, and from 22:43:22.559473 onward the failing tqpair alternates between 0x7f4288000b90 and 0x19bded0]
00:26:58.869 [2024-07-15 22:43:22.570807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.869 [2024-07-15 22:43:22.570821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:58.869 qpair failed and we were unable to recover it. 00:26:58.869 [2024-07-15 22:43:22.570955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.869 [2024-07-15 22:43:22.570971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:58.869 qpair failed and we were unable to recover it. 00:26:58.869 [2024-07-15 22:43:22.571153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.869 [2024-07-15 22:43:22.571166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:58.869 qpair failed and we were unable to recover it. 00:26:58.869 [2024-07-15 22:43:22.571315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.869 [2024-07-15 22:43:22.571329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:58.869 qpair failed and we were unable to recover it. 00:26:58.869 [2024-07-15 22:43:22.571522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.869 [2024-07-15 22:43:22.571535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:58.869 qpair failed and we were unable to recover it. 00:26:58.869 [2024-07-15 22:43:22.571808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.869 [2024-07-15 22:43:22.571821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:58.869 qpair failed and we were unable to recover it. 00:26:58.869 [2024-07-15 22:43:22.572014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.869 [2024-07-15 22:43:22.572027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:58.869 qpair failed and we were unable to recover it. 00:26:58.869 [2024-07-15 22:43:22.572307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.869 [2024-07-15 22:43:22.572321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:58.869 qpair failed and we were unable to recover it. 00:26:58.869 [2024-07-15 22:43:22.572515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.869 [2024-07-15 22:43:22.572528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:58.869 qpair failed and we were unable to recover it. 00:26:58.869 [2024-07-15 22:43:22.572630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.869 [2024-07-15 22:43:22.572644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:58.869 qpair failed and we were unable to recover it. 
00:26:58.869 [2024-07-15 22:43:22.572840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.869 [2024-07-15 22:43:22.572854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:58.869 qpair failed and we were unable to recover it. 00:26:58.869 [2024-07-15 22:43:22.573163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.869 [2024-07-15 22:43:22.573176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:58.869 qpair failed and we were unable to recover it. 00:26:58.869 [2024-07-15 22:43:22.573378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.869 [2024-07-15 22:43:22.573392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:58.869 qpair failed and we were unable to recover it. 00:26:58.869 [2024-07-15 22:43:22.573525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.869 [2024-07-15 22:43:22.573538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:58.869 qpair failed and we were unable to recover it. 00:26:58.869 [2024-07-15 22:43:22.573854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.869 [2024-07-15 22:43:22.573867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:58.869 qpair failed and we were unable to recover it. 00:26:58.869 [2024-07-15 22:43:22.574016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.869 [2024-07-15 22:43:22.574030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:58.869 qpair failed and we were unable to recover it. 00:26:58.869 [2024-07-15 22:43:22.574282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.869 [2024-07-15 22:43:22.574296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:58.869 qpair failed and we were unable to recover it. 00:26:58.869 [2024-07-15 22:43:22.574542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.870 [2024-07-15 22:43:22.574555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:58.870 qpair failed and we were unable to recover it. 00:26:58.870 [2024-07-15 22:43:22.574763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.870 [2024-07-15 22:43:22.574777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:58.870 qpair failed and we were unable to recover it. 00:26:58.870 [2024-07-15 22:43:22.575027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.870 [2024-07-15 22:43:22.575041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:58.870 qpair failed and we were unable to recover it. 
00:26:58.870 [2024-07-15 22:43:22.575303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.870 [2024-07-15 22:43:22.575316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:58.870 qpair failed and we were unable to recover it. 00:26:58.870 [2024-07-15 22:43:22.575510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.870 [2024-07-15 22:43:22.575524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:58.870 qpair failed and we were unable to recover it. 00:26:58.870 [2024-07-15 22:43:22.575781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.870 [2024-07-15 22:43:22.575795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:58.870 qpair failed and we were unable to recover it. 00:26:58.870 [2024-07-15 22:43:22.575975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.870 [2024-07-15 22:43:22.575988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:58.870 qpair failed and we were unable to recover it. 00:26:58.870 [2024-07-15 22:43:22.576179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.870 [2024-07-15 22:43:22.576193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:58.870 qpair failed and we were unable to recover it. 00:26:58.870 [2024-07-15 22:43:22.576331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.870 [2024-07-15 22:43:22.576345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:58.870 qpair failed and we were unable to recover it. 00:26:58.870 [2024-07-15 22:43:22.576594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.870 [2024-07-15 22:43:22.576607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:58.870 qpair failed and we were unable to recover it. 00:26:58.870 [2024-07-15 22:43:22.576925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.870 [2024-07-15 22:43:22.576939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:58.870 qpair failed and we were unable to recover it. 00:26:58.870 [2024-07-15 22:43:22.577030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.870 [2024-07-15 22:43:22.577049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:58.870 qpair failed and we were unable to recover it. 00:26:58.870 [2024-07-15 22:43:22.577176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.870 [2024-07-15 22:43:22.577189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:58.870 qpair failed and we were unable to recover it. 
00:26:58.870 [2024-07-15 22:43:22.577388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.870 [2024-07-15 22:43:22.577403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:58.870 qpair failed and we were unable to recover it. 00:26:58.870 [2024-07-15 22:43:22.577535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.870 [2024-07-15 22:43:22.577548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:58.870 qpair failed and we were unable to recover it. 00:26:58.870 [2024-07-15 22:43:22.577748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.870 [2024-07-15 22:43:22.577762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:58.870 qpair failed and we were unable to recover it. 00:26:58.870 [2024-07-15 22:43:22.577963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.870 [2024-07-15 22:43:22.577977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:58.870 qpair failed and we were unable to recover it. 00:26:58.870 [2024-07-15 22:43:22.578162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.870 [2024-07-15 22:43:22.578175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:58.870 qpair failed and we were unable to recover it. 00:26:58.870 [2024-07-15 22:43:22.578381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.870 [2024-07-15 22:43:22.578394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:58.870 qpair failed and we were unable to recover it. 00:26:58.870 [2024-07-15 22:43:22.578544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.870 [2024-07-15 22:43:22.578559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:58.870 qpair failed and we were unable to recover it. 00:26:58.870 [2024-07-15 22:43:22.578690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.870 [2024-07-15 22:43:22.578703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:58.870 qpair failed and we were unable to recover it. 00:26:58.870 [2024-07-15 22:43:22.578901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.870 [2024-07-15 22:43:22.578914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:58.870 qpair failed and we were unable to recover it. 00:26:58.870 [2024-07-15 22:43:22.579198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.870 [2024-07-15 22:43:22.579212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420 00:26:58.870 qpair failed and we were unable to recover it. 
00:26:58.870 [2024-07-15 22:43:22.579358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.870 [2024-07-15 22:43:22.579370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.870 qpair failed and we were unable to recover it. 00:26:58.870 [2024-07-15 22:43:22.579497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.870 [2024-07-15 22:43:22.579507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.870 qpair failed and we were unable to recover it. 00:26:58.870 [2024-07-15 22:43:22.579703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.870 [2024-07-15 22:43:22.579712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.870 qpair failed and we were unable to recover it. 00:26:58.870 [2024-07-15 22:43:22.579912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.870 [2024-07-15 22:43:22.579922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.870 qpair failed and we were unable to recover it. 00:26:58.870 [2024-07-15 22:43:22.580047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.870 [2024-07-15 22:43:22.580056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.870 qpair failed and we were unable to recover it. 00:26:58.870 [2024-07-15 22:43:22.580324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.870 [2024-07-15 22:43:22.580335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.870 qpair failed and we were unable to recover it. 00:26:58.870 [2024-07-15 22:43:22.580555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.870 [2024-07-15 22:43:22.580564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.870 qpair failed and we were unable to recover it. 00:26:58.870 [2024-07-15 22:43:22.580758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.870 [2024-07-15 22:43:22.580767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.870 qpair failed and we were unable to recover it. 00:26:58.870 [2024-07-15 22:43:22.581010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.870 [2024-07-15 22:43:22.581020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.870 qpair failed and we were unable to recover it. 00:26:58.870 [2024-07-15 22:43:22.581131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.870 [2024-07-15 22:43:22.581141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.870 qpair failed and we were unable to recover it. 
00:26:58.870 [2024-07-15 22:43:22.581320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.870 [2024-07-15 22:43:22.581330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.870 qpair failed and we were unable to recover it. 00:26:58.870 [2024-07-15 22:43:22.581594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.870 [2024-07-15 22:43:22.581604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.870 qpair failed and we were unable to recover it. 00:26:58.870 [2024-07-15 22:43:22.581737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.870 [2024-07-15 22:43:22.581746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.870 qpair failed and we were unable to recover it. 00:26:58.870 [2024-07-15 22:43:22.581857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.870 [2024-07-15 22:43:22.581867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.870 qpair failed and we were unable to recover it. 00:26:58.870 [2024-07-15 22:43:22.581978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.870 [2024-07-15 22:43:22.581988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.870 qpair failed and we were unable to recover it. 00:26:58.870 [2024-07-15 22:43:22.582161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.870 [2024-07-15 22:43:22.582173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.870 qpair failed and we were unable to recover it. 00:26:58.870 [2024-07-15 22:43:22.582368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.870 [2024-07-15 22:43:22.582378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.870 qpair failed and we were unable to recover it. 00:26:58.870 [2024-07-15 22:43:22.582645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.870 [2024-07-15 22:43:22.582655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.871 qpair failed and we were unable to recover it. 00:26:58.871 [2024-07-15 22:43:22.582779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.871 [2024-07-15 22:43:22.582788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.871 qpair failed and we were unable to recover it. 00:26:58.871 [2024-07-15 22:43:22.582910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.871 [2024-07-15 22:43:22.582921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.871 qpair failed and we were unable to recover it. 
00:26:58.871 [2024-07-15 22:43:22.583057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.871 [2024-07-15 22:43:22.583067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.871 qpair failed and we were unable to recover it. 00:26:58.871 [2024-07-15 22:43:22.583252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.871 [2024-07-15 22:43:22.583262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.871 qpair failed and we were unable to recover it. 00:26:58.871 [2024-07-15 22:43:22.583458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.871 [2024-07-15 22:43:22.583467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.871 qpair failed and we were unable to recover it. 00:26:58.871 [2024-07-15 22:43:22.583541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.871 [2024-07-15 22:43:22.583551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.871 qpair failed and we were unable to recover it. 00:26:58.871 [2024-07-15 22:43:22.583659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.871 [2024-07-15 22:43:22.583669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.871 qpair failed and we were unable to recover it. 00:26:58.871 [2024-07-15 22:43:22.583801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.871 [2024-07-15 22:43:22.583811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.871 qpair failed and we were unable to recover it. 00:26:58.871 [2024-07-15 22:43:22.583933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.871 [2024-07-15 22:43:22.583944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.871 qpair failed and we were unable to recover it. 00:26:58.871 [2024-07-15 22:43:22.584126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.871 [2024-07-15 22:43:22.584136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.871 qpair failed and we were unable to recover it. 00:26:58.871 [2024-07-15 22:43:22.584270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.871 [2024-07-15 22:43:22.584280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.871 qpair failed and we were unable to recover it. 00:26:58.871 [2024-07-15 22:43:22.584525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.871 [2024-07-15 22:43:22.584536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.871 qpair failed and we were unable to recover it. 
00:26:58.871 [2024-07-15 22:43:22.584657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.871 [2024-07-15 22:43:22.584667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.871 qpair failed and we were unable to recover it. 00:26:58.871 [2024-07-15 22:43:22.584780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.871 [2024-07-15 22:43:22.584789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.871 qpair failed and we were unable to recover it. 00:26:58.871 [2024-07-15 22:43:22.585010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.871 [2024-07-15 22:43:22.585021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.871 qpair failed and we were unable to recover it. 00:26:58.871 [2024-07-15 22:43:22.585223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.871 [2024-07-15 22:43:22.585237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.871 qpair failed and we were unable to recover it. 00:26:58.871 [2024-07-15 22:43:22.585428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.871 [2024-07-15 22:43:22.585438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.871 qpair failed and we were unable to recover it. 00:26:58.871 [2024-07-15 22:43:22.585554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.871 [2024-07-15 22:43:22.585565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.871 qpair failed and we were unable to recover it. 00:26:58.871 [2024-07-15 22:43:22.585687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.871 [2024-07-15 22:43:22.585697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.871 qpair failed and we were unable to recover it. 00:26:58.871 [2024-07-15 22:43:22.585957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.871 [2024-07-15 22:43:22.585967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.871 qpair failed and we were unable to recover it. 00:26:58.871 [2024-07-15 22:43:22.586093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.871 [2024-07-15 22:43:22.586102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.871 qpair failed and we were unable to recover it. 00:26:58.871 [2024-07-15 22:43:22.586355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.871 [2024-07-15 22:43:22.586366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.871 qpair failed and we were unable to recover it. 
00:26:58.871 [2024-07-15 22:43:22.586494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.871 [2024-07-15 22:43:22.586503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.871 qpair failed and we were unable to recover it. 00:26:58.871 [2024-07-15 22:43:22.586774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.871 [2024-07-15 22:43:22.586784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.871 qpair failed and we were unable to recover it. 00:26:58.871 [2024-07-15 22:43:22.586944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.871 [2024-07-15 22:43:22.586954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.871 qpair failed and we were unable to recover it. 00:26:58.871 [2024-07-15 22:43:22.587192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.871 [2024-07-15 22:43:22.587201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.871 qpair failed and we were unable to recover it. 00:26:58.871 [2024-07-15 22:43:22.587393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.871 [2024-07-15 22:43:22.587403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.871 qpair failed and we were unable to recover it. 00:26:58.871 [2024-07-15 22:43:22.587530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.871 [2024-07-15 22:43:22.587540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.871 qpair failed and we were unable to recover it. 00:26:58.871 [2024-07-15 22:43:22.587650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.871 [2024-07-15 22:43:22.587660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.871 qpair failed and we were unable to recover it. 00:26:58.871 [2024-07-15 22:43:22.587845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.871 [2024-07-15 22:43:22.587855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.871 qpair failed and we were unable to recover it. 00:26:58.871 [2024-07-15 22:43:22.587990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.871 [2024-07-15 22:43:22.587999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.871 qpair failed and we were unable to recover it. 00:26:58.871 [2024-07-15 22:43:22.588177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.871 [2024-07-15 22:43:22.588187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.871 qpair failed and we were unable to recover it. 
00:26:58.871 [2024-07-15 22:43:22.588478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.871 [2024-07-15 22:43:22.588489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.871 qpair failed and we were unable to recover it. 00:26:58.871 [2024-07-15 22:43:22.588606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.871 [2024-07-15 22:43:22.588615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.871 qpair failed and we were unable to recover it. 00:26:58.871 [2024-07-15 22:43:22.588736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.871 [2024-07-15 22:43:22.588746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.871 qpair failed and we were unable to recover it. 00:26:58.871 [2024-07-15 22:43:22.588931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.871 [2024-07-15 22:43:22.588942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.871 qpair failed and we were unable to recover it. 00:26:58.871 [2024-07-15 22:43:22.589151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.871 [2024-07-15 22:43:22.589160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.871 qpair failed and we were unable to recover it. 00:26:58.871 [2024-07-15 22:43:22.589339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.871 [2024-07-15 22:43:22.589351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.871 qpair failed and we were unable to recover it. 00:26:58.871 [2024-07-15 22:43:22.589495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.871 [2024-07-15 22:43:22.589505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.871 qpair failed and we were unable to recover it. 00:26:58.871 [2024-07-15 22:43:22.589618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.871 [2024-07-15 22:43:22.589628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.871 qpair failed and we were unable to recover it. 00:26:58.871 [2024-07-15 22:43:22.589823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.872 [2024-07-15 22:43:22.589832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.872 qpair failed and we were unable to recover it. 00:26:58.872 [2024-07-15 22:43:22.589910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.872 [2024-07-15 22:43:22.589919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.872 qpair failed and we were unable to recover it. 
00:26:58.872 [2024-07-15 22:43:22.590045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.872 [2024-07-15 22:43:22.590055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.872 qpair failed and we were unable to recover it. 00:26:58.872 [2024-07-15 22:43:22.590297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.872 [2024-07-15 22:43:22.590307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.872 qpair failed and we were unable to recover it. 00:26:58.872 [2024-07-15 22:43:22.590433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.872 [2024-07-15 22:43:22.590443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.872 qpair failed and we were unable to recover it. 00:26:58.872 [2024-07-15 22:43:22.590623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.872 [2024-07-15 22:43:22.590633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.872 qpair failed and we were unable to recover it. 00:26:58.872 [2024-07-15 22:43:22.590814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.872 [2024-07-15 22:43:22.590823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.872 qpair failed and we were unable to recover it. 00:26:58.872 [2024-07-15 22:43:22.591005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.872 [2024-07-15 22:43:22.591015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.872 qpair failed and we were unable to recover it. 00:26:58.872 [2024-07-15 22:43:22.591270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.872 [2024-07-15 22:43:22.591280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.872 qpair failed and we were unable to recover it. 00:26:58.872 [2024-07-15 22:43:22.591488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.872 [2024-07-15 22:43:22.591498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.872 qpair failed and we were unable to recover it. 00:26:58.872 [2024-07-15 22:43:22.591682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.872 [2024-07-15 22:43:22.591691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.872 qpair failed and we were unable to recover it. 00:26:58.872 [2024-07-15 22:43:22.591813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.872 [2024-07-15 22:43:22.591823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.872 qpair failed and we were unable to recover it. 
00:26:58.872 [2024-07-15 22:43:22.592007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.872 [2024-07-15 22:43:22.592017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.872 qpair failed and we were unable to recover it. 00:26:58.872 [2024-07-15 22:43:22.592269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.872 [2024-07-15 22:43:22.592279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.872 qpair failed and we were unable to recover it. 00:26:58.872 [2024-07-15 22:43:22.592456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.872 [2024-07-15 22:43:22.592465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.872 qpair failed and we were unable to recover it. 00:26:58.872 [2024-07-15 22:43:22.592704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.872 [2024-07-15 22:43:22.592714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.872 qpair failed and we were unable to recover it. 00:26:58.872 [2024-07-15 22:43:22.592833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.872 [2024-07-15 22:43:22.592843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.872 qpair failed and we were unable to recover it. 00:26:58.872 [2024-07-15 22:43:22.592976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.872 [2024-07-15 22:43:22.592987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.872 qpair failed and we were unable to recover it. 00:26:58.872 [2024-07-15 22:43:22.593171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.872 [2024-07-15 22:43:22.593181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.872 qpair failed and we were unable to recover it. 00:26:58.872 [2024-07-15 22:43:22.593365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.872 [2024-07-15 22:43:22.593375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.872 qpair failed and we were unable to recover it. 00:26:58.872 [2024-07-15 22:43:22.593514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.872 [2024-07-15 22:43:22.593525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.872 qpair failed and we were unable to recover it. 00:26:58.872 [2024-07-15 22:43:22.593715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.872 [2024-07-15 22:43:22.593725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.872 qpair failed and we were unable to recover it. 
00:26:58.872 [2024-07-15 22:43:22.593848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.872 [2024-07-15 22:43:22.593858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.872 qpair failed and we were unable to recover it. 00:26:58.872 [2024-07-15 22:43:22.594046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.872 [2024-07-15 22:43:22.594056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.872 qpair failed and we were unable to recover it. 00:26:58.872 [2024-07-15 22:43:22.594270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.872 [2024-07-15 22:43:22.594279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.872 qpair failed and we were unable to recover it. 00:26:58.872 [2024-07-15 22:43:22.594400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.872 [2024-07-15 22:43:22.594410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.872 qpair failed and we were unable to recover it. 00:26:58.872 [2024-07-15 22:43:22.594595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.872 [2024-07-15 22:43:22.594605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.872 qpair failed and we were unable to recover it. 00:26:58.872 [2024-07-15 22:43:22.594745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.872 [2024-07-15 22:43:22.594754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.872 qpair failed and we were unable to recover it. 00:26:58.872 [2024-07-15 22:43:22.594953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.872 [2024-07-15 22:43:22.594963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.872 qpair failed and we were unable to recover it. 00:26:58.872 [2024-07-15 22:43:22.595153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.872 [2024-07-15 22:43:22.595162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.872 qpair failed and we were unable to recover it. 00:26:58.872 [2024-07-15 22:43:22.595423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.872 [2024-07-15 22:43:22.595434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.872 qpair failed and we were unable to recover it. 00:26:58.872 [2024-07-15 22:43:22.595680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.872 [2024-07-15 22:43:22.595690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.872 qpair failed and we were unable to recover it. 
00:26:58.872 [2024-07-15 22:43:22.595880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.872 [2024-07-15 22:43:22.595890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.872 qpair failed and we were unable to recover it. 00:26:58.872 [2024-07-15 22:43:22.596070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.872 [2024-07-15 22:43:22.596079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.872 qpair failed and we were unable to recover it. 00:26:58.872 [2024-07-15 22:43:22.596324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.872 [2024-07-15 22:43:22.596335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.872 qpair failed and we were unable to recover it. 00:26:58.872 [2024-07-15 22:43:22.596517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.872 [2024-07-15 22:43:22.596527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.873 qpair failed and we were unable to recover it. 00:26:58.873 [2024-07-15 22:43:22.596701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.873 [2024-07-15 22:43:22.596711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.873 qpair failed and we were unable to recover it. 00:26:58.873 [2024-07-15 22:43:22.596904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.873 [2024-07-15 22:43:22.596916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.873 qpair failed and we were unable to recover it. 00:26:58.873 [2024-07-15 22:43:22.597113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.873 [2024-07-15 22:43:22.597123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.873 qpair failed and we were unable to recover it. 00:26:58.873 [2024-07-15 22:43:22.597320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.873 [2024-07-15 22:43:22.597330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.873 qpair failed and we were unable to recover it. 00:26:58.873 [2024-07-15 22:43:22.597517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.873 [2024-07-15 22:43:22.597527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.873 qpair failed and we were unable to recover it. 00:26:58.873 [2024-07-15 22:43:22.597658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.873 [2024-07-15 22:43:22.597668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.873 qpair failed and we were unable to recover it. 
00:26:58.878 [2024-07-15 22:43:22.634910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.878 [2024-07-15 22:43:22.634920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.878 qpair failed and we were unable to recover it. 00:26:58.878 [2024-07-15 22:43:22.635120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.878 [2024-07-15 22:43:22.635129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.878 qpair failed and we were unable to recover it. 00:26:58.878 [2024-07-15 22:43:22.635392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.878 [2024-07-15 22:43:22.635402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.878 qpair failed and we were unable to recover it. 00:26:58.878 [2024-07-15 22:43:22.635511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.878 [2024-07-15 22:43:22.635520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.878 qpair failed and we were unable to recover it. 00:26:58.878 [2024-07-15 22:43:22.635655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.878 [2024-07-15 22:43:22.635665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.878 qpair failed and we were unable to recover it. 00:26:58.878 [2024-07-15 22:43:22.635855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.878 [2024-07-15 22:43:22.635865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.878 qpair failed and we were unable to recover it. 00:26:58.878 [2024-07-15 22:43:22.635969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.878 [2024-07-15 22:43:22.635978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.878 qpair failed and we were unable to recover it. 00:26:58.878 [2024-07-15 22:43:22.636232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.878 [2024-07-15 22:43:22.636243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.878 qpair failed and we were unable to recover it. 00:26:58.878 [2024-07-15 22:43:22.636443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.878 [2024-07-15 22:43:22.636453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.878 qpair failed and we were unable to recover it. 00:26:58.878 [2024-07-15 22:43:22.636638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.878 [2024-07-15 22:43:22.636648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.878 qpair failed and we were unable to recover it. 
00:26:58.878 [2024-07-15 22:43:22.636842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.878 [2024-07-15 22:43:22.636851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.878 qpair failed and we were unable to recover it. 00:26:58.878 [2024-07-15 22:43:22.637091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.878 [2024-07-15 22:43:22.637100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.878 qpair failed and we were unable to recover it. 00:26:58.878 [2024-07-15 22:43:22.637242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.878 [2024-07-15 22:43:22.637252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.878 qpair failed and we were unable to recover it. 00:26:58.878 [2024-07-15 22:43:22.637453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.878 [2024-07-15 22:43:22.637463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.878 qpair failed and we were unable to recover it. 00:26:58.878 [2024-07-15 22:43:22.637576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.878 [2024-07-15 22:43:22.637586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.878 qpair failed and we were unable to recover it. 00:26:58.878 [2024-07-15 22:43:22.637829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.878 [2024-07-15 22:43:22.637840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.878 qpair failed and we were unable to recover it. 00:26:58.878 [2024-07-15 22:43:22.637973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.878 [2024-07-15 22:43:22.637982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.878 qpair failed and we were unable to recover it. 00:26:58.878 [2024-07-15 22:43:22.638080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.878 [2024-07-15 22:43:22.638090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.878 qpair failed and we were unable to recover it. 00:26:58.878 [2024-07-15 22:43:22.638282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.878 [2024-07-15 22:43:22.638293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.878 qpair failed and we were unable to recover it. 00:26:58.878 [2024-07-15 22:43:22.638481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.878 [2024-07-15 22:43:22.638490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.878 qpair failed and we were unable to recover it. 
00:26:58.878 [2024-07-15 22:43:22.638627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.878 [2024-07-15 22:43:22.638637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.878 qpair failed and we were unable to recover it. 00:26:58.878 [2024-07-15 22:43:22.638879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.878 [2024-07-15 22:43:22.638889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.878 qpair failed and we were unable to recover it. 00:26:58.878 [2024-07-15 22:43:22.639094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.878 [2024-07-15 22:43:22.639122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.878 qpair failed and we were unable to recover it. 00:26:58.878 [2024-07-15 22:43:22.639381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.878 [2024-07-15 22:43:22.639397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.878 qpair failed and we were unable to recover it. 00:26:58.878 [2024-07-15 22:43:22.639545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.878 [2024-07-15 22:43:22.639559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.878 qpair failed and we were unable to recover it. 00:26:58.878 [2024-07-15 22:43:22.639743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.878 [2024-07-15 22:43:22.639756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.878 qpair failed and we were unable to recover it. 00:26:58.878 [2024-07-15 22:43:22.639951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.878 [2024-07-15 22:43:22.639965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.878 qpair failed and we were unable to recover it. 00:26:58.878 [2024-07-15 22:43:22.640150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.878 [2024-07-15 22:43:22.640164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.878 qpair failed and we were unable to recover it. 00:26:58.878 [2024-07-15 22:43:22.640364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.878 [2024-07-15 22:43:22.640378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.878 qpair failed and we were unable to recover it. 00:26:58.878 [2024-07-15 22:43:22.640578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.878 [2024-07-15 22:43:22.640592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.878 qpair failed and we were unable to recover it. 
00:26:58.878 [2024-07-15 22:43:22.640796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.878 [2024-07-15 22:43:22.640809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.878 qpair failed and we were unable to recover it. 00:26:58.878 [2024-07-15 22:43:22.641005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.878 [2024-07-15 22:43:22.641018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.878 qpair failed and we were unable to recover it. 00:26:58.878 [2024-07-15 22:43:22.641172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.879 [2024-07-15 22:43:22.641186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.879 qpair failed and we were unable to recover it. 00:26:58.879 [2024-07-15 22:43:22.641321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.879 [2024-07-15 22:43:22.641335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.879 qpair failed and we were unable to recover it. 00:26:58.879 [2024-07-15 22:43:22.641528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.879 [2024-07-15 22:43:22.641541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.879 qpair failed and we were unable to recover it. 00:26:58.879 [2024-07-15 22:43:22.641726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.879 [2024-07-15 22:43:22.641744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.879 qpair failed and we were unable to recover it. 00:26:58.879 [2024-07-15 22:43:22.642018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.879 [2024-07-15 22:43:22.642031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.879 qpair failed and we were unable to recover it. 00:26:58.879 [2024-07-15 22:43:22.642254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.879 [2024-07-15 22:43:22.642267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.879 qpair failed and we were unable to recover it. 00:26:58.879 [2024-07-15 22:43:22.642389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.879 [2024-07-15 22:43:22.642403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.879 qpair failed and we were unable to recover it. 00:26:58.879 [2024-07-15 22:43:22.642653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.879 [2024-07-15 22:43:22.642666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.879 qpair failed and we were unable to recover it. 
00:26:58.879 [2024-07-15 22:43:22.642792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.879 [2024-07-15 22:43:22.642805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.879 qpair failed and we were unable to recover it. 00:26:58.879 [2024-07-15 22:43:22.642998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.879 [2024-07-15 22:43:22.643013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.879 qpair failed and we were unable to recover it. 00:26:58.879 [2024-07-15 22:43:22.643197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.879 [2024-07-15 22:43:22.643211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.879 qpair failed and we were unable to recover it. 00:26:58.879 [2024-07-15 22:43:22.643485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.879 [2024-07-15 22:43:22.643496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.879 qpair failed and we were unable to recover it. 00:26:58.879 [2024-07-15 22:43:22.643701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.879 [2024-07-15 22:43:22.643711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.879 qpair failed and we were unable to recover it. 00:26:58.879 [2024-07-15 22:43:22.643828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.879 [2024-07-15 22:43:22.643837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.879 qpair failed and we were unable to recover it. 00:26:58.879 [2024-07-15 22:43:22.644030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.879 [2024-07-15 22:43:22.644039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.879 qpair failed and we were unable to recover it. 00:26:58.879 [2024-07-15 22:43:22.644163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.879 [2024-07-15 22:43:22.644173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.879 qpair failed and we were unable to recover it. 00:26:58.879 [2024-07-15 22:43:22.644417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.879 [2024-07-15 22:43:22.644428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.879 qpair failed and we were unable to recover it. 00:26:58.879 [2024-07-15 22:43:22.644646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.879 [2024-07-15 22:43:22.644656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.879 qpair failed and we were unable to recover it. 
00:26:58.879 [2024-07-15 22:43:22.644856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.879 [2024-07-15 22:43:22.644866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.879 qpair failed and we were unable to recover it. 00:26:58.879 [2024-07-15 22:43:22.644998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.879 [2024-07-15 22:43:22.645007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.879 qpair failed and we were unable to recover it. 00:26:58.879 [2024-07-15 22:43:22.645148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.879 [2024-07-15 22:43:22.645158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.879 qpair failed and we were unable to recover it. 00:26:58.879 [2024-07-15 22:43:22.645435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.879 [2024-07-15 22:43:22.645446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.879 qpair failed and we were unable to recover it. 00:26:58.879 [2024-07-15 22:43:22.645712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.879 [2024-07-15 22:43:22.645722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.879 qpair failed and we were unable to recover it. 00:26:58.879 [2024-07-15 22:43:22.645843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.879 [2024-07-15 22:43:22.645853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.879 qpair failed and we were unable to recover it. 00:26:58.879 [2024-07-15 22:43:22.646034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.879 [2024-07-15 22:43:22.646043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.879 qpair failed and we were unable to recover it. 00:26:58.879 [2024-07-15 22:43:22.646161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.879 [2024-07-15 22:43:22.646171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.879 qpair failed and we were unable to recover it. 00:26:58.879 [2024-07-15 22:43:22.646284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.879 [2024-07-15 22:43:22.646293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.879 qpair failed and we were unable to recover it. 00:26:58.879 [2024-07-15 22:43:22.646416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.879 [2024-07-15 22:43:22.646426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.879 qpair failed and we were unable to recover it. 
00:26:58.879 [2024-07-15 22:43:22.646535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.879 [2024-07-15 22:43:22.646544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.879 qpair failed and we were unable to recover it. 00:26:58.879 [2024-07-15 22:43:22.646727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.879 [2024-07-15 22:43:22.646737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.879 qpair failed and we were unable to recover it. 00:26:58.879 [2024-07-15 22:43:22.646958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.879 [2024-07-15 22:43:22.646968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.879 qpair failed and we were unable to recover it. 00:26:58.879 [2024-07-15 22:43:22.647103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.879 [2024-07-15 22:43:22.647112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.879 qpair failed and we were unable to recover it. 00:26:58.879 [2024-07-15 22:43:22.647227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.879 [2024-07-15 22:43:22.647238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.879 qpair failed and we were unable to recover it. 00:26:58.879 [2024-07-15 22:43:22.647365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.879 [2024-07-15 22:43:22.647374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.879 qpair failed and we were unable to recover it. 00:26:58.879 [2024-07-15 22:43:22.647573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.879 [2024-07-15 22:43:22.647583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.879 qpair failed and we were unable to recover it. 00:26:58.879 [2024-07-15 22:43:22.647691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.879 [2024-07-15 22:43:22.647701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.879 qpair failed and we were unable to recover it. 00:26:58.879 [2024-07-15 22:43:22.647888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.879 [2024-07-15 22:43:22.647898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.879 qpair failed and we were unable to recover it. 00:26:58.879 [2024-07-15 22:43:22.648139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.879 [2024-07-15 22:43:22.648148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.879 qpair failed and we were unable to recover it. 
00:26:58.879 [2024-07-15 22:43:22.648388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.879 [2024-07-15 22:43:22.648399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.879 qpair failed and we were unable to recover it. 00:26:58.879 [2024-07-15 22:43:22.648527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.879 [2024-07-15 22:43:22.648537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.879 qpair failed and we were unable to recover it. 00:26:58.879 [2024-07-15 22:43:22.648804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.879 [2024-07-15 22:43:22.648813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.879 qpair failed and we were unable to recover it. 00:26:58.880 [2024-07-15 22:43:22.648932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.880 [2024-07-15 22:43:22.648942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.880 qpair failed and we were unable to recover it. 00:26:58.880 [2024-07-15 22:43:22.649135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.880 [2024-07-15 22:43:22.649144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.880 qpair failed and we were unable to recover it. 00:26:58.880 [2024-07-15 22:43:22.649276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.880 [2024-07-15 22:43:22.649290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.880 qpair failed and we were unable to recover it. 00:26:58.880 [2024-07-15 22:43:22.649426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.880 [2024-07-15 22:43:22.649435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.880 qpair failed and we were unable to recover it. 00:26:58.880 [2024-07-15 22:43:22.649618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.880 [2024-07-15 22:43:22.649628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.880 qpair failed and we were unable to recover it. 00:26:58.880 [2024-07-15 22:43:22.649808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.880 [2024-07-15 22:43:22.649817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.880 qpair failed and we were unable to recover it. 00:26:58.880 [2024-07-15 22:43:22.650058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.880 [2024-07-15 22:43:22.650067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.880 qpair failed and we were unable to recover it. 
00:26:58.880 [2024-07-15 22:43:22.650190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.880 [2024-07-15 22:43:22.650200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.880 qpair failed and we were unable to recover it. 00:26:58.880 [2024-07-15 22:43:22.650444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.880 [2024-07-15 22:43:22.650455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.880 qpair failed and we were unable to recover it. 00:26:58.880 [2024-07-15 22:43:22.650640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.880 [2024-07-15 22:43:22.650650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.880 qpair failed and we were unable to recover it. 00:26:58.880 [2024-07-15 22:43:22.650789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.880 [2024-07-15 22:43:22.650799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.880 qpair failed and we were unable to recover it. 00:26:58.880 [2024-07-15 22:43:22.650932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.880 [2024-07-15 22:43:22.650941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.880 qpair failed and we were unable to recover it. 00:26:58.880 [2024-07-15 22:43:22.651186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.880 [2024-07-15 22:43:22.651196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.880 qpair failed and we were unable to recover it. 00:26:58.880 [2024-07-15 22:43:22.651320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.880 [2024-07-15 22:43:22.651331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.880 qpair failed and we were unable to recover it. 00:26:58.880 [2024-07-15 22:43:22.651601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.880 [2024-07-15 22:43:22.651611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.880 qpair failed and we were unable to recover it. 00:26:58.880 [2024-07-15 22:43:22.651789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.880 [2024-07-15 22:43:22.651799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.880 qpair failed and we were unable to recover it. 00:26:58.880 [2024-07-15 22:43:22.651988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.880 [2024-07-15 22:43:22.651998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.880 qpair failed and we were unable to recover it. 
00:26:58.880 [2024-07-15 22:43:22.652120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.880 [2024-07-15 22:43:22.652130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.880 qpair failed and we were unable to recover it. 00:26:58.880 [2024-07-15 22:43:22.652420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.880 [2024-07-15 22:43:22.652430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.880 qpair failed and we were unable to recover it. 00:26:58.880 [2024-07-15 22:43:22.652691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.880 [2024-07-15 22:43:22.652701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.880 qpair failed and we were unable to recover it. 00:26:58.880 [2024-07-15 22:43:22.652838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.880 [2024-07-15 22:43:22.652847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.880 qpair failed and we were unable to recover it. 00:26:58.880 [2024-07-15 22:43:22.652983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.880 [2024-07-15 22:43:22.652992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.880 qpair failed and we were unable to recover it. 00:26:58.880 [2024-07-15 22:43:22.653183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.880 [2024-07-15 22:43:22.653193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.880 qpair failed and we were unable to recover it. 00:26:58.880 [2024-07-15 22:43:22.653383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.880 [2024-07-15 22:43:22.653393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.880 qpair failed and we were unable to recover it. 00:26:58.880 [2024-07-15 22:43:22.653526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.880 [2024-07-15 22:43:22.653535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.880 qpair failed and we were unable to recover it. 00:26:58.880 [2024-07-15 22:43:22.653651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.880 [2024-07-15 22:43:22.653662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.880 qpair failed and we were unable to recover it. 00:26:58.880 [2024-07-15 22:43:22.653783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.880 [2024-07-15 22:43:22.653793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.880 qpair failed and we were unable to recover it. 
00:26:58.880 [2024-07-15 22:43:22.653929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.880 [2024-07-15 22:43:22.653939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.880 qpair failed and we were unable to recover it. 00:26:58.880 [2024-07-15 22:43:22.654123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.880 [2024-07-15 22:43:22.654133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.880 qpair failed and we were unable to recover it. 00:26:58.880 [2024-07-15 22:43:22.654258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.880 [2024-07-15 22:43:22.654268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.881 qpair failed and we were unable to recover it. 00:26:58.881 [2024-07-15 22:43:22.654508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.881 [2024-07-15 22:43:22.654518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.881 qpair failed and we were unable to recover it. 00:26:58.881 [2024-07-15 22:43:22.654631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.881 [2024-07-15 22:43:22.654642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.881 qpair failed and we were unable to recover it. 00:26:58.881 [2024-07-15 22:43:22.654813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.881 [2024-07-15 22:43:22.654822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.881 qpair failed and we were unable to recover it. 00:26:58.881 [2024-07-15 22:43:22.655017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.881 [2024-07-15 22:43:22.655026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.881 qpair failed and we were unable to recover it. 00:26:58.881 [2024-07-15 22:43:22.655138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.881 [2024-07-15 22:43:22.655148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.881 qpair failed and we were unable to recover it. 00:26:58.881 [2024-07-15 22:43:22.655414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.881 [2024-07-15 22:43:22.655425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.881 qpair failed and we were unable to recover it. 00:26:58.881 [2024-07-15 22:43:22.655544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.881 [2024-07-15 22:43:22.655553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.881 qpair failed and we were unable to recover it. 
00:26:58.881 [2024-07-15 22:43:22.655798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.881 [2024-07-15 22:43:22.655809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.881 qpair failed and we were unable to recover it. 00:26:58.881 [2024-07-15 22:43:22.655919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.881 [2024-07-15 22:43:22.655929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.881 qpair failed and we were unable to recover it. 00:26:58.881 [2024-07-15 22:43:22.656100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.881 [2024-07-15 22:43:22.656110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.881 qpair failed and we were unable to recover it. 00:26:58.881 [2024-07-15 22:43:22.656237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.881 [2024-07-15 22:43:22.656247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.881 qpair failed and we were unable to recover it. 00:26:58.881 [2024-07-15 22:43:22.656512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.881 [2024-07-15 22:43:22.656522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.881 qpair failed and we were unable to recover it. 00:26:58.881 [2024-07-15 22:43:22.656700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.881 [2024-07-15 22:43:22.656711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.881 qpair failed and we were unable to recover it. 00:26:58.881 [2024-07-15 22:43:22.656810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.881 [2024-07-15 22:43:22.656820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.881 qpair failed and we were unable to recover it. 00:26:58.881 [2024-07-15 22:43:22.656936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.881 [2024-07-15 22:43:22.656946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.881 qpair failed and we were unable to recover it. 00:26:58.881 [2024-07-15 22:43:22.657237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.881 [2024-07-15 22:43:22.657247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.881 qpair failed and we were unable to recover it. 00:26:58.881 [2024-07-15 22:43:22.657443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.881 [2024-07-15 22:43:22.657453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.881 qpair failed and we were unable to recover it. 
00:26:58.881 [2024-07-15 22:43:22.657695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.881 [2024-07-15 22:43:22.657705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.881 qpair failed and we were unable to recover it. 00:26:58.881 [2024-07-15 22:43:22.657889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.881 [2024-07-15 22:43:22.657898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.881 qpair failed and we were unable to recover it. 00:26:58.881 [2024-07-15 22:43:22.658093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.881 [2024-07-15 22:43:22.658103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.881 qpair failed and we were unable to recover it. 00:26:58.881 [2024-07-15 22:43:22.658368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.881 [2024-07-15 22:43:22.658378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.881 qpair failed and we were unable to recover it. 00:26:58.881 [2024-07-15 22:43:22.658556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.881 [2024-07-15 22:43:22.658566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.881 qpair failed and we were unable to recover it. 00:26:58.881 [2024-07-15 22:43:22.658746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.881 [2024-07-15 22:43:22.658756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.881 qpair failed and we were unable to recover it. 00:26:58.881 [2024-07-15 22:43:22.658940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.881 [2024-07-15 22:43:22.658949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.881 qpair failed and we were unable to recover it. 00:26:58.881 [2024-07-15 22:43:22.659212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.881 [2024-07-15 22:43:22.659222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.881 qpair failed and we were unable to recover it. 00:26:58.881 [2024-07-15 22:43:22.659426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.881 [2024-07-15 22:43:22.659436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.881 qpair failed and we were unable to recover it. 00:26:58.881 [2024-07-15 22:43:22.659613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.881 [2024-07-15 22:43:22.659623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.881 qpair failed and we were unable to recover it. 
00:26:58.881 [2024-07-15 22:43:22.659833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:26:58.881 [2024-07-15 22:43:22.659843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 
00:26:58.881 qpair failed and we were unable to recover it. 
00:26:58.886 ... preceding three messages repeated for every reconnect attempt from [2024-07-15 22:43:22.660025] through [2024-07-15 22:43:22.698962] (same tqpair=0x7f4288000b90, addr=10.0.0.2, port=4420); each attempt ended with "qpair failed and we were unable to recover it."
00:26:58.887 [2024-07-15 22:43:22.699083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.887 [2024-07-15 22:43:22.699092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.887 qpair failed and we were unable to recover it. 00:26:58.887 [2024-07-15 22:43:22.699239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.887 [2024-07-15 22:43:22.699250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.887 qpair failed and we were unable to recover it. 00:26:58.887 [2024-07-15 22:43:22.699359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.887 [2024-07-15 22:43:22.699370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.887 qpair failed and we were unable to recover it. 00:26:58.887 [2024-07-15 22:43:22.699655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.887 [2024-07-15 22:43:22.699665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.887 qpair failed and we were unable to recover it. 00:26:58.887 [2024-07-15 22:43:22.699855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.887 [2024-07-15 22:43:22.699864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.887 qpair failed and we were unable to recover it. 00:26:58.887 [2024-07-15 22:43:22.699985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.887 [2024-07-15 22:43:22.699995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.887 qpair failed and we were unable to recover it. 00:26:58.887 [2024-07-15 22:43:22.700129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.887 [2024-07-15 22:43:22.700138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.887 qpair failed and we were unable to recover it. 00:26:58.887 [2024-07-15 22:43:22.700265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.887 [2024-07-15 22:43:22.700275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.887 qpair failed and we were unable to recover it. 00:26:58.887 [2024-07-15 22:43:22.700397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.887 [2024-07-15 22:43:22.700407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.887 qpair failed and we were unable to recover it. 00:26:58.887 [2024-07-15 22:43:22.700524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.887 [2024-07-15 22:43:22.700533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.887 qpair failed and we were unable to recover it. 
00:26:58.887 [2024-07-15 22:43:22.700717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.887 [2024-07-15 22:43:22.700728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.887 qpair failed and we were unable to recover it. 00:26:58.887 [2024-07-15 22:43:22.700834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.887 [2024-07-15 22:43:22.700843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.887 qpair failed and we were unable to recover it. 00:26:58.887 [2024-07-15 22:43:22.701034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.887 [2024-07-15 22:43:22.701044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.887 qpair failed and we were unable to recover it. 00:26:58.887 [2024-07-15 22:43:22.701171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.887 [2024-07-15 22:43:22.701181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.887 qpair failed and we were unable to recover it. 00:26:58.887 [2024-07-15 22:43:22.701363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.887 [2024-07-15 22:43:22.701373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.887 qpair failed and we were unable to recover it. 00:26:58.887 [2024-07-15 22:43:22.701637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.887 [2024-07-15 22:43:22.701647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.887 qpair failed and we were unable to recover it. 00:26:58.887 [2024-07-15 22:43:22.701891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.887 [2024-07-15 22:43:22.701901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.887 qpair failed and we were unable to recover it. 00:26:58.887 [2024-07-15 22:43:22.702013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.887 [2024-07-15 22:43:22.702022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.887 qpair failed and we were unable to recover it. 00:26:58.887 [2024-07-15 22:43:22.702262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.887 [2024-07-15 22:43:22.702272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.887 qpair failed and we were unable to recover it. 00:26:58.887 [2024-07-15 22:43:22.702525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.887 [2024-07-15 22:43:22.702535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.887 qpair failed and we were unable to recover it. 
00:26:58.887 [2024-07-15 22:43:22.702649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.887 [2024-07-15 22:43:22.702658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.887 qpair failed and we were unable to recover it. 00:26:58.887 [2024-07-15 22:43:22.702927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.887 [2024-07-15 22:43:22.702937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.887 qpair failed and we were unable to recover it. 00:26:58.887 [2024-07-15 22:43:22.703121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.887 [2024-07-15 22:43:22.703131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.887 qpair failed and we were unable to recover it. 00:26:58.887 [2024-07-15 22:43:22.703394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.887 [2024-07-15 22:43:22.703404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.887 qpair failed and we were unable to recover it. 00:26:58.887 [2024-07-15 22:43:22.703650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.887 [2024-07-15 22:43:22.703659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.887 qpair failed and we were unable to recover it. 00:26:58.887 [2024-07-15 22:43:22.703787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.887 [2024-07-15 22:43:22.703797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.887 qpair failed and we were unable to recover it. 00:26:58.887 [2024-07-15 22:43:22.703975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.887 [2024-07-15 22:43:22.703985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.887 qpair failed and we were unable to recover it. 00:26:58.887 [2024-07-15 22:43:22.704250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.887 [2024-07-15 22:43:22.704261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.887 qpair failed and we were unable to recover it. 00:26:58.887 [2024-07-15 22:43:22.704455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.887 [2024-07-15 22:43:22.704465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.887 qpair failed and we were unable to recover it. 00:26:58.887 [2024-07-15 22:43:22.704654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.887 [2024-07-15 22:43:22.704664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.887 qpair failed and we were unable to recover it. 
00:26:58.887 [2024-07-15 22:43:22.704787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.887 [2024-07-15 22:43:22.704796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.887 qpair failed and we were unable to recover it. 00:26:58.887 [2024-07-15 22:43:22.704991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.887 [2024-07-15 22:43:22.705001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.887 qpair failed and we were unable to recover it. 00:26:58.887 [2024-07-15 22:43:22.705129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.887 [2024-07-15 22:43:22.705139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.887 qpair failed and we were unable to recover it. 00:26:58.887 [2024-07-15 22:43:22.705253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.887 [2024-07-15 22:43:22.705264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.887 qpair failed and we were unable to recover it. 00:26:58.887 [2024-07-15 22:43:22.705468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.887 [2024-07-15 22:43:22.705478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.887 qpair failed and we were unable to recover it. 00:26:58.887 [2024-07-15 22:43:22.705744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.887 [2024-07-15 22:43:22.705754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.887 qpair failed and we were unable to recover it. 00:26:58.887 [2024-07-15 22:43:22.705927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.887 [2024-07-15 22:43:22.705938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.887 qpair failed and we were unable to recover it. 00:26:58.887 [2024-07-15 22:43:22.706113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.887 [2024-07-15 22:43:22.706123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.887 qpair failed and we were unable to recover it. 00:26:58.887 [2024-07-15 22:43:22.706295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.887 [2024-07-15 22:43:22.706305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.887 qpair failed and we were unable to recover it. 00:26:58.887 [2024-07-15 22:43:22.706505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.887 [2024-07-15 22:43:22.706515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.887 qpair failed and we were unable to recover it. 
00:26:58.887 [2024-07-15 22:43:22.706786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.887 [2024-07-15 22:43:22.706796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.888 qpair failed and we were unable to recover it. 00:26:58.888 [2024-07-15 22:43:22.706932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.888 [2024-07-15 22:43:22.706942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.888 qpair failed and we were unable to recover it. 00:26:58.888 [2024-07-15 22:43:22.707080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.888 [2024-07-15 22:43:22.707091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.888 qpair failed and we were unable to recover it. 00:26:58.888 [2024-07-15 22:43:22.707174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.888 [2024-07-15 22:43:22.707184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.888 qpair failed and we were unable to recover it. 00:26:58.888 [2024-07-15 22:43:22.707357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.888 [2024-07-15 22:43:22.707367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.888 qpair failed and we were unable to recover it. 00:26:58.888 [2024-07-15 22:43:22.707554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.888 [2024-07-15 22:43:22.707563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.888 qpair failed and we were unable to recover it. 00:26:58.888 [2024-07-15 22:43:22.707742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.888 [2024-07-15 22:43:22.707753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.888 qpair failed and we were unable to recover it. 00:26:58.888 [2024-07-15 22:43:22.707874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.888 [2024-07-15 22:43:22.707883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.888 qpair failed and we were unable to recover it. 00:26:58.888 [2024-07-15 22:43:22.708005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.888 [2024-07-15 22:43:22.708015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.888 qpair failed and we were unable to recover it. 00:26:58.888 [2024-07-15 22:43:22.708283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.888 [2024-07-15 22:43:22.708294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.888 qpair failed and we were unable to recover it. 
00:26:58.888 [2024-07-15 22:43:22.708422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.888 [2024-07-15 22:43:22.708432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.888 qpair failed and we were unable to recover it. 00:26:58.888 [2024-07-15 22:43:22.708549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.888 [2024-07-15 22:43:22.708558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.888 qpair failed and we were unable to recover it. 00:26:58.888 [2024-07-15 22:43:22.708767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.888 [2024-07-15 22:43:22.708777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.888 qpair failed and we were unable to recover it. 00:26:58.888 [2024-07-15 22:43:22.709038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.888 [2024-07-15 22:43:22.709047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.888 qpair failed and we were unable to recover it. 00:26:58.888 [2024-07-15 22:43:22.709282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.888 [2024-07-15 22:43:22.709291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.888 qpair failed and we were unable to recover it. 00:26:58.888 [2024-07-15 22:43:22.709534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.888 [2024-07-15 22:43:22.709544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.888 qpair failed and we were unable to recover it. 00:26:58.888 [2024-07-15 22:43:22.709683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.888 [2024-07-15 22:43:22.709693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.888 qpair failed and we were unable to recover it. 00:26:58.888 [2024-07-15 22:43:22.709822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.888 [2024-07-15 22:43:22.709831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.888 qpair failed and we were unable to recover it. 00:26:58.888 [2024-07-15 22:43:22.709945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.888 [2024-07-15 22:43:22.709955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.888 qpair failed and we were unable to recover it. 00:26:58.888 [2024-07-15 22:43:22.710133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.888 [2024-07-15 22:43:22.710142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.888 qpair failed and we were unable to recover it. 
00:26:58.888 [2024-07-15 22:43:22.710274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.888 [2024-07-15 22:43:22.710284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.888 qpair failed and we were unable to recover it. 00:26:58.888 [2024-07-15 22:43:22.710461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.888 [2024-07-15 22:43:22.710470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.888 qpair failed and we were unable to recover it. 00:26:58.888 [2024-07-15 22:43:22.710712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.888 [2024-07-15 22:43:22.710721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.888 qpair failed and we were unable to recover it. 00:26:58.888 [2024-07-15 22:43:22.710917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.888 [2024-07-15 22:43:22.710927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.888 qpair failed and we were unable to recover it. 00:26:58.888 [2024-07-15 22:43:22.711109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.888 [2024-07-15 22:43:22.711119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.888 qpair failed and we were unable to recover it. 00:26:58.888 [2024-07-15 22:43:22.711311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.888 [2024-07-15 22:43:22.711321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.888 qpair failed and we were unable to recover it. 00:26:58.888 [2024-07-15 22:43:22.711544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.888 [2024-07-15 22:43:22.711554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.888 qpair failed and we were unable to recover it. 00:26:58.888 [2024-07-15 22:43:22.711744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.888 [2024-07-15 22:43:22.711754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.888 qpair failed and we were unable to recover it. 00:26:58.888 [2024-07-15 22:43:22.711949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.888 [2024-07-15 22:43:22.711959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.888 qpair failed and we were unable to recover it. 00:26:58.888 [2024-07-15 22:43:22.712093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.888 [2024-07-15 22:43:22.712103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.888 qpair failed and we were unable to recover it. 
00:26:58.888 [2024-07-15 22:43:22.712347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.888 [2024-07-15 22:43:22.712357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.888 qpair failed and we were unable to recover it. 00:26:58.888 [2024-07-15 22:43:22.712536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.888 [2024-07-15 22:43:22.712545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.888 qpair failed and we were unable to recover it. 00:26:58.888 [2024-07-15 22:43:22.712678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.888 [2024-07-15 22:43:22.712687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.888 qpair failed and we were unable to recover it. 00:26:58.888 [2024-07-15 22:43:22.712880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.888 [2024-07-15 22:43:22.712890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.888 qpair failed and we were unable to recover it. 00:26:58.888 [2024-07-15 22:43:22.713075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.888 [2024-07-15 22:43:22.713084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.888 qpair failed and we were unable to recover it. 00:26:58.888 [2024-07-15 22:43:22.713221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.888 [2024-07-15 22:43:22.713233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.888 qpair failed and we were unable to recover it. 00:26:58.888 [2024-07-15 22:43:22.713417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.888 [2024-07-15 22:43:22.713427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.888 qpair failed and we were unable to recover it. 00:26:58.888 [2024-07-15 22:43:22.713618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.888 [2024-07-15 22:43:22.713628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.888 qpair failed and we were unable to recover it. 00:26:58.888 [2024-07-15 22:43:22.713768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.888 [2024-07-15 22:43:22.713777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.888 qpair failed and we were unable to recover it. 00:26:58.888 [2024-07-15 22:43:22.713962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.888 [2024-07-15 22:43:22.713971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.888 qpair failed and we were unable to recover it. 
00:26:58.888 [2024-07-15 22:43:22.714147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.888 [2024-07-15 22:43:22.714156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.888 qpair failed and we were unable to recover it. 00:26:58.888 [2024-07-15 22:43:22.714363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.888 [2024-07-15 22:43:22.714373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.889 qpair failed and we were unable to recover it. 00:26:58.889 [2024-07-15 22:43:22.714580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.889 [2024-07-15 22:43:22.714592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.889 qpair failed and we were unable to recover it. 00:26:58.889 [2024-07-15 22:43:22.714730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.889 [2024-07-15 22:43:22.714739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.889 qpair failed and we were unable to recover it. 00:26:58.889 [2024-07-15 22:43:22.714924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.889 [2024-07-15 22:43:22.714933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.889 qpair failed and we were unable to recover it. 00:26:58.889 [2024-07-15 22:43:22.715127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.889 [2024-07-15 22:43:22.715137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.889 qpair failed and we were unable to recover it. 00:26:58.889 [2024-07-15 22:43:22.715317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.889 [2024-07-15 22:43:22.715327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.889 qpair failed and we were unable to recover it. 00:26:58.889 [2024-07-15 22:43:22.715532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.889 [2024-07-15 22:43:22.715542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.889 qpair failed and we were unable to recover it. 00:26:58.889 [2024-07-15 22:43:22.715729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.889 [2024-07-15 22:43:22.715740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.889 qpair failed and we were unable to recover it. 00:26:58.889 [2024-07-15 22:43:22.715880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.889 [2024-07-15 22:43:22.715890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.889 qpair failed and we were unable to recover it. 
00:26:58.889 [2024-07-15 22:43:22.716064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.889 [2024-07-15 22:43:22.716074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.889 qpair failed and we were unable to recover it. 00:26:58.889 [2024-07-15 22:43:22.716190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.889 [2024-07-15 22:43:22.716199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.889 qpair failed and we were unable to recover it. 00:26:58.889 [2024-07-15 22:43:22.716381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.889 [2024-07-15 22:43:22.716391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.889 qpair failed and we were unable to recover it. 00:26:58.889 [2024-07-15 22:43:22.716519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.889 [2024-07-15 22:43:22.716528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.889 qpair failed and we were unable to recover it. 00:26:58.889 [2024-07-15 22:43:22.716660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.889 [2024-07-15 22:43:22.716670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.889 qpair failed and we were unable to recover it. 00:26:58.889 [2024-07-15 22:43:22.716858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.889 [2024-07-15 22:43:22.716867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.889 qpair failed and we were unable to recover it. 00:26:58.889 [2024-07-15 22:43:22.717113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.889 [2024-07-15 22:43:22.717122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.889 qpair failed and we were unable to recover it. 00:26:58.889 [2024-07-15 22:43:22.717369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.889 [2024-07-15 22:43:22.717380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.889 qpair failed and we were unable to recover it. 00:26:58.889 [2024-07-15 22:43:22.717582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.889 [2024-07-15 22:43:22.717592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.889 qpair failed and we were unable to recover it. 00:26:58.889 [2024-07-15 22:43:22.717697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.889 [2024-07-15 22:43:22.717707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.889 qpair failed and we were unable to recover it. 
00:26:58.889 [2024-07-15 22:43:22.717826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.889 [2024-07-15 22:43:22.717836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.889 qpair failed and we were unable to recover it. 00:26:58.889 [2024-07-15 22:43:22.717964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.889 [2024-07-15 22:43:22.717974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.889 qpair failed and we were unable to recover it. 00:26:58.889 [2024-07-15 22:43:22.718261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.889 [2024-07-15 22:43:22.718271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.889 qpair failed and we were unable to recover it. 00:26:58.889 [2024-07-15 22:43:22.718446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.889 [2024-07-15 22:43:22.718456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.889 qpair failed and we were unable to recover it. 00:26:58.889 [2024-07-15 22:43:22.718643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.889 [2024-07-15 22:43:22.718653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.889 qpair failed and we were unable to recover it. 00:26:58.889 [2024-07-15 22:43:22.718856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.889 [2024-07-15 22:43:22.718866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.889 qpair failed and we were unable to recover it. 00:26:58.889 [2024-07-15 22:43:22.719056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.889 [2024-07-15 22:43:22.719066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.889 qpair failed and we were unable to recover it. 00:26:58.889 [2024-07-15 22:43:22.719330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.889 [2024-07-15 22:43:22.719340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.889 qpair failed and we were unable to recover it. 00:26:58.889 [2024-07-15 22:43:22.719514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.889 [2024-07-15 22:43:22.719523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.889 qpair failed and we were unable to recover it. 00:26:58.889 [2024-07-15 22:43:22.719783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.889 [2024-07-15 22:43:22.719793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.889 qpair failed and we were unable to recover it. 
00:26:58.889 [2024-07-15 22:43:22.720064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.889 [2024-07-15 22:43:22.720073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.889 qpair failed and we were unable to recover it. 00:26:58.889 [2024-07-15 22:43:22.720269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.889 [2024-07-15 22:43:22.720279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.889 qpair failed and we were unable to recover it. 00:26:58.889 [2024-07-15 22:43:22.720413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.889 [2024-07-15 22:43:22.720422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.889 qpair failed and we were unable to recover it. 00:26:58.889 [2024-07-15 22:43:22.720665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.889 [2024-07-15 22:43:22.720675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.889 qpair failed and we were unable to recover it. 00:26:58.889 [2024-07-15 22:43:22.720862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.889 [2024-07-15 22:43:22.720872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.889 qpair failed and we were unable to recover it. 00:26:58.889 [2024-07-15 22:43:22.721066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.889 [2024-07-15 22:43:22.721076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.889 qpair failed and we were unable to recover it. 00:26:58.889 [2024-07-15 22:43:22.721209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.889 [2024-07-15 22:43:22.721219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.889 qpair failed and we were unable to recover it. 00:26:58.890 [2024-07-15 22:43:22.721397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.890 [2024-07-15 22:43:22.721406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.890 qpair failed and we were unable to recover it. 00:26:58.890 [2024-07-15 22:43:22.721584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.890 [2024-07-15 22:43:22.721593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.890 qpair failed and we were unable to recover it. 00:26:58.890 [2024-07-15 22:43:22.721719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.890 [2024-07-15 22:43:22.721729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.890 qpair failed and we were unable to recover it. 
00:26:58.890 [2024-07-15 22:43:22.721864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.890 [2024-07-15 22:43:22.721874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.890 qpair failed and we were unable to recover it. 00:26:58.890 [2024-07-15 22:43:22.722138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.890 [2024-07-15 22:43:22.722147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.890 qpair failed and we were unable to recover it. 00:26:58.890 [2024-07-15 22:43:22.722379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.890 [2024-07-15 22:43:22.722391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.890 qpair failed and we were unable to recover it. 00:26:58.890 [2024-07-15 22:43:22.722583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.890 [2024-07-15 22:43:22.722593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.890 qpair failed and we were unable to recover it. 00:26:58.890 [2024-07-15 22:43:22.722842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.890 [2024-07-15 22:43:22.722852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.890 qpair failed and we were unable to recover it. 00:26:58.890 [2024-07-15 22:43:22.723047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.890 [2024-07-15 22:43:22.723057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.890 qpair failed and we were unable to recover it. 00:26:58.890 [2024-07-15 22:43:22.723300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.890 [2024-07-15 22:43:22.723310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.890 qpair failed and we were unable to recover it. 00:26:58.890 [2024-07-15 22:43:22.723507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.890 [2024-07-15 22:43:22.723517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.890 qpair failed and we were unable to recover it. 00:26:58.890 [2024-07-15 22:43:22.723641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.890 [2024-07-15 22:43:22.723650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.890 qpair failed and we were unable to recover it. 00:26:58.890 [2024-07-15 22:43:22.723831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.890 [2024-07-15 22:43:22.723840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.890 qpair failed and we were unable to recover it. 
00:26:58.890 [2024-07-15 22:43:22.724128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.890 [2024-07-15 22:43:22.724138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:58.890 qpair failed and we were unable to recover it.
[identical connect()/qpair-failure triple for tqpair=0x7f4288000b90 repeated 58 more times, 22:43:22.724380 through 22:43:22.735164]
00:26:58.891 [2024-07-15 22:43:22.735439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.891 [2024-07-15 22:43:22.735457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420
00:26:58.891 qpair failed and we were unable to recover it.
[identical triple for tqpair=0x7f4290000b90 repeated 11 more times, 22:43:22.735666 through 22:43:22.737843]
00:26:58.892 [2024-07-15 22:43:22.738034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.892 [2024-07-15 22:43:22.738044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:58.892 qpair failed and we were unable to recover it.
[identical triple for tqpair=0x7f4288000b90 repeated 118 more times, 22:43:22.738238 through 22:43:22.761215]
00:26:58.895 [2024-07-15 22:43:22.761360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.895 [2024-07-15 22:43:22.761384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420
00:26:58.895 qpair failed and we were unable to recover it.
[identical triple for tqpair=0x19bded0 repeated 15 more times, 22:43:22.761518 through 22:43:22.764756]
00:26:58.895 [2024-07-15 22:43:22.764985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:58.895 [2024-07-15 22:43:22.765000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420
00:26:58.895 qpair failed and we were unable to recover it.
[identical triple for tqpair=0x7f4290000b90 repeated 3 more times, 22:43:22.765136 through 22:43:22.765626]
00:26:58.895 [2024-07-15 22:43:22.765784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.895 [2024-07-15 22:43:22.765797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.895 qpair failed and we were unable to recover it. 00:26:58.895 [2024-07-15 22:43:22.765941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.895 [2024-07-15 22:43:22.765955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.895 qpair failed and we were unable to recover it. 00:26:58.895 [2024-07-15 22:43:22.766084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.895 [2024-07-15 22:43:22.766097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.895 qpair failed and we were unable to recover it. 00:26:58.895 [2024-07-15 22:43:22.766298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.895 [2024-07-15 22:43:22.766312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.895 qpair failed and we were unable to recover it. 00:26:58.895 [2024-07-15 22:43:22.766509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.895 [2024-07-15 22:43:22.766522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.895 qpair failed and we were unable to recover it. 00:26:58.895 [2024-07-15 22:43:22.766702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.895 [2024-07-15 22:43:22.766715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.895 qpair failed and we were unable to recover it. 00:26:58.895 [2024-07-15 22:43:22.766917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.895 [2024-07-15 22:43:22.766930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.895 qpair failed and we were unable to recover it. 00:26:58.895 [2024-07-15 22:43:22.767059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.895 [2024-07-15 22:43:22.767073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.895 qpair failed and we were unable to recover it. 00:26:58.895 [2024-07-15 22:43:22.767223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.895 [2024-07-15 22:43:22.767241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.895 qpair failed and we were unable to recover it. 00:26:58.895 [2024-07-15 22:43:22.767437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.895 [2024-07-15 22:43:22.767454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.895 qpair failed and we were unable to recover it. 
00:26:58.895 [2024-07-15 22:43:22.767671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.895 [2024-07-15 22:43:22.767685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.895 qpair failed and we were unable to recover it. 00:26:58.895 [2024-07-15 22:43:22.767815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.895 [2024-07-15 22:43:22.767828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.895 qpair failed and we were unable to recover it. 00:26:58.895 [2024-07-15 22:43:22.768042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.895 [2024-07-15 22:43:22.768055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.895 qpair failed and we were unable to recover it. 00:26:58.895 [2024-07-15 22:43:22.768172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.895 [2024-07-15 22:43:22.768185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.895 qpair failed and we were unable to recover it. 00:26:58.895 [2024-07-15 22:43:22.768372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.895 [2024-07-15 22:43:22.768386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.895 qpair failed and we were unable to recover it. 00:26:58.895 [2024-07-15 22:43:22.768526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.895 [2024-07-15 22:43:22.768539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.895 qpair failed and we were unable to recover it. 00:26:58.895 [2024-07-15 22:43:22.768669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.896 [2024-07-15 22:43:22.768682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.896 qpair failed and we were unable to recover it. 00:26:58.896 [2024-07-15 22:43:22.768964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.896 [2024-07-15 22:43:22.768977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.896 qpair failed and we were unable to recover it. 00:26:58.896 [2024-07-15 22:43:22.769168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.896 [2024-07-15 22:43:22.769181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.896 qpair failed and we were unable to recover it. 00:26:58.896 [2024-07-15 22:43:22.769305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.896 [2024-07-15 22:43:22.769319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.896 qpair failed and we were unable to recover it. 
00:26:58.896 [2024-07-15 22:43:22.769570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.896 [2024-07-15 22:43:22.769584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.896 qpair failed and we were unable to recover it. 00:26:58.896 [2024-07-15 22:43:22.769731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.896 [2024-07-15 22:43:22.769744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.896 qpair failed and we were unable to recover it. 00:26:58.896 [2024-07-15 22:43:22.769953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.896 [2024-07-15 22:43:22.769966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.896 qpair failed and we were unable to recover it. 00:26:58.896 [2024-07-15 22:43:22.770101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.896 [2024-07-15 22:43:22.770115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.896 qpair failed and we were unable to recover it. 00:26:58.896 [2024-07-15 22:43:22.770350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.896 [2024-07-15 22:43:22.770363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.896 qpair failed and we were unable to recover it. 00:26:58.896 [2024-07-15 22:43:22.770612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.896 [2024-07-15 22:43:22.770625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.896 qpair failed and we were unable to recover it. 00:26:58.896 [2024-07-15 22:43:22.770872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.896 [2024-07-15 22:43:22.770885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.896 qpair failed and we were unable to recover it. 00:26:58.896 [2024-07-15 22:43:22.771085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.896 [2024-07-15 22:43:22.771098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.896 qpair failed and we were unable to recover it. 00:26:58.896 [2024-07-15 22:43:22.771281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.896 [2024-07-15 22:43:22.771295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.896 qpair failed and we were unable to recover it. 00:26:58.896 [2024-07-15 22:43:22.771476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.896 [2024-07-15 22:43:22.771489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.896 qpair failed and we were unable to recover it. 
00:26:58.896 [2024-07-15 22:43:22.771626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.896 [2024-07-15 22:43:22.771639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.896 qpair failed and we were unable to recover it. 00:26:58.896 [2024-07-15 22:43:22.771837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.896 [2024-07-15 22:43:22.771850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.896 qpair failed and we were unable to recover it. 00:26:58.896 [2024-07-15 22:43:22.772049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.896 [2024-07-15 22:43:22.772063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.896 qpair failed and we were unable to recover it. 00:26:58.896 [2024-07-15 22:43:22.772264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.896 [2024-07-15 22:43:22.772278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.896 qpair failed and we were unable to recover it. 00:26:58.896 [2024-07-15 22:43:22.772398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.896 [2024-07-15 22:43:22.772411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.896 qpair failed and we were unable to recover it. 00:26:58.896 [2024-07-15 22:43:22.772515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.896 [2024-07-15 22:43:22.772528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.896 qpair failed and we were unable to recover it. 00:26:58.896 [2024-07-15 22:43:22.772679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.896 [2024-07-15 22:43:22.772693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.896 qpair failed and we were unable to recover it. 00:26:58.896 [2024-07-15 22:43:22.772892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.896 [2024-07-15 22:43:22.772906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.896 qpair failed and we were unable to recover it. 00:26:58.896 [2024-07-15 22:43:22.773024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.896 [2024-07-15 22:43:22.773038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.896 qpair failed and we were unable to recover it. 00:26:58.896 [2024-07-15 22:43:22.773296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.896 [2024-07-15 22:43:22.773309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.896 qpair failed and we were unable to recover it. 
00:26:58.896 [2024-07-15 22:43:22.773495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.896 [2024-07-15 22:43:22.773508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.896 qpair failed and we were unable to recover it. 00:26:58.896 [2024-07-15 22:43:22.773717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.896 [2024-07-15 22:43:22.773731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.896 qpair failed and we were unable to recover it. 00:26:58.896 [2024-07-15 22:43:22.773847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.896 [2024-07-15 22:43:22.773860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.896 qpair failed and we were unable to recover it. 00:26:58.896 [2024-07-15 22:43:22.773954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.896 [2024-07-15 22:43:22.773968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.896 qpair failed and we were unable to recover it. 00:26:58.896 [2024-07-15 22:43:22.774108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.896 [2024-07-15 22:43:22.774121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.896 qpair failed and we were unable to recover it. 00:26:58.896 [2024-07-15 22:43:22.774268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.896 [2024-07-15 22:43:22.774282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.896 qpair failed and we were unable to recover it. 00:26:58.896 [2024-07-15 22:43:22.774464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.896 [2024-07-15 22:43:22.774477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.896 qpair failed and we were unable to recover it. 00:26:58.896 [2024-07-15 22:43:22.774734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.896 [2024-07-15 22:43:22.774747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.896 qpair failed and we were unable to recover it. 00:26:58.896 [2024-07-15 22:43:22.774895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.896 [2024-07-15 22:43:22.774909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.896 qpair failed and we were unable to recover it. 00:26:58.896 [2024-07-15 22:43:22.775032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.896 [2024-07-15 22:43:22.775048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.896 qpair failed and we were unable to recover it. 
00:26:58.896 [2024-07-15 22:43:22.775181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.896 [2024-07-15 22:43:22.775194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.896 qpair failed and we were unable to recover it. 00:26:58.896 [2024-07-15 22:43:22.775394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.896 [2024-07-15 22:43:22.775408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.896 qpair failed and we were unable to recover it. 00:26:58.896 [2024-07-15 22:43:22.775591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.896 [2024-07-15 22:43:22.775604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.896 qpair failed and we were unable to recover it. 00:26:58.896 [2024-07-15 22:43:22.775785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.896 [2024-07-15 22:43:22.775799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.896 qpair failed and we were unable to recover it. 00:26:58.896 [2024-07-15 22:43:22.775983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.896 [2024-07-15 22:43:22.775996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.896 qpair failed and we were unable to recover it. 00:26:58.896 [2024-07-15 22:43:22.776178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.896 [2024-07-15 22:43:22.776191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.896 qpair failed and we were unable to recover it. 00:26:58.896 [2024-07-15 22:43:22.776453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.896 [2024-07-15 22:43:22.776467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.896 qpair failed and we were unable to recover it. 00:26:58.897 [2024-07-15 22:43:22.776608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.897 [2024-07-15 22:43:22.776621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.897 qpair failed and we were unable to recover it. 00:26:58.897 [2024-07-15 22:43:22.776742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.897 [2024-07-15 22:43:22.776755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.897 qpair failed and we were unable to recover it. 00:26:58.897 [2024-07-15 22:43:22.776950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.897 [2024-07-15 22:43:22.776963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.897 qpair failed and we were unable to recover it. 
00:26:58.897 [2024-07-15 22:43:22.777159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.897 [2024-07-15 22:43:22.777172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.897 qpair failed and we were unable to recover it. 00:26:58.897 [2024-07-15 22:43:22.777447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.897 [2024-07-15 22:43:22.777461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.897 qpair failed and we were unable to recover it. 00:26:58.897 [2024-07-15 22:43:22.777737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.897 [2024-07-15 22:43:22.777750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.897 qpair failed and we were unable to recover it. 00:26:58.897 [2024-07-15 22:43:22.777894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.897 [2024-07-15 22:43:22.777907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.897 qpair failed and we were unable to recover it. 00:26:58.897 [2024-07-15 22:43:22.778104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.897 [2024-07-15 22:43:22.778117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.897 qpair failed and we were unable to recover it. 00:26:58.897 [2024-07-15 22:43:22.778336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.897 [2024-07-15 22:43:22.778350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.897 qpair failed and we were unable to recover it. 00:26:58.897 [2024-07-15 22:43:22.778602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.897 [2024-07-15 22:43:22.778616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.897 qpair failed and we were unable to recover it. 00:26:58.897 [2024-07-15 22:43:22.778750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.897 [2024-07-15 22:43:22.778763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.897 qpair failed and we were unable to recover it. 00:26:58.897 [2024-07-15 22:43:22.778961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.897 [2024-07-15 22:43:22.778974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.897 qpair failed and we were unable to recover it. 00:26:58.897 [2024-07-15 22:43:22.779264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.897 [2024-07-15 22:43:22.779277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.897 qpair failed and we were unable to recover it. 
00:26:58.897 [2024-07-15 22:43:22.779429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.897 [2024-07-15 22:43:22.779442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.897 qpair failed and we were unable to recover it. 00:26:58.897 [2024-07-15 22:43:22.779568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.897 [2024-07-15 22:43:22.779581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.897 qpair failed and we were unable to recover it. 00:26:58.897 [2024-07-15 22:43:22.779781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.897 [2024-07-15 22:43:22.779795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.897 qpair failed and we were unable to recover it. 00:26:58.897 [2024-07-15 22:43:22.779916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.897 [2024-07-15 22:43:22.779929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.897 qpair failed and we were unable to recover it. 00:26:58.897 [2024-07-15 22:43:22.780045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.897 [2024-07-15 22:43:22.780058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.897 qpair failed and we were unable to recover it. 00:26:58.897 [2024-07-15 22:43:22.780244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.897 [2024-07-15 22:43:22.780258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:58.897 qpair failed and we were unable to recover it. 00:26:58.897 [2024-07-15 22:43:22.780456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.897 [2024-07-15 22:43:22.780467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.897 qpair failed and we were unable to recover it. 00:26:58.897 [2024-07-15 22:43:22.780668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.897 [2024-07-15 22:43:22.780678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.897 qpair failed and we were unable to recover it. 00:26:58.897 [2024-07-15 22:43:22.780793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.897 [2024-07-15 22:43:22.780802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.897 qpair failed and we were unable to recover it. 00:26:58.897 [2024-07-15 22:43:22.780997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.897 [2024-07-15 22:43:22.781007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.897 qpair failed and we were unable to recover it. 
00:26:58.897 [2024-07-15 22:43:22.781201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.897 [2024-07-15 22:43:22.781211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.897 qpair failed and we were unable to recover it. 00:26:58.897 [2024-07-15 22:43:22.781329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.897 [2024-07-15 22:43:22.781339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.897 qpair failed and we were unable to recover it. 00:26:58.897 [2024-07-15 22:43:22.781534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.897 [2024-07-15 22:43:22.781543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.897 qpair failed and we were unable to recover it. 00:26:58.897 [2024-07-15 22:43:22.781715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.897 [2024-07-15 22:43:22.781725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.897 qpair failed and we were unable to recover it. 00:26:58.897 [2024-07-15 22:43:22.781865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.897 [2024-07-15 22:43:22.781875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.897 qpair failed and we were unable to recover it. 00:26:58.897 [2024-07-15 22:43:22.782059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.897 [2024-07-15 22:43:22.782068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.897 qpair failed and we were unable to recover it. 00:26:58.897 [2024-07-15 22:43:22.782200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.897 [2024-07-15 22:43:22.782209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.897 qpair failed and we were unable to recover it. 00:26:58.897 [2024-07-15 22:43:22.782342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.897 [2024-07-15 22:43:22.782352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.897 qpair failed and we were unable to recover it. 00:26:58.897 [2024-07-15 22:43:22.782542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.897 [2024-07-15 22:43:22.782552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.897 qpair failed and we were unable to recover it. 00:26:58.897 [2024-07-15 22:43:22.782737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.897 [2024-07-15 22:43:22.782749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.897 qpair failed and we were unable to recover it. 
00:26:58.897 [2024-07-15 22:43:22.782954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.897 [2024-07-15 22:43:22.782964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.897 qpair failed and we were unable to recover it. 00:26:58.897 [2024-07-15 22:43:22.783158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.897 [2024-07-15 22:43:22.783168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.897 qpair failed and we were unable to recover it. 00:26:58.897 [2024-07-15 22:43:22.783298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.897 [2024-07-15 22:43:22.783308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.897 qpair failed and we were unable to recover it. 00:26:58.897 [2024-07-15 22:43:22.783527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.897 [2024-07-15 22:43:22.783537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.897 qpair failed and we were unable to recover it. 00:26:58.897 [2024-07-15 22:43:22.783666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.897 [2024-07-15 22:43:22.783675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.897 qpair failed and we were unable to recover it. 00:26:58.897 [2024-07-15 22:43:22.783915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.897 [2024-07-15 22:43:22.783925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.897 qpair failed and we were unable to recover it. 00:26:58.897 [2024-07-15 22:43:22.784228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.897 [2024-07-15 22:43:22.784238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.897 qpair failed and we were unable to recover it. 00:26:58.897 [2024-07-15 22:43:22.784369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.897 [2024-07-15 22:43:22.784378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.898 qpair failed and we were unable to recover it. 00:26:58.898 [2024-07-15 22:43:22.784568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.898 [2024-07-15 22:43:22.784579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.898 qpair failed and we were unable to recover it. 00:26:58.898 [2024-07-15 22:43:22.784766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.898 [2024-07-15 22:43:22.784776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.898 qpair failed and we were unable to recover it. 
00:26:58.898 [2024-07-15 22:43:22.784891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.898 [2024-07-15 22:43:22.784900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.898 qpair failed and we were unable to recover it. 00:26:58.898 [2024-07-15 22:43:22.785153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.898 [2024-07-15 22:43:22.785163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.898 qpair failed and we were unable to recover it. 00:26:58.898 [2024-07-15 22:43:22.785357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.898 [2024-07-15 22:43:22.785368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.898 qpair failed and we were unable to recover it. 00:26:58.898 [2024-07-15 22:43:22.785559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.898 [2024-07-15 22:43:22.785569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.898 qpair failed and we were unable to recover it. 00:26:58.898 [2024-07-15 22:43:22.785837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.898 [2024-07-15 22:43:22.785847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.898 qpair failed and we were unable to recover it. 00:26:58.898 [2024-07-15 22:43:22.786096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.898 [2024-07-15 22:43:22.786105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.898 qpair failed and we were unable to recover it. 00:26:58.898 [2024-07-15 22:43:22.786234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.898 [2024-07-15 22:43:22.786244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.898 qpair failed and we were unable to recover it. 00:26:58.898 [2024-07-15 22:43:22.786373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.898 [2024-07-15 22:43:22.786383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:58.898 qpair failed and we were unable to recover it. 00:26:59.177 [2024-07-15 22:43:22.786645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.177 [2024-07-15 22:43:22.786657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.177 qpair failed and we were unable to recover it. 00:26:59.177 [2024-07-15 22:43:22.786846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.177 [2024-07-15 22:43:22.786857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.177 qpair failed and we were unable to recover it. 
00:26:59.177 [2024-07-15 22:43:22.787043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.177 [2024-07-15 22:43:22.787053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.177 qpair failed and we were unable to recover it. 00:26:59.177 [2024-07-15 22:43:22.787261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.177 [2024-07-15 22:43:22.787271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.177 qpair failed and we were unable to recover it. 00:26:59.177 [2024-07-15 22:43:22.787473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.177 [2024-07-15 22:43:22.787482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.177 qpair failed and we were unable to recover it. 00:26:59.177 [2024-07-15 22:43:22.787657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.177 [2024-07-15 22:43:22.787667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.177 qpair failed and we were unable to recover it. 00:26:59.177 [2024-07-15 22:43:22.787842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.177 [2024-07-15 22:43:22.787852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.177 qpair failed and we were unable to recover it. 00:26:59.177 [2024-07-15 22:43:22.788039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.177 [2024-07-15 22:43:22.788048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.177 qpair failed and we were unable to recover it. 00:26:59.177 [2024-07-15 22:43:22.788230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.177 [2024-07-15 22:43:22.788242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.177 qpair failed and we were unable to recover it. 00:26:59.177 [2024-07-15 22:43:22.788368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.177 [2024-07-15 22:43:22.788378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.177 qpair failed and we were unable to recover it. 00:26:59.178 [2024-07-15 22:43:22.788521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.178 [2024-07-15 22:43:22.788531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.178 qpair failed and we were unable to recover it. 00:26:59.178 [2024-07-15 22:43:22.788740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.178 [2024-07-15 22:43:22.788749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.178 qpair failed and we were unable to recover it. 
00:26:59.178 [2024-07-15 22:43:22.788870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.178 [2024-07-15 22:43:22.788880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.178 qpair failed and we were unable to recover it. 00:26:59.178 [2024-07-15 22:43:22.789144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.178 [2024-07-15 22:43:22.789153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.178 qpair failed and we were unable to recover it. 00:26:59.178 [2024-07-15 22:43:22.789265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.178 [2024-07-15 22:43:22.789275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.178 qpair failed and we were unable to recover it. 00:26:59.178 [2024-07-15 22:43:22.789398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.178 [2024-07-15 22:43:22.789408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.178 qpair failed and we were unable to recover it. 00:26:59.178 [2024-07-15 22:43:22.789601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.178 [2024-07-15 22:43:22.789610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.178 qpair failed and we were unable to recover it. 00:26:59.178 [2024-07-15 22:43:22.789780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.178 [2024-07-15 22:43:22.789789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.178 qpair failed and we were unable to recover it. 00:26:59.178 [2024-07-15 22:43:22.789976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.178 [2024-07-15 22:43:22.789987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.178 qpair failed and we were unable to recover it. 00:26:59.178 [2024-07-15 22:43:22.790174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.178 [2024-07-15 22:43:22.790183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.178 qpair failed and we were unable to recover it. 00:26:59.178 [2024-07-15 22:43:22.790302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.178 [2024-07-15 22:43:22.790312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.178 qpair failed and we were unable to recover it. 00:26:59.178 [2024-07-15 22:43:22.790611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.178 [2024-07-15 22:43:22.790620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.178 qpair failed and we were unable to recover it. 
00:26:59.178 [2024-07-15 22:43:22.790862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.178 [2024-07-15 22:43:22.790871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.178 qpair failed and we were unable to recover it.
00:26:59.178 [... the same posix_sock_create / nvme_tcp_qpair_connect_sock error pair, each followed by "qpair failed and we were unable to recover it.", repeats for every reconnect attempt to tqpair=0x7f4288000b90 (10.0.0.2:4420) from 22:43:22.791 through 22:43:22.830 ...]
00:26:59.183 [2024-07-15 22:43:22.830135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.183 [2024-07-15 22:43:22.830145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.183 qpair failed and we were unable to recover it.
00:26:59.183 [2024-07-15 22:43:22.830410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.183 [2024-07-15 22:43:22.830420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.183 qpair failed and we were unable to recover it. 00:26:59.183 [2024-07-15 22:43:22.830529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.183 [2024-07-15 22:43:22.830539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.183 qpair failed and we were unable to recover it. 00:26:59.183 [2024-07-15 22:43:22.830734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.183 [2024-07-15 22:43:22.830744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.183 qpair failed and we were unable to recover it. 00:26:59.183 [2024-07-15 22:43:22.831018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.183 [2024-07-15 22:43:22.831028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.183 qpair failed and we were unable to recover it. 00:26:59.183 [2024-07-15 22:43:22.831171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.183 [2024-07-15 22:43:22.831181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.183 qpair failed and we were unable to recover it. 00:26:59.183 [2024-07-15 22:43:22.831309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.183 [2024-07-15 22:43:22.831320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.183 qpair failed and we were unable to recover it. 00:26:59.183 [2024-07-15 22:43:22.831463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.183 [2024-07-15 22:43:22.831472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.183 qpair failed and we were unable to recover it. 00:26:59.183 [2024-07-15 22:43:22.831714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.183 [2024-07-15 22:43:22.831724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.183 qpair failed and we were unable to recover it. 00:26:59.183 [2024-07-15 22:43:22.831840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.183 [2024-07-15 22:43:22.831850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.183 qpair failed and we were unable to recover it. 00:26:59.183 [2024-07-15 22:43:22.832060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.183 [2024-07-15 22:43:22.832069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.183 qpair failed and we were unable to recover it. 
00:26:59.183 [2024-07-15 22:43:22.832264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.183 [2024-07-15 22:43:22.832274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.183 qpair failed and we were unable to recover it. 00:26:59.183 [2024-07-15 22:43:22.832416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.183 [2024-07-15 22:43:22.832426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.183 qpair failed and we were unable to recover it. 00:26:59.183 [2024-07-15 22:43:22.832674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.183 [2024-07-15 22:43:22.832684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.183 qpair failed and we were unable to recover it. 00:26:59.183 [2024-07-15 22:43:22.832925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.184 [2024-07-15 22:43:22.832935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.184 qpair failed and we were unable to recover it. 00:26:59.184 [2024-07-15 22:43:22.833127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.184 [2024-07-15 22:43:22.833136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.184 qpair failed and we were unable to recover it. 00:26:59.184 [2024-07-15 22:43:22.833401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.184 [2024-07-15 22:43:22.833410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.184 qpair failed and we were unable to recover it. 00:26:59.184 [2024-07-15 22:43:22.833544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.184 [2024-07-15 22:43:22.833555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.184 qpair failed and we were unable to recover it. 00:26:59.184 [2024-07-15 22:43:22.833669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.184 [2024-07-15 22:43:22.833679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.184 qpair failed and we were unable to recover it. 00:26:59.184 [2024-07-15 22:43:22.833865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.184 [2024-07-15 22:43:22.833875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.184 qpair failed and we were unable to recover it. 00:26:59.184 [2024-07-15 22:43:22.834118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.184 [2024-07-15 22:43:22.834128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.184 qpair failed and we were unable to recover it. 
00:26:59.184 [2024-07-15 22:43:22.834252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.184 [2024-07-15 22:43:22.834262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.184 qpair failed and we were unable to recover it. 00:26:59.184 [2024-07-15 22:43:22.834386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.184 [2024-07-15 22:43:22.834396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.184 qpair failed and we were unable to recover it. 00:26:59.184 [2024-07-15 22:43:22.834636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.184 [2024-07-15 22:43:22.834645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.184 qpair failed and we were unable to recover it. 00:26:59.184 [2024-07-15 22:43:22.834789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.184 [2024-07-15 22:43:22.834798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.184 qpair failed and we were unable to recover it. 00:26:59.184 [2024-07-15 22:43:22.834924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.184 [2024-07-15 22:43:22.834934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.184 qpair failed and we were unable to recover it. 00:26:59.184 [2024-07-15 22:43:22.835174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.184 [2024-07-15 22:43:22.835184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.184 qpair failed and we were unable to recover it. 00:26:59.184 [2024-07-15 22:43:22.835452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.184 [2024-07-15 22:43:22.835462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.184 qpair failed and we were unable to recover it. 00:26:59.184 [2024-07-15 22:43:22.835598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.184 [2024-07-15 22:43:22.835608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.184 qpair failed and we were unable to recover it. 00:26:59.184 [2024-07-15 22:43:22.835742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.184 [2024-07-15 22:43:22.835751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.184 qpair failed and we were unable to recover it. 00:26:59.184 [2024-07-15 22:43:22.835960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.184 [2024-07-15 22:43:22.835970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.184 qpair failed and we were unable to recover it. 
00:26:59.184 [2024-07-15 22:43:22.836172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.184 [2024-07-15 22:43:22.836181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.184 qpair failed and we were unable to recover it. 00:26:59.184 [2024-07-15 22:43:22.836356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.184 [2024-07-15 22:43:22.836367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.184 qpair failed and we were unable to recover it. 00:26:59.184 [2024-07-15 22:43:22.836579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.184 [2024-07-15 22:43:22.836589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.184 qpair failed and we were unable to recover it. 00:26:59.184 [2024-07-15 22:43:22.836782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.184 [2024-07-15 22:43:22.836792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.184 qpair failed and we were unable to recover it. 00:26:59.184 [2024-07-15 22:43:22.836922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.184 [2024-07-15 22:43:22.836932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.184 qpair failed and we were unable to recover it. 00:26:59.184 [2024-07-15 22:43:22.837044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.184 [2024-07-15 22:43:22.837054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.184 qpair failed and we were unable to recover it. 00:26:59.184 [2024-07-15 22:43:22.837159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.184 [2024-07-15 22:43:22.837169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.184 qpair failed and we were unable to recover it. 00:26:59.184 [2024-07-15 22:43:22.837365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.184 [2024-07-15 22:43:22.837376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.184 qpair failed and we were unable to recover it. 00:26:59.184 [2024-07-15 22:43:22.837561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.184 [2024-07-15 22:43:22.837570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.184 qpair failed and we were unable to recover it. 00:26:59.184 [2024-07-15 22:43:22.837677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.184 [2024-07-15 22:43:22.837687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.184 qpair failed and we were unable to recover it. 
00:26:59.184 [2024-07-15 22:43:22.837866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.184 [2024-07-15 22:43:22.837875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.184 qpair failed and we were unable to recover it. 00:26:59.184 [2024-07-15 22:43:22.838130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.184 [2024-07-15 22:43:22.838140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.184 qpair failed and we were unable to recover it. 00:26:59.184 [2024-07-15 22:43:22.838331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.184 [2024-07-15 22:43:22.838341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.184 qpair failed and we were unable to recover it. 00:26:59.184 [2024-07-15 22:43:22.838608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.184 [2024-07-15 22:43:22.838617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.184 qpair failed and we were unable to recover it. 00:26:59.184 [2024-07-15 22:43:22.838806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.184 [2024-07-15 22:43:22.838816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.184 qpair failed and we were unable to recover it. 00:26:59.184 [2024-07-15 22:43:22.838946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.184 [2024-07-15 22:43:22.838955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.184 qpair failed and we were unable to recover it. 00:26:59.184 [2024-07-15 22:43:22.839063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.184 [2024-07-15 22:43:22.839073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.184 qpair failed and we were unable to recover it. 00:26:59.184 [2024-07-15 22:43:22.839253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.184 [2024-07-15 22:43:22.839263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.184 qpair failed and we were unable to recover it. 00:26:59.184 [2024-07-15 22:43:22.839384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.184 [2024-07-15 22:43:22.839393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.184 qpair failed and we were unable to recover it. 00:26:59.184 [2024-07-15 22:43:22.839504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.184 [2024-07-15 22:43:22.839513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.184 qpair failed and we were unable to recover it. 
00:26:59.184 [2024-07-15 22:43:22.839707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.184 [2024-07-15 22:43:22.839717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.184 qpair failed and we were unable to recover it. 00:26:59.184 [2024-07-15 22:43:22.839907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.184 [2024-07-15 22:43:22.839916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.184 qpair failed and we were unable to recover it. 00:26:59.184 [2024-07-15 22:43:22.840059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.184 [2024-07-15 22:43:22.840068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.184 qpair failed and we were unable to recover it. 00:26:59.184 [2024-07-15 22:43:22.840262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.185 [2024-07-15 22:43:22.840272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.185 qpair failed and we were unable to recover it. 00:26:59.185 [2024-07-15 22:43:22.840397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.185 [2024-07-15 22:43:22.840407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.185 qpair failed and we were unable to recover it. 00:26:59.185 [2024-07-15 22:43:22.840521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.185 [2024-07-15 22:43:22.840531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.185 qpair failed and we were unable to recover it. 00:26:59.185 [2024-07-15 22:43:22.840648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.185 [2024-07-15 22:43:22.840660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.185 qpair failed and we were unable to recover it. 00:26:59.185 [2024-07-15 22:43:22.840782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.185 [2024-07-15 22:43:22.840792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.185 qpair failed and we were unable to recover it. 00:26:59.185 [2024-07-15 22:43:22.840990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.185 [2024-07-15 22:43:22.841000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.185 qpair failed and we were unable to recover it. 00:26:59.185 [2024-07-15 22:43:22.841222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.185 [2024-07-15 22:43:22.841234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.185 qpair failed and we were unable to recover it. 
00:26:59.185 [2024-07-15 22:43:22.841422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.185 [2024-07-15 22:43:22.841432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.185 qpair failed and we were unable to recover it. 00:26:59.185 [2024-07-15 22:43:22.841554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.185 [2024-07-15 22:43:22.841563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.185 qpair failed and we were unable to recover it. 00:26:59.185 [2024-07-15 22:43:22.841758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.185 [2024-07-15 22:43:22.841768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.185 qpair failed and we were unable to recover it. 00:26:59.185 [2024-07-15 22:43:22.841983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.185 [2024-07-15 22:43:22.841992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.185 qpair failed and we were unable to recover it. 00:26:59.185 [2024-07-15 22:43:22.842177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.185 [2024-07-15 22:43:22.842187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.185 qpair failed and we were unable to recover it. 00:26:59.185 [2024-07-15 22:43:22.842457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.185 [2024-07-15 22:43:22.842467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.185 qpair failed and we were unable to recover it. 00:26:59.185 [2024-07-15 22:43:22.842711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.185 [2024-07-15 22:43:22.842721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.185 qpair failed and we were unable to recover it. 00:26:59.185 [2024-07-15 22:43:22.842841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.185 [2024-07-15 22:43:22.842851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.185 qpair failed and we were unable to recover it. 00:26:59.185 [2024-07-15 22:43:22.843028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.185 [2024-07-15 22:43:22.843037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.185 qpair failed and we were unable to recover it. 00:26:59.185 [2024-07-15 22:43:22.843326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.185 [2024-07-15 22:43:22.843336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.185 qpair failed and we were unable to recover it. 
00:26:59.185 [2024-07-15 22:43:22.843549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.185 [2024-07-15 22:43:22.843559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.185 qpair failed and we were unable to recover it. 00:26:59.185 [2024-07-15 22:43:22.843653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.185 [2024-07-15 22:43:22.843663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.185 qpair failed and we were unable to recover it. 00:26:59.185 [2024-07-15 22:43:22.843787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.185 [2024-07-15 22:43:22.843796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.185 qpair failed and we were unable to recover it. 00:26:59.185 [2024-07-15 22:43:22.843909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.185 [2024-07-15 22:43:22.843919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.185 qpair failed and we were unable to recover it. 00:26:59.185 [2024-07-15 22:43:22.844160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.185 [2024-07-15 22:43:22.844170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.185 qpair failed and we were unable to recover it. 00:26:59.185 [2024-07-15 22:43:22.844362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.185 [2024-07-15 22:43:22.844371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.185 qpair failed and we were unable to recover it. 00:26:59.185 [2024-07-15 22:43:22.844508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.185 [2024-07-15 22:43:22.844518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.185 qpair failed and we were unable to recover it. 00:26:59.185 [2024-07-15 22:43:22.844645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.185 [2024-07-15 22:43:22.844656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.185 qpair failed and we were unable to recover it. 00:26:59.185 [2024-07-15 22:43:22.844780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.185 [2024-07-15 22:43:22.844790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.185 qpair failed and we were unable to recover it. 00:26:59.185 [2024-07-15 22:43:22.844922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.185 [2024-07-15 22:43:22.844932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.185 qpair failed and we were unable to recover it. 
00:26:59.185 [2024-07-15 22:43:22.845107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.185 [2024-07-15 22:43:22.845117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.185 qpair failed and we were unable to recover it. 00:26:59.185 [2024-07-15 22:43:22.845309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.185 [2024-07-15 22:43:22.845319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.185 qpair failed and we were unable to recover it. 00:26:59.185 [2024-07-15 22:43:22.845446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.185 [2024-07-15 22:43:22.845455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.185 qpair failed and we were unable to recover it. 00:26:59.185 [2024-07-15 22:43:22.845697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.185 [2024-07-15 22:43:22.845707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.185 qpair failed and we were unable to recover it. 00:26:59.185 [2024-07-15 22:43:22.845866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.185 [2024-07-15 22:43:22.845875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.185 qpair failed and we were unable to recover it. 00:26:59.185 [2024-07-15 22:43:22.846060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.185 [2024-07-15 22:43:22.846069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.185 qpair failed and we were unable to recover it. 00:26:59.185 [2024-07-15 22:43:22.846250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.185 [2024-07-15 22:43:22.846259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.185 qpair failed and we were unable to recover it. 00:26:59.185 [2024-07-15 22:43:22.846498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.185 [2024-07-15 22:43:22.846508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.185 qpair failed and we were unable to recover it. 00:26:59.185 [2024-07-15 22:43:22.846682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.185 [2024-07-15 22:43:22.846691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.185 qpair failed and we were unable to recover it. 00:26:59.185 [2024-07-15 22:43:22.846943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.185 [2024-07-15 22:43:22.846953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.185 qpair failed and we were unable to recover it. 
00:26:59.185 [2024-07-15 22:43:22.847081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.185 [2024-07-15 22:43:22.847090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.185 qpair failed and we were unable to recover it. 00:26:59.185 [2024-07-15 22:43:22.847276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.185 [2024-07-15 22:43:22.847286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.185 qpair failed and we were unable to recover it. 00:26:59.185 [2024-07-15 22:43:22.847489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.185 [2024-07-15 22:43:22.847499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.185 qpair failed and we were unable to recover it. 00:26:59.185 [2024-07-15 22:43:22.847763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.185 [2024-07-15 22:43:22.847772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.185 qpair failed and we were unable to recover it. 00:26:59.186 [2024-07-15 22:43:22.848011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.186 [2024-07-15 22:43:22.848021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.186 qpair failed and we were unable to recover it. 00:26:59.186 [2024-07-15 22:43:22.848204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.186 [2024-07-15 22:43:22.848214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.186 qpair failed and we were unable to recover it. 00:26:59.186 [2024-07-15 22:43:22.848350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.186 [2024-07-15 22:43:22.848362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.186 qpair failed and we were unable to recover it. 00:26:59.186 [2024-07-15 22:43:22.848606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.186 [2024-07-15 22:43:22.848616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.186 qpair failed and we were unable to recover it. 00:26:59.186 [2024-07-15 22:43:22.848862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.186 [2024-07-15 22:43:22.848872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.186 qpair failed and we were unable to recover it. 00:26:59.186 [2024-07-15 22:43:22.849009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.186 [2024-07-15 22:43:22.849018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.186 qpair failed and we were unable to recover it. 
00:26:59.186 [2024-07-15 22:43:22.849194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.186 [2024-07-15 22:43:22.849203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.186 qpair failed and we were unable to recover it. 00:26:59.186 [2024-07-15 22:43:22.849456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.186 [2024-07-15 22:43:22.849466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.186 qpair failed and we were unable to recover it. 00:26:59.186 [2024-07-15 22:43:22.849655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.186 [2024-07-15 22:43:22.849665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.186 qpair failed and we were unable to recover it. 00:26:59.186 [2024-07-15 22:43:22.849899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.186 [2024-07-15 22:43:22.849908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.186 qpair failed and we were unable to recover it. 00:26:59.186 [2024-07-15 22:43:22.850069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.186 [2024-07-15 22:43:22.850079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.186 qpair failed and we were unable to recover it. 00:26:59.186 [2024-07-15 22:43:22.850268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.186 [2024-07-15 22:43:22.850278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.186 qpair failed and we were unable to recover it. 00:26:59.186 [2024-07-15 22:43:22.850491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.186 [2024-07-15 22:43:22.850501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.186 qpair failed and we were unable to recover it. 00:26:59.186 [2024-07-15 22:43:22.850730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.186 [2024-07-15 22:43:22.850740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.186 qpair failed and we were unable to recover it. 00:26:59.186 [2024-07-15 22:43:22.851006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.186 [2024-07-15 22:43:22.851016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.186 qpair failed and we were unable to recover it. 00:26:59.186 [2024-07-15 22:43:22.851223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.186 [2024-07-15 22:43:22.851236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.186 qpair failed and we were unable to recover it. 
00:26:59.186 [2024-07-15 22:43:22.851485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.186 [2024-07-15 22:43:22.851495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.186 qpair failed and we were unable to recover it. 00:26:59.186 [2024-07-15 22:43:22.851784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.186 [2024-07-15 22:43:22.851793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.186 qpair failed and we were unable to recover it. 00:26:59.186 [2024-07-15 22:43:22.851924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.186 [2024-07-15 22:43:22.851934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.186 qpair failed and we were unable to recover it. 00:26:59.186 [2024-07-15 22:43:22.852129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.186 [2024-07-15 22:43:22.852138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.186 qpair failed and we were unable to recover it. 00:26:59.186 [2024-07-15 22:43:22.852262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.186 [2024-07-15 22:43:22.852272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.186 qpair failed and we were unable to recover it. 00:26:59.186 [2024-07-15 22:43:22.852406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.186 [2024-07-15 22:43:22.852415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.186 qpair failed and we were unable to recover it. 00:26:59.186 [2024-07-15 22:43:22.852547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.186 [2024-07-15 22:43:22.852557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.186 qpair failed and we were unable to recover it. 00:26:59.186 [2024-07-15 22:43:22.852747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.186 [2024-07-15 22:43:22.852757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.186 qpair failed and we were unable to recover it. 00:26:59.186 [2024-07-15 22:43:22.852929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.186 [2024-07-15 22:43:22.852939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.186 qpair failed and we were unable to recover it. 00:26:59.186 [2024-07-15 22:43:22.853050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.186 [2024-07-15 22:43:22.853061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.186 qpair failed and we were unable to recover it. 
00:26:59.186 [2024-07-15 22:43:22.853251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.186 [2024-07-15 22:43:22.853261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.186 qpair failed and we were unable to recover it. 00:26:59.186 [2024-07-15 22:43:22.853444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.186 [2024-07-15 22:43:22.853454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.186 qpair failed and we were unable to recover it. 00:26:59.186 [2024-07-15 22:43:22.853671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.186 [2024-07-15 22:43:22.853681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.186 qpair failed and we were unable to recover it. 00:26:59.186 [2024-07-15 22:43:22.853805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.186 [2024-07-15 22:43:22.853814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.186 qpair failed and we were unable to recover it. 00:26:59.186 [2024-07-15 22:43:22.854085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.186 [2024-07-15 22:43:22.854094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.186 qpair failed and we were unable to recover it. 00:26:59.186 [2024-07-15 22:43:22.854317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.186 [2024-07-15 22:43:22.854327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.186 qpair failed and we were unable to recover it. 00:26:59.186 [2024-07-15 22:43:22.854503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.186 [2024-07-15 22:43:22.854512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.186 qpair failed and we were unable to recover it. 00:26:59.186 [2024-07-15 22:43:22.854686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.186 [2024-07-15 22:43:22.854696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.186 qpair failed and we were unable to recover it. 00:26:59.186 [2024-07-15 22:43:22.854872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.186 [2024-07-15 22:43:22.854881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.187 qpair failed and we were unable to recover it. 00:26:59.187 [2024-07-15 22:43:22.855066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.187 [2024-07-15 22:43:22.855076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.187 qpair failed and we were unable to recover it. 
00:26:59.187 [2024-07-15 22:43:22.855260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.187 [2024-07-15 22:43:22.855270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:59.187 qpair failed and we were unable to recover it.
[... the connect()/qpair-failure triplet above repeats 33 times in total for tqpair=0x7f4288000b90, last at 2024-07-15 22:43:22.861172 ...]
00:26:59.187 [2024-07-15 22:43:22.861407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.187 [2024-07-15 22:43:22.861432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420
00:26:59.187 qpair failed and we were unable to recover it.
[... the same triplet repeats 45 times in total for tqpair=0x19bded0, last at 2024-07-15 22:43:22.870211 ...]
00:26:59.188 [2024-07-15 22:43:22.870363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.188 [2024-07-15 22:43:22.870374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:59.188 qpair failed and we were unable to recover it.
[... the same triplet repeats 132 times in total for tqpair=0x7f4288000b90, last at 2024-07-15 22:43:22.894125 ...]
00:26:59.192 [2024-07-15 22:43:22.894389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.192 [2024-07-15 22:43:22.894399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.192 qpair failed and we were unable to recover it. 00:26:59.192 [2024-07-15 22:43:22.894584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.192 [2024-07-15 22:43:22.894594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.192 qpair failed and we were unable to recover it. 00:26:59.192 [2024-07-15 22:43:22.894674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.192 [2024-07-15 22:43:22.894684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.192 qpair failed and we were unable to recover it. 00:26:59.192 [2024-07-15 22:43:22.894881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.192 [2024-07-15 22:43:22.894890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.192 qpair failed and we were unable to recover it. 00:26:59.192 [2024-07-15 22:43:22.895000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.192 [2024-07-15 22:43:22.895010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.192 qpair failed and we were unable to recover it. 00:26:59.192 [2024-07-15 22:43:22.895184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.192 [2024-07-15 22:43:22.895194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.192 qpair failed and we were unable to recover it. 00:26:59.192 [2024-07-15 22:43:22.895319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.192 [2024-07-15 22:43:22.895330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.192 qpair failed and we were unable to recover it. 00:26:59.192 [2024-07-15 22:43:22.895596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.192 [2024-07-15 22:43:22.895606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.192 qpair failed and we were unable to recover it. 00:26:59.192 [2024-07-15 22:43:22.895799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.192 [2024-07-15 22:43:22.895808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.192 qpair failed and we were unable to recover it. 00:26:59.192 [2024-07-15 22:43:22.895999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.192 [2024-07-15 22:43:22.896008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.192 qpair failed and we were unable to recover it. 
00:26:59.192 [2024-07-15 22:43:22.896275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.192 [2024-07-15 22:43:22.896285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.192 qpair failed and we were unable to recover it. 00:26:59.192 [2024-07-15 22:43:22.896603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.192 [2024-07-15 22:43:22.896613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.192 qpair failed and we were unable to recover it. 00:26:59.192 [2024-07-15 22:43:22.896853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.192 [2024-07-15 22:43:22.896862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.192 qpair failed and we were unable to recover it. 00:26:59.192 [2024-07-15 22:43:22.897041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.192 [2024-07-15 22:43:22.897051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.192 qpair failed and we were unable to recover it. 00:26:59.192 [2024-07-15 22:43:22.897187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.192 [2024-07-15 22:43:22.897197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.192 qpair failed and we were unable to recover it. 00:26:59.192 [2024-07-15 22:43:22.897381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.192 [2024-07-15 22:43:22.897391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.192 qpair failed and we were unable to recover it. 00:26:59.192 [2024-07-15 22:43:22.897568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.192 [2024-07-15 22:43:22.897578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.192 qpair failed and we were unable to recover it. 00:26:59.192 [2024-07-15 22:43:22.897730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.192 [2024-07-15 22:43:22.897740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.192 qpair failed and we were unable to recover it. 00:26:59.192 [2024-07-15 22:43:22.897918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.192 [2024-07-15 22:43:22.897928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.192 qpair failed and we were unable to recover it. 00:26:59.192 [2024-07-15 22:43:22.898108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.192 [2024-07-15 22:43:22.898118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.192 qpair failed and we were unable to recover it. 
00:26:59.192 [2024-07-15 22:43:22.898305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.192 [2024-07-15 22:43:22.898315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.192 qpair failed and we were unable to recover it. 00:26:59.192 [2024-07-15 22:43:22.898510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.192 [2024-07-15 22:43:22.898519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.192 qpair failed and we were unable to recover it. 00:26:59.192 [2024-07-15 22:43:22.898650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.192 [2024-07-15 22:43:22.898660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.192 qpair failed and we were unable to recover it. 00:26:59.192 [2024-07-15 22:43:22.898835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.192 [2024-07-15 22:43:22.898845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.192 qpair failed and we were unable to recover it. 00:26:59.192 [2024-07-15 22:43:22.899138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.192 [2024-07-15 22:43:22.899148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.192 qpair failed and we were unable to recover it. 00:26:59.192 [2024-07-15 22:43:22.899335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.192 [2024-07-15 22:43:22.899346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.192 qpair failed and we were unable to recover it. 00:26:59.192 [2024-07-15 22:43:22.899602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.192 [2024-07-15 22:43:22.899612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.192 qpair failed and we were unable to recover it. 00:26:59.192 [2024-07-15 22:43:22.899782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.193 [2024-07-15 22:43:22.899792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.193 qpair failed and we were unable to recover it. 00:26:59.193 [2024-07-15 22:43:22.900002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.193 [2024-07-15 22:43:22.900011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.193 qpair failed and we were unable to recover it. 00:26:59.193 [2024-07-15 22:43:22.900269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.193 [2024-07-15 22:43:22.900279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.193 qpair failed and we were unable to recover it. 
00:26:59.193 [2024-07-15 22:43:22.900454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.193 [2024-07-15 22:43:22.900463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.193 qpair failed and we were unable to recover it. 00:26:59.193 [2024-07-15 22:43:22.900641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.193 [2024-07-15 22:43:22.900651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.193 qpair failed and we were unable to recover it. 00:26:59.193 [2024-07-15 22:43:22.900770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.193 [2024-07-15 22:43:22.900780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.193 qpair failed and we were unable to recover it. 00:26:59.193 [2024-07-15 22:43:22.900917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.193 [2024-07-15 22:43:22.900926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.193 qpair failed and we were unable to recover it. 00:26:59.193 [2024-07-15 22:43:22.901112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.193 [2024-07-15 22:43:22.901122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.193 qpair failed and we were unable to recover it. 00:26:59.193 [2024-07-15 22:43:22.901306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.193 [2024-07-15 22:43:22.901316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.193 qpair failed and we were unable to recover it. 00:26:59.193 [2024-07-15 22:43:22.901489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.193 [2024-07-15 22:43:22.901498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.193 qpair failed and we were unable to recover it. 00:26:59.193 [2024-07-15 22:43:22.901627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.193 [2024-07-15 22:43:22.901639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.193 qpair failed and we were unable to recover it. 00:26:59.193 [2024-07-15 22:43:22.901833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.193 [2024-07-15 22:43:22.901842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.193 qpair failed and we were unable to recover it. 00:26:59.193 [2024-07-15 22:43:22.901951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.193 [2024-07-15 22:43:22.901961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.193 qpair failed and we were unable to recover it. 
00:26:59.193 [2024-07-15 22:43:22.902147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.193 [2024-07-15 22:43:22.902156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.193 qpair failed and we were unable to recover it. 00:26:59.193 [2024-07-15 22:43:22.902330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.193 [2024-07-15 22:43:22.902340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.193 qpair failed and we were unable to recover it. 00:26:59.193 [2024-07-15 22:43:22.902479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.193 [2024-07-15 22:43:22.902488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.193 qpair failed and we were unable to recover it. 00:26:59.193 [2024-07-15 22:43:22.902612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.193 [2024-07-15 22:43:22.902622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.193 qpair failed and we were unable to recover it. 00:26:59.193 [2024-07-15 22:43:22.902886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.193 [2024-07-15 22:43:22.902895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.193 qpair failed and we were unable to recover it. 00:26:59.193 [2024-07-15 22:43:22.903078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.193 [2024-07-15 22:43:22.903087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.193 qpair failed and we were unable to recover it. 00:26:59.193 [2024-07-15 22:43:22.903294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.193 [2024-07-15 22:43:22.903304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.193 qpair failed and we were unable to recover it. 00:26:59.193 [2024-07-15 22:43:22.903506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.193 [2024-07-15 22:43:22.903516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.193 qpair failed and we were unable to recover it. 00:26:59.193 [2024-07-15 22:43:22.903640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.193 [2024-07-15 22:43:22.903649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.193 qpair failed and we were unable to recover it. 00:26:59.193 [2024-07-15 22:43:22.903788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.193 [2024-07-15 22:43:22.903798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.193 qpair failed and we were unable to recover it. 
00:26:59.193 [2024-07-15 22:43:22.903975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.193 [2024-07-15 22:43:22.903985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.193 qpair failed and we were unable to recover it. 00:26:59.193 [2024-07-15 22:43:22.904163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.193 [2024-07-15 22:43:22.904172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.193 qpair failed and we were unable to recover it. 00:26:59.193 [2024-07-15 22:43:22.904366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.193 [2024-07-15 22:43:22.904376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.193 qpair failed and we were unable to recover it. 00:26:59.193 [2024-07-15 22:43:22.904630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.193 [2024-07-15 22:43:22.904639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.193 qpair failed and we were unable to recover it. 00:26:59.193 [2024-07-15 22:43:22.904824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.193 [2024-07-15 22:43:22.904834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.193 qpair failed and we were unable to recover it. 00:26:59.193 [2024-07-15 22:43:22.904965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.193 [2024-07-15 22:43:22.904975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.193 qpair failed and we were unable to recover it. 00:26:59.193 [2024-07-15 22:43:22.905123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.193 [2024-07-15 22:43:22.905133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.193 qpair failed and we were unable to recover it. 00:26:59.193 [2024-07-15 22:43:22.905264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.193 [2024-07-15 22:43:22.905274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.193 qpair failed and we were unable to recover it. 00:26:59.193 [2024-07-15 22:43:22.905408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.193 [2024-07-15 22:43:22.905418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.193 qpair failed and we were unable to recover it. 00:26:59.193 [2024-07-15 22:43:22.905628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.193 [2024-07-15 22:43:22.905638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.193 qpair failed and we were unable to recover it. 
00:26:59.193 [2024-07-15 22:43:22.905811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.193 [2024-07-15 22:43:22.905821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.193 qpair failed and we were unable to recover it. 00:26:59.193 [2024-07-15 22:43:22.905929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.193 [2024-07-15 22:43:22.905940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.193 qpair failed and we were unable to recover it. 00:26:59.193 [2024-07-15 22:43:22.906222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.193 [2024-07-15 22:43:22.906236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.193 qpair failed and we were unable to recover it. 00:26:59.193 [2024-07-15 22:43:22.906347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.193 [2024-07-15 22:43:22.906357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.193 qpair failed and we were unable to recover it. 00:26:59.193 [2024-07-15 22:43:22.906547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.193 [2024-07-15 22:43:22.906557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.193 qpair failed and we were unable to recover it. 00:26:59.193 [2024-07-15 22:43:22.906731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.193 [2024-07-15 22:43:22.906741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.193 qpair failed and we were unable to recover it. 00:26:59.193 [2024-07-15 22:43:22.906954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.193 [2024-07-15 22:43:22.906964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.193 qpair failed and we were unable to recover it. 00:26:59.193 [2024-07-15 22:43:22.907143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.193 [2024-07-15 22:43:22.907153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.194 qpair failed and we were unable to recover it. 00:26:59.194 [2024-07-15 22:43:22.907395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.194 [2024-07-15 22:43:22.907405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.194 qpair failed and we were unable to recover it. 00:26:59.194 [2024-07-15 22:43:22.907580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.194 [2024-07-15 22:43:22.907590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.194 qpair failed and we were unable to recover it. 
00:26:59.194 [2024-07-15 22:43:22.907783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.194 [2024-07-15 22:43:22.907792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.194 qpair failed and we were unable to recover it. 00:26:59.194 [2024-07-15 22:43:22.907902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.194 [2024-07-15 22:43:22.907915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.194 qpair failed and we were unable to recover it. 00:26:59.194 [2024-07-15 22:43:22.908123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.194 [2024-07-15 22:43:22.908133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.194 qpair failed and we were unable to recover it. 00:26:59.194 [2024-07-15 22:43:22.908272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.194 [2024-07-15 22:43:22.908283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.194 qpair failed and we were unable to recover it. 00:26:59.194 [2024-07-15 22:43:22.908398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.194 [2024-07-15 22:43:22.908408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.194 qpair failed and we were unable to recover it. 00:26:59.194 [2024-07-15 22:43:22.908679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.194 [2024-07-15 22:43:22.908689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.194 qpair failed and we were unable to recover it. 00:26:59.194 [2024-07-15 22:43:22.908876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.194 [2024-07-15 22:43:22.908886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.194 qpair failed and we were unable to recover it. 00:26:59.194 [2024-07-15 22:43:22.909014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.194 [2024-07-15 22:43:22.909026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.194 qpair failed and we were unable to recover it. 00:26:59.194 [2024-07-15 22:43:22.909147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.194 [2024-07-15 22:43:22.909156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.194 qpair failed and we were unable to recover it. 00:26:59.194 [2024-07-15 22:43:22.909349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.194 [2024-07-15 22:43:22.909359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.194 qpair failed and we were unable to recover it. 
00:26:59.194 [2024-07-15 22:43:22.909542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.194 [2024-07-15 22:43:22.909552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.194 qpair failed and we were unable to recover it. 00:26:59.194 [2024-07-15 22:43:22.909665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.194 [2024-07-15 22:43:22.909676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.194 qpair failed and we were unable to recover it. 00:26:59.194 [2024-07-15 22:43:22.909749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.194 [2024-07-15 22:43:22.909759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.194 qpair failed and we were unable to recover it. 00:26:59.194 [2024-07-15 22:43:22.909881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.194 [2024-07-15 22:43:22.909891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.194 qpair failed and we were unable to recover it. 00:26:59.194 [2024-07-15 22:43:22.910086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.194 [2024-07-15 22:43:22.910096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.194 qpair failed and we were unable to recover it. 00:26:59.194 [2024-07-15 22:43:22.910310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.194 [2024-07-15 22:43:22.910321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.194 qpair failed and we were unable to recover it. 00:26:59.194 [2024-07-15 22:43:22.910445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.194 [2024-07-15 22:43:22.910455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.194 qpair failed and we were unable to recover it. 00:26:59.194 [2024-07-15 22:43:22.910593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.194 [2024-07-15 22:43:22.910603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.194 qpair failed and we were unable to recover it. 00:26:59.194 [2024-07-15 22:43:22.910721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.194 [2024-07-15 22:43:22.910731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.194 qpair failed and we were unable to recover it. 00:26:59.194 [2024-07-15 22:43:22.910952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.194 [2024-07-15 22:43:22.910964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.194 qpair failed and we were unable to recover it. 
00:26:59.194 [2024-07-15 22:43:22.911110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.194 [2024-07-15 22:43:22.911120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.194 qpair failed and we were unable to recover it. 00:26:59.194 [2024-07-15 22:43:22.911271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.194 [2024-07-15 22:43:22.911282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.194 qpair failed and we were unable to recover it. 00:26:59.194 [2024-07-15 22:43:22.911462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.194 [2024-07-15 22:43:22.911472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.194 qpair failed and we were unable to recover it. 00:26:59.194 [2024-07-15 22:43:22.911669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.194 [2024-07-15 22:43:22.911679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.194 qpair failed and we were unable to recover it. 00:26:59.194 [2024-07-15 22:43:22.911860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.194 [2024-07-15 22:43:22.911869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.194 qpair failed and we were unable to recover it. 00:26:59.194 [2024-07-15 22:43:22.912072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.194 [2024-07-15 22:43:22.912083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.194 qpair failed and we were unable to recover it. 00:26:59.194 [2024-07-15 22:43:22.912295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.194 [2024-07-15 22:43:22.912305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.194 qpair failed and we were unable to recover it. 00:26:59.194 [2024-07-15 22:43:22.912421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.194 [2024-07-15 22:43:22.912432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.194 qpair failed and we were unable to recover it. 00:26:59.194 [2024-07-15 22:43:22.912646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.194 [2024-07-15 22:43:22.912655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.194 qpair failed and we were unable to recover it. 00:26:59.194 [2024-07-15 22:43:22.912862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.194 [2024-07-15 22:43:22.912872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.194 qpair failed and we were unable to recover it. 
00:26:59.194 [2024-07-15 22:43:22.913055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.194 [2024-07-15 22:43:22.913065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.194 qpair failed and we were unable to recover it. 00:26:59.194 [2024-07-15 22:43:22.913278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.194 [2024-07-15 22:43:22.913288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.194 qpair failed and we were unable to recover it. 00:26:59.194 [2024-07-15 22:43:22.913471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.194 [2024-07-15 22:43:22.913481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.194 qpair failed and we were unable to recover it. 00:26:59.194 [2024-07-15 22:43:22.913614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.194 [2024-07-15 22:43:22.913623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.195 qpair failed and we were unable to recover it. 00:26:59.195 [2024-07-15 22:43:22.913818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.195 [2024-07-15 22:43:22.913829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.195 qpair failed and we were unable to recover it. 00:26:59.195 [2024-07-15 22:43:22.913950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.195 [2024-07-15 22:43:22.913959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.195 qpair failed and we were unable to recover it. 00:26:59.195 [2024-07-15 22:43:22.914074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.195 [2024-07-15 22:43:22.914084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.195 qpair failed and we were unable to recover it. 00:26:59.195 [2024-07-15 22:43:22.914268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.195 [2024-07-15 22:43:22.914278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.195 qpair failed and we were unable to recover it. 00:26:59.195 [2024-07-15 22:43:22.914401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.195 [2024-07-15 22:43:22.914410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.195 qpair failed and we were unable to recover it. 00:26:59.195 [2024-07-15 22:43:22.914620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.195 [2024-07-15 22:43:22.914630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.195 qpair failed and we were unable to recover it. 
00:26:59.195 [2024-07-15 22:43:22.914755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.195 [2024-07-15 22:43:22.914765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.195 qpair failed and we were unable to recover it. 00:26:59.195 [2024-07-15 22:43:22.914946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.195 [2024-07-15 22:43:22.914958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.195 qpair failed and we were unable to recover it. 00:26:59.195 [2024-07-15 22:43:22.915138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.195 [2024-07-15 22:43:22.915148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.195 qpair failed and we were unable to recover it. 00:26:59.195 [2024-07-15 22:43:22.915279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.195 [2024-07-15 22:43:22.915290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.195 qpair failed and we were unable to recover it. 00:26:59.195 [2024-07-15 22:43:22.915419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.195 [2024-07-15 22:43:22.915429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.195 qpair failed and we were unable to recover it. 00:26:59.195 [2024-07-15 22:43:22.915550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.195 [2024-07-15 22:43:22.915560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.195 qpair failed and we were unable to recover it. 00:26:59.195 [2024-07-15 22:43:22.915675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.195 [2024-07-15 22:43:22.915684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.195 qpair failed and we were unable to recover it. 00:26:59.195 [2024-07-15 22:43:22.915807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.195 [2024-07-15 22:43:22.915819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.195 qpair failed and we were unable to recover it. 00:26:59.195 [2024-07-15 22:43:22.915934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.195 [2024-07-15 22:43:22.915944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.195 qpair failed and we were unable to recover it. 00:26:59.195 [2024-07-15 22:43:22.916156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.195 [2024-07-15 22:43:22.916166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.195 qpair failed and we were unable to recover it. 
00:26:59.195 [2024-07-15 22:43:22.916297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.195 [2024-07-15 22:43:22.916308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.195 qpair failed and we were unable to recover it. 00:26:59.195 [2024-07-15 22:43:22.916502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.195 [2024-07-15 22:43:22.916514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.195 qpair failed and we were unable to recover it. 00:26:59.195 [2024-07-15 22:43:22.916638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.195 [2024-07-15 22:43:22.916650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.195 qpair failed and we were unable to recover it. 00:26:59.195 [2024-07-15 22:43:22.916768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.195 [2024-07-15 22:43:22.916778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.195 qpair failed and we were unable to recover it. 00:26:59.195 [2024-07-15 22:43:22.916901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.195 [2024-07-15 22:43:22.916911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.195 qpair failed and we were unable to recover it. 00:26:59.195 [2024-07-15 22:43:22.917107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.195 [2024-07-15 22:43:22.917117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.195 qpair failed and we were unable to recover it. 00:26:59.195 [2024-07-15 22:43:22.917312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.195 [2024-07-15 22:43:22.917322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.195 qpair failed and we were unable to recover it. 00:26:59.195 [2024-07-15 22:43:22.917447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.195 [2024-07-15 22:43:22.917458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.195 qpair failed and we were unable to recover it. 00:26:59.195 [2024-07-15 22:43:22.917645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.195 [2024-07-15 22:43:22.917655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.195 qpair failed and we were unable to recover it. 00:26:59.195 [2024-07-15 22:43:22.917779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.195 [2024-07-15 22:43:22.917789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.195 qpair failed and we were unable to recover it. 
00:26:59.195 [2024-07-15 22:43:22.917948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.195 [2024-07-15 22:43:22.917958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.195 qpair failed and we were unable to recover it. 00:26:59.195 [2024-07-15 22:43:22.918080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.195 [2024-07-15 22:43:22.918090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.195 qpair failed and we were unable to recover it. 00:26:59.195 [2024-07-15 22:43:22.918283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.195 [2024-07-15 22:43:22.918293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.195 qpair failed and we were unable to recover it. 00:26:59.195 [2024-07-15 22:43:22.918406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.195 [2024-07-15 22:43:22.918416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.195 qpair failed and we were unable to recover it. 00:26:59.195 [2024-07-15 22:43:22.918633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.195 [2024-07-15 22:43:22.918645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.195 qpair failed and we were unable to recover it. 00:26:59.195 [2024-07-15 22:43:22.918753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.195 [2024-07-15 22:43:22.918763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.195 qpair failed and we were unable to recover it. 00:26:59.195 [2024-07-15 22:43:22.918873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.195 [2024-07-15 22:43:22.918883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.195 qpair failed and we were unable to recover it. 00:26:59.195 [2024-07-15 22:43:22.918995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.195 [2024-07-15 22:43:22.919005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.195 qpair failed and we were unable to recover it. 00:26:59.195 [2024-07-15 22:43:22.919119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.195 [2024-07-15 22:43:22.919129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.195 qpair failed and we were unable to recover it. 00:26:59.195 [2024-07-15 22:43:22.919304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.195 [2024-07-15 22:43:22.919315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.195 qpair failed and we were unable to recover it. 
00:26:59.195 [2024-07-15 22:43:22.919566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.195 [2024-07-15 22:43:22.919576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.195 qpair failed and we were unable to recover it. 00:26:59.195 [2024-07-15 22:43:22.919762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.195 [2024-07-15 22:43:22.919771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.195 qpair failed and we were unable to recover it. 00:26:59.195 [2024-07-15 22:43:22.919949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.195 [2024-07-15 22:43:22.919958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.195 qpair failed and we were unable to recover it. 00:26:59.195 [2024-07-15 22:43:22.920145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.195 [2024-07-15 22:43:22.920156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.195 qpair failed and we were unable to recover it. 00:26:59.196 [2024-07-15 22:43:22.920266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.196 [2024-07-15 22:43:22.920287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:59.196 qpair failed and we were unable to recover it. 00:26:59.196 [2024-07-15 22:43:22.920478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.196 [2024-07-15 22:43:22.920492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:59.196 qpair failed and we were unable to recover it. 00:26:59.196 [2024-07-15 22:43:22.920609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.196 [2024-07-15 22:43:22.920623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:59.196 qpair failed and we were unable to recover it. 00:26:59.196 [2024-07-15 22:43:22.920773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.196 [2024-07-15 22:43:22.920787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:59.196 qpair failed and we were unable to recover it. 00:26:59.196 [2024-07-15 22:43:22.920928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.196 [2024-07-15 22:43:22.920942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:59.196 qpair failed and we were unable to recover it. 00:26:59.196 [2024-07-15 22:43:22.921058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.196 [2024-07-15 22:43:22.921072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:59.196 qpair failed and we were unable to recover it. 
00:26:59.196 [2024-07-15 22:43:22.921154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.196 [2024-07-15 22:43:22.921168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:59.196 qpair failed and we were unable to recover it. 00:26:59.196 [2024-07-15 22:43:22.921445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.196 [2024-07-15 22:43:22.921459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:59.196 qpair failed and we were unable to recover it. 00:26:59.196 [2024-07-15 22:43:22.921651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.196 [2024-07-15 22:43:22.921665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:59.196 qpair failed and we were unable to recover it. 00:26:59.196 [2024-07-15 22:43:22.921753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.196 [2024-07-15 22:43:22.921766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:59.196 qpair failed and we were unable to recover it. 00:26:59.196 [2024-07-15 22:43:22.921894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.196 [2024-07-15 22:43:22.921907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:59.196 qpair failed and we were unable to recover it. 00:26:59.196 [2024-07-15 22:43:22.922024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.196 [2024-07-15 22:43:22.922038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:59.196 qpair failed and we were unable to recover it. 00:26:59.196 [2024-07-15 22:43:22.922174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.196 [2024-07-15 22:43:22.922188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:59.196 qpair failed and we were unable to recover it. 00:26:59.196 [2024-07-15 22:43:22.922325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.196 [2024-07-15 22:43:22.922344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:59.196 qpair failed and we were unable to recover it. 00:26:59.196 [2024-07-15 22:43:22.922529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.196 [2024-07-15 22:43:22.922542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:59.196 qpair failed and we were unable to recover it. 00:26:59.196 [2024-07-15 22:43:22.922754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.196 [2024-07-15 22:43:22.922769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:59.196 qpair failed and we were unable to recover it. 
00:26:59.196 [2024-07-15 22:43:22.922971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.196 [2024-07-15 22:43:22.922984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:59.196 qpair failed and we were unable to recover it. 00:26:59.196 [2024-07-15 22:43:22.923108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.196 [2024-07-15 22:43:22.923121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:59.196 qpair failed and we were unable to recover it. 00:26:59.196 [2024-07-15 22:43:22.923251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.196 [2024-07-15 22:43:22.923266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:59.196 qpair failed and we were unable to recover it. 00:26:59.196 [2024-07-15 22:43:22.923388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.196 [2024-07-15 22:43:22.923402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:59.196 qpair failed and we were unable to recover it. 00:26:59.196 [2024-07-15 22:43:22.923513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.196 [2024-07-15 22:43:22.923525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.196 qpair failed and we were unable to recover it. 00:26:59.196 [2024-07-15 22:43:22.923655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.196 [2024-07-15 22:43:22.923666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.196 qpair failed and we were unable to recover it. 00:26:59.196 [2024-07-15 22:43:22.923858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.196 [2024-07-15 22:43:22.923868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.196 qpair failed and we were unable to recover it. 00:26:59.196 [2024-07-15 22:43:22.923989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.196 [2024-07-15 22:43:22.923999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.196 qpair failed and we were unable to recover it. 00:26:59.196 [2024-07-15 22:43:22.924132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.196 [2024-07-15 22:43:22.924142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.196 qpair failed and we were unable to recover it. 00:26:59.196 [2024-07-15 22:43:22.924331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.196 [2024-07-15 22:43:22.924341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.196 qpair failed and we were unable to recover it. 
00:26:59.196 [2024-07-15 22:43:22.924643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.196 [2024-07-15 22:43:22.924653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.196 qpair failed and we were unable to recover it. 00:26:59.196 [2024-07-15 22:43:22.924771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.196 [2024-07-15 22:43:22.924781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.196 qpair failed and we were unable to recover it. 00:26:59.196 [2024-07-15 22:43:22.924898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.196 [2024-07-15 22:43:22.924908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.196 qpair failed and we were unable to recover it. 00:26:59.196 [2024-07-15 22:43:22.925159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.196 [2024-07-15 22:43:22.925169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.196 qpair failed and we were unable to recover it. 00:26:59.196 [2024-07-15 22:43:22.925357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.196 [2024-07-15 22:43:22.925367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.196 qpair failed and we were unable to recover it. 00:26:59.196 [2024-07-15 22:43:22.925516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.196 [2024-07-15 22:43:22.925526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.196 qpair failed and we were unable to recover it. 00:26:59.196 [2024-07-15 22:43:22.925702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.196 [2024-07-15 22:43:22.925712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.196 qpair failed and we were unable to recover it. 00:26:59.196 [2024-07-15 22:43:22.925899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.196 [2024-07-15 22:43:22.925909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.196 qpair failed and we were unable to recover it. 00:26:59.196 [2024-07-15 22:43:22.926082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.196 [2024-07-15 22:43:22.926092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.196 qpair failed and we were unable to recover it. 00:26:59.196 [2024-07-15 22:43:22.926207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.196 [2024-07-15 22:43:22.926216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.196 qpair failed and we were unable to recover it. 
00:26:59.196 [2024-07-15 22:43:22.926346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.196 [2024-07-15 22:43:22.926356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.196 qpair failed and we were unable to recover it. 00:26:59.196 [2024-07-15 22:43:22.926600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.196 [2024-07-15 22:43:22.926609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.196 qpair failed and we were unable to recover it. 00:26:59.196 [2024-07-15 22:43:22.926731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.196 [2024-07-15 22:43:22.926741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.196 qpair failed and we were unable to recover it. 00:26:59.196 [2024-07-15 22:43:22.926877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.196 [2024-07-15 22:43:22.926887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.196 qpair failed and we were unable to recover it. 00:26:59.196 [2024-07-15 22:43:22.927023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.196 [2024-07-15 22:43:22.927038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:59.196 qpair failed and we were unable to recover it. 00:26:59.197 [2024-07-15 22:43:22.927231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.197 [2024-07-15 22:43:22.927245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:59.197 qpair failed and we were unable to recover it. 00:26:59.197 [2024-07-15 22:43:22.927365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.197 [2024-07-15 22:43:22.927378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:59.197 qpair failed and we were unable to recover it. 00:26:59.197 [2024-07-15 22:43:22.927513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.197 [2024-07-15 22:43:22.927526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:59.197 qpair failed and we were unable to recover it. 00:26:59.197 [2024-07-15 22:43:22.927647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.197 [2024-07-15 22:43:22.927661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:59.197 qpair failed and we were unable to recover it. 00:26:59.197 [2024-07-15 22:43:22.927790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.197 [2024-07-15 22:43:22.927803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:59.197 qpair failed and we were unable to recover it. 
00:26:59.197 [2024-07-15 22:43:22.927926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.197 [2024-07-15 22:43:22.927939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:59.197 qpair failed and we were unable to recover it. 00:26:59.197 [2024-07-15 22:43:22.928125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.197 [2024-07-15 22:43:22.928138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:59.197 qpair failed and we were unable to recover it. 00:26:59.197 [2024-07-15 22:43:22.928327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.197 [2024-07-15 22:43:22.928341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:59.197 qpair failed and we were unable to recover it. 00:26:59.197 [2024-07-15 22:43:22.928547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.197 [2024-07-15 22:43:22.928561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:59.197 qpair failed and we were unable to recover it. 00:26:59.197 [2024-07-15 22:43:22.928694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.197 [2024-07-15 22:43:22.928708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:59.197 qpair failed and we were unable to recover it. 00:26:59.197 [2024-07-15 22:43:22.928831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.197 [2024-07-15 22:43:22.928844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:59.197 qpair failed and we were unable to recover it. 00:26:59.197 [2024-07-15 22:43:22.928978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.197 [2024-07-15 22:43:22.928992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:59.197 qpair failed and we were unable to recover it. 00:26:59.197 [2024-07-15 22:43:22.929122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.197 [2024-07-15 22:43:22.929138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:59.197 qpair failed and we were unable to recover it. 00:26:59.197 [2024-07-15 22:43:22.929268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.197 [2024-07-15 22:43:22.929282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:59.197 qpair failed and we were unable to recover it. 00:26:59.197 [2024-07-15 22:43:22.929469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.197 [2024-07-15 22:43:22.929483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:59.197 qpair failed and we were unable to recover it. 
00:26:59.197 [2024-07-15 22:43:22.929600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.197 [2024-07-15 22:43:22.929613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:59.197 qpair failed and we were unable to recover it. 00:26:59.197 [2024-07-15 22:43:22.929795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.197 [2024-07-15 22:43:22.929809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:59.197 qpair failed and we were unable to recover it. 00:26:59.197 [2024-07-15 22:43:22.930021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.197 [2024-07-15 22:43:22.930034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:59.197 qpair failed and we were unable to recover it. 00:26:59.197 [2024-07-15 22:43:22.930151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.197 [2024-07-15 22:43:22.930165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:59.197 qpair failed and we were unable to recover it. 00:26:59.197 [2024-07-15 22:43:22.930292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.197 [2024-07-15 22:43:22.930306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:59.197 qpair failed and we were unable to recover it. 00:26:59.197 [2024-07-15 22:43:22.930414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.197 [2024-07-15 22:43:22.930428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:59.197 qpair failed and we were unable to recover it. 00:26:59.197 [2024-07-15 22:43:22.930619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.197 [2024-07-15 22:43:22.930632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:59.197 qpair failed and we were unable to recover it. 00:26:59.197 [2024-07-15 22:43:22.930759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.197 [2024-07-15 22:43:22.930772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:59.197 qpair failed and we were unable to recover it. 00:26:59.197 [2024-07-15 22:43:22.930901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.197 [2024-07-15 22:43:22.930914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:59.197 qpair failed and we were unable to recover it. 00:26:59.197 [2024-07-15 22:43:22.931105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.197 [2024-07-15 22:43:22.931118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:59.197 qpair failed and we were unable to recover it. 
00:26:59.197 [2024-07-15 22:43:22.931249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.197 [2024-07-15 22:43:22.931263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:59.197 qpair failed and we were unable to recover it. 00:26:59.197 [2024-07-15 22:43:22.931382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.197 [2024-07-15 22:43:22.931395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:59.197 qpair failed and we were unable to recover it. 00:26:59.197 [2024-07-15 22:43:22.931523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.197 [2024-07-15 22:43:22.931536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:59.197 qpair failed and we were unable to recover it. 00:26:59.197 [2024-07-15 22:43:22.931722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.197 [2024-07-15 22:43:22.931735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:59.197 qpair failed and we were unable to recover it. 00:26:59.197 [2024-07-15 22:43:22.931925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.197 [2024-07-15 22:43:22.931938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:59.197 qpair failed and we were unable to recover it. 00:26:59.197 [2024-07-15 22:43:22.932142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.197 [2024-07-15 22:43:22.932156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:59.197 qpair failed and we were unable to recover it. 00:26:59.197 [2024-07-15 22:43:22.932275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.197 [2024-07-15 22:43:22.932289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:59.197 qpair failed and we were unable to recover it. 00:26:59.197 [2024-07-15 22:43:22.932541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.197 [2024-07-15 22:43:22.932554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:59.197 qpair failed and we were unable to recover it. 00:26:59.197 [2024-07-15 22:43:22.932764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.197 [2024-07-15 22:43:22.932778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:59.197 qpair failed and we were unable to recover it. 00:26:59.197 [2024-07-15 22:43:22.932895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.197 [2024-07-15 22:43:22.932908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:59.197 qpair failed and we were unable to recover it. 
00:26:59.197 [2024-07-15 22:43:22.933110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.197 [2024-07-15 22:43:22.933124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:59.197 qpair failed and we were unable to recover it. 00:26:59.197 [2024-07-15 22:43:22.933326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.197 [2024-07-15 22:43:22.933340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:59.197 qpair failed and we were unable to recover it. 00:26:59.197 [2024-07-15 22:43:22.933460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.197 [2024-07-15 22:43:22.933473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:59.197 qpair failed and we were unable to recover it. 00:26:59.197 [2024-07-15 22:43:22.933659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.197 [2024-07-15 22:43:22.933673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:59.197 qpair failed and we were unable to recover it. 00:26:59.197 [2024-07-15 22:43:22.933803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.197 [2024-07-15 22:43:22.933815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.197 qpair failed and we were unable to recover it. 00:26:59.197 [2024-07-15 22:43:22.933928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.197 [2024-07-15 22:43:22.933939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.198 qpair failed and we were unable to recover it. 00:26:59.198 [2024-07-15 22:43:22.934111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.198 [2024-07-15 22:43:22.934121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.198 qpair failed and we were unable to recover it. 00:26:59.198 [2024-07-15 22:43:22.934213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.198 [2024-07-15 22:43:22.934223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.198 qpair failed and we were unable to recover it. 00:26:59.198 [2024-07-15 22:43:22.934416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.198 [2024-07-15 22:43:22.934426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.198 qpair failed and we were unable to recover it. 00:26:59.198 [2024-07-15 22:43:22.934556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.198 [2024-07-15 22:43:22.934565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.198 qpair failed and we were unable to recover it. 
00:26:59.198 [2024-07-15 22:43:22.934809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.198 [2024-07-15 22:43:22.934820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.198 qpair failed and we were unable to recover it. 00:26:59.198 [2024-07-15 22:43:22.935079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.198 [2024-07-15 22:43:22.935089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.198 qpair failed and we were unable to recover it. 00:26:59.198 [2024-07-15 22:43:22.935201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.198 [2024-07-15 22:43:22.935210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.198 qpair failed and we were unable to recover it. 00:26:59.198 [2024-07-15 22:43:22.935334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.198 [2024-07-15 22:43:22.935344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.198 qpair failed and we were unable to recover it. 00:26:59.198 [2024-07-15 22:43:22.935463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.198 [2024-07-15 22:43:22.935473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.198 qpair failed and we were unable to recover it. 00:26:59.198 [2024-07-15 22:43:22.935594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.198 [2024-07-15 22:43:22.935604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.198 qpair failed and we were unable to recover it. 00:26:59.198 [2024-07-15 22:43:22.935718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.198 [2024-07-15 22:43:22.935727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.198 qpair failed and we were unable to recover it. 00:26:59.198 [2024-07-15 22:43:22.935848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.198 [2024-07-15 22:43:22.935860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.198 qpair failed and we were unable to recover it. 00:26:59.198 [2024-07-15 22:43:22.936061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.198 [2024-07-15 22:43:22.936071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.198 qpair failed and we were unable to recover it. 00:26:59.198 [2024-07-15 22:43:22.936263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.198 [2024-07-15 22:43:22.936273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.198 qpair failed and we were unable to recover it. 
00:26:59.198 [2024-07-15 22:43:22.936386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.198 [2024-07-15 22:43:22.936397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.198 qpair failed and we were unable to recover it. 00:26:59.198 [2024-07-15 22:43:22.936577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.198 [2024-07-15 22:43:22.936587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.198 qpair failed and we were unable to recover it. 00:26:59.198 [2024-07-15 22:43:22.936795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.198 [2024-07-15 22:43:22.936805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.198 qpair failed and we were unable to recover it. 00:26:59.198 [2024-07-15 22:43:22.936999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.198 [2024-07-15 22:43:22.937009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.198 qpair failed and we were unable to recover it. 00:26:59.198 [2024-07-15 22:43:22.937118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.198 [2024-07-15 22:43:22.937127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.198 qpair failed and we were unable to recover it. 00:26:59.198 [2024-07-15 22:43:22.937239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.198 [2024-07-15 22:43:22.937249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.198 qpair failed and we were unable to recover it. 00:26:59.198 [2024-07-15 22:43:22.937365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.198 [2024-07-15 22:43:22.937375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.198 qpair failed and we were unable to recover it. 00:26:59.198 [2024-07-15 22:43:22.937573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.198 [2024-07-15 22:43:22.937583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.198 qpair failed and we were unable to recover it. 00:26:59.198 [2024-07-15 22:43:22.937691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.198 [2024-07-15 22:43:22.937700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.198 qpair failed and we were unable to recover it. 00:26:59.198 [2024-07-15 22:43:22.937814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.198 [2024-07-15 22:43:22.937823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.198 qpair failed and we were unable to recover it. 
00:26:59.198 [2024-07-15 22:43:22.937906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.198 [2024-07-15 22:43:22.937916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.198 qpair failed and we were unable to recover it. 00:26:59.198 [2024-07-15 22:43:22.938167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.198 [2024-07-15 22:43:22.938177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.198 qpair failed and we were unable to recover it. 00:26:59.198 [2024-07-15 22:43:22.938358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.198 [2024-07-15 22:43:22.938368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.198 qpair failed and we were unable to recover it. 00:26:59.198 [2024-07-15 22:43:22.938491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.198 [2024-07-15 22:43:22.938501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.198 qpair failed and we were unable to recover it. 00:26:59.198 [2024-07-15 22:43:22.938674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.198 [2024-07-15 22:43:22.938685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.198 qpair failed and we were unable to recover it. 00:26:59.198 [2024-07-15 22:43:22.938793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.198 [2024-07-15 22:43:22.938803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.198 qpair failed and we were unable to recover it. 00:26:59.198 [2024-07-15 22:43:22.938938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.198 [2024-07-15 22:43:22.938948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.198 qpair failed and we were unable to recover it. 00:26:59.198 [2024-07-15 22:43:22.939124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.198 [2024-07-15 22:43:22.939134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.198 qpair failed and we were unable to recover it. 00:26:59.198 [2024-07-15 22:43:22.939323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.198 [2024-07-15 22:43:22.939334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.198 qpair failed and we were unable to recover it. 00:26:59.198 [2024-07-15 22:43:22.939458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.198 [2024-07-15 22:43:22.939468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.198 qpair failed and we were unable to recover it. 
00:26:59.198 [2024-07-15 22:43:22.939646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.198 [2024-07-15 22:43:22.939656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.198 qpair failed and we were unable to recover it. 00:26:59.198 [2024-07-15 22:43:22.939833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.198 [2024-07-15 22:43:22.939843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.198 qpair failed and we were unable to recover it. 00:26:59.198 [2024-07-15 22:43:22.940024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.198 [2024-07-15 22:43:22.940035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.198 qpair failed and we were unable to recover it. 00:26:59.198 [2024-07-15 22:43:22.940199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.198 [2024-07-15 22:43:22.940210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.198 qpair failed and we were unable to recover it. 00:26:59.198 [2024-07-15 22:43:22.940301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.198 [2024-07-15 22:43:22.940318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:59.198 qpair failed and we were unable to recover it. 00:26:59.198 [2024-07-15 22:43:22.940507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.198 [2024-07-15 22:43:22.940522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:59.198 qpair failed and we were unable to recover it. 00:26:59.198 [2024-07-15 22:43:22.940775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.198 [2024-07-15 22:43:22.940789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:59.199 qpair failed and we were unable to recover it. 00:26:59.199 [2024-07-15 22:43:22.940976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.199 [2024-07-15 22:43:22.940989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:59.199 qpair failed and we were unable to recover it. 00:26:59.199 [2024-07-15 22:43:22.941142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.199 [2024-07-15 22:43:22.941156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:59.199 qpair failed and we were unable to recover it. 00:26:59.199 [2024-07-15 22:43:22.941284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.199 [2024-07-15 22:43:22.941298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:59.199 qpair failed and we were unable to recover it. 
00:26:59.199 [2024-07-15 22:43:22.941436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.199 [2024-07-15 22:43:22.941449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:59.199 qpair failed and we were unable to recover it. 00:26:59.199 [2024-07-15 22:43:22.941633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.199 [2024-07-15 22:43:22.941646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:59.199 qpair failed and we were unable to recover it. 00:26:59.199 [2024-07-15 22:43:22.941768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.199 [2024-07-15 22:43:22.941780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:59.199 qpair failed and we were unable to recover it. 00:26:59.199 [2024-07-15 22:43:22.941859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.199 [2024-07-15 22:43:22.941872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:59.199 qpair failed and we were unable to recover it. 00:26:59.199 [2024-07-15 22:43:22.942059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.199 [2024-07-15 22:43:22.942072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:59.199 qpair failed and we were unable to recover it. 00:26:59.199 [2024-07-15 22:43:22.942282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.199 [2024-07-15 22:43:22.942296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:59.199 qpair failed and we were unable to recover it. 00:26:59.199 [2024-07-15 22:43:22.942511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.199 [2024-07-15 22:43:22.942524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:59.199 qpair failed and we were unable to recover it. 00:26:59.199 [2024-07-15 22:43:22.942725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.199 [2024-07-15 22:43:22.942746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:59.199 qpair failed and we were unable to recover it. 00:26:59.199 [2024-07-15 22:43:22.942892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.199 [2024-07-15 22:43:22.942905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:59.199 qpair failed and we were unable to recover it. 00:26:59.199 [2024-07-15 22:43:22.943111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.199 [2024-07-15 22:43:22.943124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:59.199 qpair failed and we were unable to recover it. 
00:26:59.199 [2024-07-15 22:43:22.943330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.199 [2024-07-15 22:43:22.943345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:59.199 qpair failed and we were unable to recover it. 00:26:59.199 [2024-07-15 22:43:22.943527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.199 [2024-07-15 22:43:22.943540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:59.199 qpair failed and we were unable to recover it. 00:26:59.199 [2024-07-15 22:43:22.943672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.199 [2024-07-15 22:43:22.943686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:59.199 qpair failed and we were unable to recover it. 00:26:59.199 [2024-07-15 22:43:22.943880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.199 [2024-07-15 22:43:22.943895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:59.199 qpair failed and we were unable to recover it. 00:26:59.199 [2024-07-15 22:43:22.944100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.199 [2024-07-15 22:43:22.944113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:59.199 qpair failed and we were unable to recover it. 00:26:59.199 [2024-07-15 22:43:22.944240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.199 [2024-07-15 22:43:22.944255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:59.199 qpair failed and we were unable to recover it. 00:26:59.199 [2024-07-15 22:43:22.944449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.199 [2024-07-15 22:43:22.944463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:59.199 qpair failed and we were unable to recover it. 00:26:59.199 [2024-07-15 22:43:22.944643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.199 [2024-07-15 22:43:22.944657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:59.199 qpair failed and we were unable to recover it. 00:26:59.199 [2024-07-15 22:43:22.944845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.199 [2024-07-15 22:43:22.944859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:59.199 qpair failed and we were unable to recover it. 00:26:59.199 [2024-07-15 22:43:22.944987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.199 [2024-07-15 22:43:22.945000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:59.199 qpair failed and we were unable to recover it. 
00:26:59.199 [2024-07-15 22:43:22.945163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.199 [2024-07-15 22:43:22.945176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:59.199 qpair failed and we were unable to recover it. 00:26:59.199 [2024-07-15 22:43:22.945307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.199 [2024-07-15 22:43:22.945321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:59.199 qpair failed and we were unable to recover it. 00:26:59.199 [2024-07-15 22:43:22.945569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.199 [2024-07-15 22:43:22.945582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:59.199 qpair failed and we were unable to recover it. 00:26:59.199 [2024-07-15 22:43:22.945708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.199 [2024-07-15 22:43:22.945722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:59.199 qpair failed and we were unable to recover it. 00:26:59.199 [2024-07-15 22:43:22.945849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.199 [2024-07-15 22:43:22.945862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:59.199 qpair failed and we were unable to recover it. 00:26:59.199 [2024-07-15 22:43:22.945994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.199 [2024-07-15 22:43:22.946007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:59.199 qpair failed and we were unable to recover it. 00:26:59.199 [2024-07-15 22:43:22.946104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.199 [2024-07-15 22:43:22.946117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:59.199 qpair failed and we were unable to recover it. 00:26:59.199 [2024-07-15 22:43:22.946258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.199 [2024-07-15 22:43:22.946272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:59.199 qpair failed and we were unable to recover it. 00:26:59.199 [2024-07-15 22:43:22.946524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.199 [2024-07-15 22:43:22.946537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:59.199 qpair failed and we were unable to recover it. 00:26:59.199 [2024-07-15 22:43:22.946664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.199 [2024-07-15 22:43:22.946678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:59.199 qpair failed and we were unable to recover it. 
00:26:59.199 [2024-07-15 22:43:22.946808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.199 [2024-07-15 22:43:22.946821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:59.199 qpair failed and we were unable to recover it. 00:26:59.199 [2024-07-15 22:43:22.947008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.199 [2024-07-15 22:43:22.947021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:59.199 qpair failed and we were unable to recover it. 00:26:59.199 [2024-07-15 22:43:22.947201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.200 [2024-07-15 22:43:22.947214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:59.200 qpair failed and we were unable to recover it. 00:26:59.200 [2024-07-15 22:43:22.947373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.200 [2024-07-15 22:43:22.947387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420 00:26:59.200 qpair failed and we were unable to recover it. 00:26:59.200 [2024-07-15 22:43:22.947530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.200 [2024-07-15 22:43:22.947542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.200 qpair failed and we were unable to recover it. 00:26:59.200 [2024-07-15 22:43:22.947658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.200 [2024-07-15 22:43:22.947668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.200 qpair failed and we were unable to recover it. 00:26:59.200 [2024-07-15 22:43:22.947791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.200 [2024-07-15 22:43:22.947802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.200 qpair failed and we were unable to recover it. 00:26:59.200 [2024-07-15 22:43:22.947882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.200 [2024-07-15 22:43:22.947892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.200 qpair failed and we were unable to recover it. 00:26:59.200 [2024-07-15 22:43:22.948006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.200 [2024-07-15 22:43:22.948016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.200 qpair failed and we were unable to recover it. 00:26:59.200 [2024-07-15 22:43:22.948190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.200 [2024-07-15 22:43:22.948200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.200 qpair failed and we were unable to recover it. 
00:26:59.200 [2024-07-15 22:43:22.948376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.200 [2024-07-15 22:43:22.948386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.200 qpair failed and we were unable to recover it. 00:26:59.200 [2024-07-15 22:43:22.948509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.200 [2024-07-15 22:43:22.948519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.200 qpair failed and we were unable to recover it. 00:26:59.200 [2024-07-15 22:43:22.948639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.200 [2024-07-15 22:43:22.948649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.200 qpair failed and we were unable to recover it. 00:26:59.200 [2024-07-15 22:43:22.948845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.200 [2024-07-15 22:43:22.948855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.200 qpair failed and we were unable to recover it. 00:26:59.200 [2024-07-15 22:43:22.949039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.200 [2024-07-15 22:43:22.949048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.200 qpair failed and we were unable to recover it. 00:26:59.200 [2024-07-15 22:43:22.949240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.200 [2024-07-15 22:43:22.949253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.200 qpair failed and we were unable to recover it. 00:26:59.200 [2024-07-15 22:43:22.949439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.200 [2024-07-15 22:43:22.949448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.200 qpair failed and we were unable to recover it. 00:26:59.200 [2024-07-15 22:43:22.949566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.200 [2024-07-15 22:43:22.949578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.200 qpair failed and we were unable to recover it. 00:26:59.200 [2024-07-15 22:43:22.949657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.200 [2024-07-15 22:43:22.949666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.200 qpair failed and we were unable to recover it. 00:26:59.200 [2024-07-15 22:43:22.949877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.200 [2024-07-15 22:43:22.949889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.200 qpair failed and we were unable to recover it. 
00:26:59.200 [2024-07-15 22:43:22.950011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.200 [2024-07-15 22:43:22.950021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:59.200 qpair failed and we were unable to recover it.
00:26:59.200 [2024-07-15 22:43:22.950216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.200 [2024-07-15 22:43:22.950229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:59.200 qpair failed and we were unable to recover it.
00:26:59.200 [2024-07-15 22:43:22.950364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.200 [2024-07-15 22:43:22.950375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:59.200 qpair failed and we were unable to recover it.
00:26:59.200 [2024-07-15 22:43:22.950500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.200 [2024-07-15 22:43:22.950510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:59.200 qpair failed and we were unable to recover it.
00:26:59.200 [2024-07-15 22:43:22.950686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.200 [2024-07-15 22:43:22.950697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:59.200 qpair failed and we were unable to recover it.
00:26:59.200 [2024-07-15 22:43:22.950936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.200 [2024-07-15 22:43:22.950946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:59.200 qpair failed and we were unable to recover it.
00:26:59.200 [2024-07-15 22:43:22.951139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.200 [2024-07-15 22:43:22.951149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:59.200 qpair failed and we were unable to recover it.
00:26:59.200 [2024-07-15 22:43:22.951272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.200 [2024-07-15 22:43:22.951282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:59.200 qpair failed and we were unable to recover it.
00:26:59.200 [2024-07-15 22:43:22.951420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.200 [2024-07-15 22:43:22.951429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:59.200 qpair failed and we were unable to recover it.
00:26:59.200 [2024-07-15 22:43:22.951568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.200 [2024-07-15 22:43:22.951578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:59.200 qpair failed and we were unable to recover it.
00:26:59.200 [2024-07-15 22:43:22.951706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.200 [2024-07-15 22:43:22.951715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:59.200 qpair failed and we were unable to recover it.
00:26:59.200 [2024-07-15 22:43:22.951906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.200 [2024-07-15 22:43:22.951916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:59.200 qpair failed and we were unable to recover it.
00:26:59.200 [2024-07-15 22:43:22.952039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.200 [2024-07-15 22:43:22.952048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:59.200 qpair failed and we were unable to recover it.
00:26:59.200 [2024-07-15 22:43:22.952252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.200 [2024-07-15 22:43:22.952262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:59.200 qpair failed and we were unable to recover it.
00:26:59.200 [2024-07-15 22:43:22.952374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.200 [2024-07-15 22:43:22.952384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:59.200 qpair failed and we were unable to recover it.
00:26:59.200 [2024-07-15 22:43:22.952515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.200 [2024-07-15 22:43:22.952525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:59.200 qpair failed and we were unable to recover it.
00:26:59.200 [2024-07-15 22:43:22.952641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.200 [2024-07-15 22:43:22.952651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:59.200 qpair failed and we were unable to recover it.
00:26:59.200 [2024-07-15 22:43:22.952763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.200 [2024-07-15 22:43:22.952773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:59.200 qpair failed and we were unable to recover it.
00:26:59.200 [2024-07-15 22:43:22.952919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.200 [2024-07-15 22:43:22.952929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:59.200 qpair failed and we were unable to recover it.
00:26:59.200 [2024-07-15 22:43:22.953055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.200 [2024-07-15 22:43:22.953064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:59.200 qpair failed and we were unable to recover it.
00:26:59.200 [2024-07-15 22:43:22.953199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.200 [2024-07-15 22:43:22.953208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:59.200 qpair failed and we were unable to recover it.
00:26:59.200 [2024-07-15 22:43:22.953323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.200 [2024-07-15 22:43:22.953333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:59.200 qpair failed and we were unable to recover it.
00:26:59.200 [2024-07-15 22:43:22.953468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.200 [2024-07-15 22:43:22.953478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:59.200 qpair failed and we were unable to recover it.
00:26:59.201 [2024-07-15 22:43:22.953675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.201 [2024-07-15 22:43:22.953684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:59.201 qpair failed and we were unable to recover it.
00:26:59.201 [2024-07-15 22:43:22.953815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.201 [2024-07-15 22:43:22.953831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420
00:26:59.201 qpair failed and we were unable to recover it.
00:26:59.201 [2024-07-15 22:43:22.954046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.201 [2024-07-15 22:43:22.954060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420
00:26:59.201 qpair failed and we were unable to recover it.
00:26:59.201 [2024-07-15 22:43:22.954187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.201 [2024-07-15 22:43:22.954202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420
00:26:59.201 qpair failed and we were unable to recover it.
00:26:59.201 [2024-07-15 22:43:22.954346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.201 [2024-07-15 22:43:22.954360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420
00:26:59.201 qpair failed and we were unable to recover it.
00:26:59.201 [2024-07-15 22:43:22.954442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.201 [2024-07-15 22:43:22.954455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420
00:26:59.201 qpair failed and we were unable to recover it.
00:26:59.201 [2024-07-15 22:43:22.954640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.201 [2024-07-15 22:43:22.954654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420
00:26:59.201 qpair failed and we were unable to recover it.
00:26:59.201 [2024-07-15 22:43:22.954801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.201 [2024-07-15 22:43:22.954815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420
00:26:59.201 qpair failed and we were unable to recover it.
00:26:59.201 [2024-07-15 22:43:22.954988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.201 [2024-07-15 22:43:22.955003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420
00:26:59.201 qpair failed and we were unable to recover it.
00:26:59.201 [2024-07-15 22:43:22.955138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.201 [2024-07-15 22:43:22.955152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420
00:26:59.201 qpair failed and we were unable to recover it.
00:26:59.201 [2024-07-15 22:43:22.955338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.201 [2024-07-15 22:43:22.955352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420
00:26:59.201 qpair failed and we were unable to recover it.
00:26:59.201 [2024-07-15 22:43:22.955484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.201 [2024-07-15 22:43:22.955498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420
00:26:59.201 qpair failed and we were unable to recover it.
00:26:59.201 [2024-07-15 22:43:22.955703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.201 [2024-07-15 22:43:22.955716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420
00:26:59.201 qpair failed and we were unable to recover it.
00:26:59.201 [2024-07-15 22:43:22.955836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.201 [2024-07-15 22:43:22.955849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420
00:26:59.201 qpair failed and we were unable to recover it.
00:26:59.201 [2024-07-15 22:43:22.956051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.201 [2024-07-15 22:43:22.956065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420
00:26:59.201 qpair failed and we were unable to recover it.
00:26:59.201 [2024-07-15 22:43:22.956197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.201 [2024-07-15 22:43:22.956211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420
00:26:59.201 qpair failed and we were unable to recover it.
00:26:59.201 [2024-07-15 22:43:22.956413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.201 [2024-07-15 22:43:22.956427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420
00:26:59.201 qpair failed and we were unable to recover it.
00:26:59.201 [2024-07-15 22:43:22.956555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.201 [2024-07-15 22:43:22.956567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420
00:26:59.201 qpair failed and we were unable to recover it.
00:26:59.201 [2024-07-15 22:43:22.956751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.201 [2024-07-15 22:43:22.956765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420
00:26:59.201 qpair failed and we were unable to recover it.
00:26:59.201 [2024-07-15 22:43:22.956903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.201 [2024-07-15 22:43:22.956917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420
00:26:59.201 qpair failed and we were unable to recover it.
00:26:59.201 [2024-07-15 22:43:22.957044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.201 [2024-07-15 22:43:22.957058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420
00:26:59.201 qpair failed and we were unable to recover it.
00:26:59.201 [2024-07-15 22:43:22.957179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.201 [2024-07-15 22:43:22.957192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420
00:26:59.201 qpair failed and we were unable to recover it.
00:26:59.201 [2024-07-15 22:43:22.957383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.201 [2024-07-15 22:43:22.957396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420
00:26:59.201 qpair failed and we were unable to recover it.
00:26:59.201 [2024-07-15 22:43:22.957595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.201 [2024-07-15 22:43:22.957608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420
00:26:59.201 qpair failed and we were unable to recover it.
00:26:59.201 [2024-07-15 22:43:22.957823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.201 [2024-07-15 22:43:22.957836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420
00:26:59.201 qpair failed and we were unable to recover it.
00:26:59.201 [2024-07-15 22:43:22.957966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.201 [2024-07-15 22:43:22.957979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420
00:26:59.201 qpair failed and we were unable to recover it.
00:26:59.201 [2024-07-15 22:43:22.958159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.201 [2024-07-15 22:43:22.958173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420
00:26:59.201 qpair failed and we were unable to recover it.
00:26:59.201 [2024-07-15 22:43:22.958299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.201 [2024-07-15 22:43:22.958313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420
00:26:59.201 qpair failed and we were unable to recover it.
00:26:59.201 [2024-07-15 22:43:22.958442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.201 [2024-07-15 22:43:22.958456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420
00:26:59.201 qpair failed and we were unable to recover it.
00:26:59.201 [2024-07-15 22:43:22.958589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.201 [2024-07-15 22:43:22.958602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420
00:26:59.201 qpair failed and we were unable to recover it.
00:26:59.201 [2024-07-15 22:43:22.958740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.201 [2024-07-15 22:43:22.958753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420
00:26:59.201 qpair failed and we were unable to recover it.
00:26:59.201 [2024-07-15 22:43:22.958934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.201 [2024-07-15 22:43:22.958948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420
00:26:59.201 qpair failed and we were unable to recover it.
00:26:59.201 [2024-07-15 22:43:22.959101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.201 [2024-07-15 22:43:22.959115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420
00:26:59.201 qpair failed and we were unable to recover it.
00:26:59.201 [2024-07-15 22:43:22.959240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.201 [2024-07-15 22:43:22.959254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420
00:26:59.201 qpair failed and we were unable to recover it.
00:26:59.201 [2024-07-15 22:43:22.959397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.201 [2024-07-15 22:43:22.959411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420
00:26:59.201 qpair failed and we were unable to recover it.
00:26:59.201 [2024-07-15 22:43:22.959530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.201 [2024-07-15 22:43:22.959543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420
00:26:59.201 qpair failed and we were unable to recover it.
00:26:59.201 [2024-07-15 22:43:22.959673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.201 [2024-07-15 22:43:22.959687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420
00:26:59.201 qpair failed and we were unable to recover it.
00:26:59.201 [2024-07-15 22:43:22.959817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.201 [2024-07-15 22:43:22.959829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:59.201 qpair failed and we were unable to recover it.
00:26:59.201 [2024-07-15 22:43:22.959952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.201 [2024-07-15 22:43:22.959961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:59.201 qpair failed and we were unable to recover it.
00:26:59.201 [2024-07-15 22:43:22.960058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.201 [2024-07-15 22:43:22.960068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:59.201 qpair failed and we were unable to recover it.
00:26:59.201 [2024-07-15 22:43:22.960245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.202 [2024-07-15 22:43:22.960255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:59.202 qpair failed and we were unable to recover it.
00:26:59.202 [2024-07-15 22:43:22.960378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.202 [2024-07-15 22:43:22.960392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:59.202 qpair failed and we were unable to recover it.
00:26:59.202 [2024-07-15 22:43:22.960577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.202 [2024-07-15 22:43:22.960587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:59.202 qpair failed and we were unable to recover it.
00:26:59.202 [2024-07-15 22:43:22.960698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.202 [2024-07-15 22:43:22.960708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:59.202 qpair failed and we were unable to recover it.
00:26:59.202 [2024-07-15 22:43:22.960980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.202 [2024-07-15 22:43:22.960992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:59.202 qpair failed and we were unable to recover it.
00:26:59.202 [2024-07-15 22:43:22.961101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.202 [2024-07-15 22:43:22.961112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:59.202 qpair failed and we were unable to recover it.
00:26:59.202 [2024-07-15 22:43:22.961367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.202 [2024-07-15 22:43:22.961378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:59.202 qpair failed and we were unable to recover it.
00:26:59.202 [2024-07-15 22:43:22.961502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.202 [2024-07-15 22:43:22.961512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:59.202 qpair failed and we were unable to recover it.
00:26:59.202 [2024-07-15 22:43:22.961635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.202 [2024-07-15 22:43:22.961645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:59.202 qpair failed and we were unable to recover it.
00:26:59.202 [2024-07-15 22:43:22.961757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.202 [2024-07-15 22:43:22.961767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:59.202 qpair failed and we were unable to recover it.
00:26:59.202 [2024-07-15 22:43:22.961978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.202 [2024-07-15 22:43:22.961988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:59.202 qpair failed and we were unable to recover it.
00:26:59.202 [2024-07-15 22:43:22.962170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.202 [2024-07-15 22:43:22.962179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:59.202 qpair failed and we were unable to recover it.
00:26:59.202 [2024-07-15 22:43:22.962306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.202 [2024-07-15 22:43:22.962317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:59.202 qpair failed and we were unable to recover it.
00:26:59.202 [2024-07-15 22:43:22.962493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.202 [2024-07-15 22:43:22.962502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:59.202 qpair failed and we were unable to recover it.
00:26:59.202 [2024-07-15 22:43:22.962693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.202 [2024-07-15 22:43:22.962703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:59.202 qpair failed and we were unable to recover it.
00:26:59.202 [2024-07-15 22:43:22.962836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.202 [2024-07-15 22:43:22.962847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:59.202 qpair failed and we were unable to recover it.
00:26:59.202 [2024-07-15 22:43:22.962939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.202 [2024-07-15 22:43:22.962949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:59.202 qpair failed and we were unable to recover it.
00:26:59.202 [2024-07-15 22:43:22.963069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.202 [2024-07-15 22:43:22.963078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:59.202 qpair failed and we were unable to recover it.
00:26:59.202 [2024-07-15 22:43:22.963265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.202 [2024-07-15 22:43:22.963275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:59.202 qpair failed and we were unable to recover it.
00:26:59.202 [2024-07-15 22:43:22.963453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.202 [2024-07-15 22:43:22.963463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:59.202 qpair failed and we were unable to recover it.
00:26:59.202 [2024-07-15 22:43:22.963600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.202 [2024-07-15 22:43:22.963610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:59.202 qpair failed and we were unable to recover it.
00:26:59.202 [2024-07-15 22:43:22.963799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.202 [2024-07-15 22:43:22.963808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:59.202 qpair failed and we were unable to recover it.
00:26:59.202 [2024-07-15 22:43:22.963924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.202 [2024-07-15 22:43:22.963934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:59.202 qpair failed and we were unable to recover it.
00:26:59.202 [2024-07-15 22:43:22.964027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.202 [2024-07-15 22:43:22.964036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:59.202 qpair failed and we were unable to recover it.
00:26:59.202 [2024-07-15 22:43:22.964124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.202 [2024-07-15 22:43:22.964135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:59.202 qpair failed and we were unable to recover it.
00:26:59.202 [2024-07-15 22:43:22.964329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.202 [2024-07-15 22:43:22.964340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:59.202 qpair failed and we were unable to recover it.
00:26:59.202 [2024-07-15 22:43:22.964449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.202 [2024-07-15 22:43:22.964460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:59.202 qpair failed and we were unable to recover it.
00:26:59.202 [2024-07-15 22:43:22.964583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.202 [2024-07-15 22:43:22.964593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:59.202 qpair failed and we were unable to recover it.
00:26:59.202 [2024-07-15 22:43:22.964782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.202 [2024-07-15 22:43:22.964792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:59.202 qpair failed and we were unable to recover it.
00:26:59.202 [2024-07-15 22:43:22.964918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.202 [2024-07-15 22:43:22.964927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:59.202 qpair failed and we were unable to recover it.
00:26:59.202 [2024-07-15 22:43:22.965116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.202 [2024-07-15 22:43:22.965127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:59.202 qpair failed and we were unable to recover it.
00:26:59.202 [2024-07-15 22:43:22.965242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.202 [2024-07-15 22:43:22.965252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:59.202 qpair failed and we were unable to recover it.
00:26:59.202 [2024-07-15 22:43:22.965432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.202 [2024-07-15 22:43:22.965442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:59.202 qpair failed and we were unable to recover it.
00:26:59.202 [2024-07-15 22:43:22.965615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.202 [2024-07-15 22:43:22.965625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:59.202 qpair failed and we were unable to recover it.
00:26:59.202 [2024-07-15 22:43:22.965753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.202 [2024-07-15 22:43:22.965763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:59.202 qpair failed and we were unable to recover it.
00:26:59.202 [2024-07-15 22:43:22.965881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.202 [2024-07-15 22:43:22.965891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:59.202 qpair failed and we were unable to recover it.
00:26:59.202 [2024-07-15 22:43:22.966075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.202 [2024-07-15 22:43:22.966084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:59.202 qpair failed and we were unable to recover it.
00:26:59.202 [2024-07-15 22:43:22.966182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.202 [2024-07-15 22:43:22.966191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:59.202 qpair failed and we were unable to recover it.
00:26:59.202 [2024-07-15 22:43:22.966318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.202 [2024-07-15 22:43:22.966328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:59.202 qpair failed and we were unable to recover it.
00:26:59.202 [2024-07-15 22:43:22.966458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.202 [2024-07-15 22:43:22.966468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:59.202 qpair failed and we were unable to recover it.
00:26:59.202 [2024-07-15 22:43:22.966584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.203 [2024-07-15 22:43:22.966594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:59.203 qpair failed and we were unable to recover it.
00:26:59.203 [2024-07-15 22:43:22.966851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.203 [2024-07-15 22:43:22.966863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:59.203 qpair failed and we were unable to recover it.
00:26:59.203 [2024-07-15 22:43:22.966941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.203 [2024-07-15 22:43:22.966951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:59.203 qpair failed and we were unable to recover it.
00:26:59.203 [2024-07-15 22:43:22.967152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.203 [2024-07-15 22:43:22.967163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:59.203 qpair failed and we were unable to recover it.
00:26:59.203 [2024-07-15 22:43:22.967301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.203 [2024-07-15 22:43:22.967312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:59.203 qpair failed and we were unable to recover it.
00:26:59.203 [2024-07-15 22:43:22.967425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.203 [2024-07-15 22:43:22.967435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:59.203 qpair failed and we were unable to recover it.
00:26:59.203 [2024-07-15 22:43:22.967524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.203 [2024-07-15 22:43:22.967534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:59.203 qpair failed and we were unable to recover it.
00:26:59.203 [2024-07-15 22:43:22.967774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.203 [2024-07-15 22:43:22.967785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:59.203 qpair failed and we were unable to recover it.
00:26:59.203 [2024-07-15 22:43:22.967974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.203 [2024-07-15 22:43:22.967984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:59.203 qpair failed and we were unable to recover it.
00:26:59.203 [2024-07-15 22:43:22.968158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.203 [2024-07-15 22:43:22.968168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:59.203 qpair failed and we were unable to recover it.
00:26:59.203 [2024-07-15 22:43:22.968377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.203 [2024-07-15 22:43:22.968387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:59.203 qpair failed and we were unable to recover it.
00:26:59.203 [2024-07-15 22:43:22.968550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.203 [2024-07-15 22:43:22.968561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:59.203 qpair failed and we were unable to recover it.
00:26:59.203 [2024-07-15 22:43:22.968693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.203 [2024-07-15 22:43:22.968703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:59.203 qpair failed and we were unable to recover it.
00:26:59.203 [2024-07-15 22:43:22.968835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.203 [2024-07-15 22:43:22.968845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:59.203 qpair failed and we were unable to recover it.
00:26:59.203 [2024-07-15 22:43:22.968970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.203 [2024-07-15 22:43:22.968979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:59.203 qpair failed and we were unable to recover it.
00:26:59.203 [2024-07-15 22:43:22.969107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.203 [2024-07-15 22:43:22.969117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:59.203 qpair failed and we were unable to recover it.
00:26:59.203 [2024-07-15 22:43:22.969245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.203 [2024-07-15 22:43:22.969255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:59.203 qpair failed and we were unable to recover it.
00:26:59.203 [2024-07-15 22:43:22.969375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.203 [2024-07-15 22:43:22.969385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:59.203 qpair failed and we were unable to recover it.
00:26:59.203 [2024-07-15 22:43:22.969496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.203 [2024-07-15 22:43:22.969506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:59.203 qpair failed and we were unable to recover it.
00:26:59.203 [2024-07-15 22:43:22.969705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.203 [2024-07-15 22:43:22.969715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:59.203 qpair failed and we were unable to recover it.
00:26:59.203 [2024-07-15 22:43:22.969837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.203 [2024-07-15 22:43:22.969849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:59.203 qpair failed and we were unable to recover it.
00:26:59.203 [2024-07-15 22:43:22.970042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.203 [2024-07-15 22:43:22.970051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:59.203 qpair failed and we were unable to recover it.
00:26:59.203 [2024-07-15 22:43:22.970181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.203 [2024-07-15 22:43:22.970191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:59.203 qpair failed and we were unable to recover it.
00:26:59.203 [2024-07-15 22:43:22.970302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.203 [2024-07-15 22:43:22.970312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:59.203 qpair failed and we were unable to recover it.
00:26:59.203 [2024-07-15 22:43:22.970429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.203 [2024-07-15 22:43:22.970439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:59.203 qpair failed and we were unable to recover it.
00:26:59.203 [2024-07-15 22:43:22.970553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.203 [2024-07-15 22:43:22.970563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:59.203 qpair failed and we were unable to recover it.
00:26:59.203 [2024-07-15 22:43:22.970684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.203 [2024-07-15 22:43:22.970694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:59.203 qpair failed and we were unable to recover it.
00:26:59.203 [2024-07-15 22:43:22.970821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.203 [2024-07-15 22:43:22.970832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:59.203 qpair failed and we were unable to recover it.
00:26:59.203 [2024-07-15 22:43:22.970966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.203 [2024-07-15 22:43:22.970979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:59.203 qpair failed and we were unable to recover it.
00:26:59.203 [2024-07-15 22:43:22.971089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.203 [2024-07-15 22:43:22.971099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:59.203 qpair failed and we were unable to recover it.
00:26:59.203 [2024-07-15 22:43:22.971214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.203 [2024-07-15 22:43:22.971237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:59.203 qpair failed and we were unable to recover it.
00:26:59.203 [2024-07-15 22:43:22.971439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.203 [2024-07-15 22:43:22.971449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:59.203 qpair failed and we were unable to recover it.
00:26:59.203 [2024-07-15 22:43:22.971701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.203 [2024-07-15 22:43:22.971711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:59.203 qpair failed and we were unable to recover it.
00:26:59.203 [2024-07-15 22:43:22.971889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.203 [2024-07-15 22:43:22.971899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:59.203 qpair failed and we were unable to recover it.
00:26:59.203 [2024-07-15 22:43:22.972010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.203 [2024-07-15 22:43:22.972020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:59.203 qpair failed and we were unable to recover it.
00:26:59.203 [2024-07-15 22:43:22.972133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.204 [2024-07-15 22:43:22.972144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:59.204 qpair failed and we were unable to recover it.
00:26:59.204 [2024-07-15 22:43:22.972326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.204 [2024-07-15 22:43:22.972337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:59.204 qpair failed and we were unable to recover it.
00:26:59.204 [2024-07-15 22:43:22.972447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.204 [2024-07-15 22:43:22.972457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:59.204 qpair failed and we were unable to recover it.
00:26:59.204 [2024-07-15 22:43:22.972581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.204 [2024-07-15 22:43:22.972592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:59.204 qpair failed and we were unable to recover it.
00:26:59.204 [2024-07-15 22:43:22.972786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.204 [2024-07-15 22:43:22.972796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:59.204 qpair failed and we were unable to recover it.
00:26:59.204 [2024-07-15 22:43:22.972973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.204 [2024-07-15 22:43:22.972985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:59.204 qpair failed and we were unable to recover it.
00:26:59.204 [2024-07-15 22:43:22.973170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.204 [2024-07-15 22:43:22.973182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:59.204 qpair failed and we were unable to recover it.
00:26:59.204 [2024-07-15 22:43:22.973304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.204 [2024-07-15 22:43:22.973314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:59.204 qpair failed and we were unable to recover it.
00:26:59.204 [2024-07-15 22:43:22.973439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.204 [2024-07-15 22:43:22.973449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:59.204 qpair failed and we were unable to recover it.
00:26:59.204 [2024-07-15 22:43:22.973630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.204 [2024-07-15 22:43:22.973640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:59.204 qpair failed and we were unable to recover it.
00:26:59.204 [2024-07-15 22:43:22.973824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.204 [2024-07-15 22:43:22.973836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:59.204 qpair failed and we were unable to recover it.
00:26:59.204 [2024-07-15 22:43:22.973946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.204 [2024-07-15 22:43:22.973955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:59.204 qpair failed and we were unable to recover it.
00:26:59.204 [2024-07-15 22:43:22.974194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.204 [2024-07-15 22:43:22.974203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:59.204 qpair failed and we were unable to recover it.
00:26:59.204 [2024-07-15 22:43:22.974384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.204 [2024-07-15 22:43:22.974394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:59.204 qpair failed and we were unable to recover it.
00:26:59.204 [2024-07-15 22:43:22.974563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.204 [2024-07-15 22:43:22.974573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:59.204 qpair failed and we were unable to recover it.
00:26:59.204 [2024-07-15 22:43:22.974747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.204 [2024-07-15 22:43:22.974757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:59.204 qpair failed and we were unable to recover it.
00:26:59.204 [2024-07-15 22:43:22.974882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.204 [2024-07-15 22:43:22.974892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:59.204 qpair failed and we were unable to recover it.
00:26:59.204 [2024-07-15 22:43:22.975064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.204 [2024-07-15 22:43:22.975075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:59.204 qpair failed and we were unable to recover it.
00:26:59.204 [2024-07-15 22:43:22.975269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.204 [2024-07-15 22:43:22.975280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:59.204 qpair failed and we were unable to recover it.
00:26:59.204 [2024-07-15 22:43:22.975522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.204 [2024-07-15 22:43:22.975532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:59.204 qpair failed and we were unable to recover it.
00:26:59.204 [2024-07-15 22:43:22.975730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.204 [2024-07-15 22:43:22.975739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:59.204 qpair failed and we were unable to recover it.
00:26:59.204 [2024-07-15 22:43:22.975989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.204 [2024-07-15 22:43:22.976000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:59.204 qpair failed and we were unable to recover it.
00:26:59.204 [2024-07-15 22:43:22.976111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.204 [2024-07-15 22:43:22.976120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:59.204 qpair failed and we were unable to recover it.
00:26:59.204 [2024-07-15 22:43:22.976314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.204 [2024-07-15 22:43:22.976325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:59.204 qpair failed and we were unable to recover it.
00:26:59.204 [2024-07-15 22:43:22.976464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.204 [2024-07-15 22:43:22.976473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:59.204 qpair failed and we were unable to recover it.
00:26:59.204 [2024-07-15 22:43:22.976649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.204 [2024-07-15 22:43:22.976660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:59.204 qpair failed and we were unable to recover it.
00:26:59.204 [2024-07-15 22:43:22.976782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.204 [2024-07-15 22:43:22.976792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:59.204 qpair failed and we were unable to recover it.
00:26:59.204 [2024-07-15 22:43:22.976921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.204 [2024-07-15 22:43:22.976931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:59.204 qpair failed and we were unable to recover it.
00:26:59.204 [2024-07-15 22:43:22.977104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.204 [2024-07-15 22:43:22.977114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:59.204 qpair failed and we were unable to recover it.
00:26:59.204 [2024-07-15 22:43:22.977306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.204 [2024-07-15 22:43:22.977317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:59.204 qpair failed and we were unable to recover it.
00:26:59.204 [2024-07-15 22:43:22.977503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.204 [2024-07-15 22:43:22.977513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:59.204 qpair failed and we were unable to recover it.
00:26:59.204 [2024-07-15 22:43:22.977744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.204 [2024-07-15 22:43:22.977754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:59.204 qpair failed and we were unable to recover it.
00:26:59.204 [2024-07-15 22:43:22.977863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.204 [2024-07-15 22:43:22.977873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.204 qpair failed and we were unable to recover it. 00:26:59.204 [2024-07-15 22:43:22.977995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.204 [2024-07-15 22:43:22.978005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.204 qpair failed and we were unable to recover it. 00:26:59.204 [2024-07-15 22:43:22.978127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.204 [2024-07-15 22:43:22.978137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.204 qpair failed and we were unable to recover it. 00:26:59.204 [2024-07-15 22:43:22.978268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.204 [2024-07-15 22:43:22.978279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.204 qpair failed and we were unable to recover it. 00:26:59.204 [2024-07-15 22:43:22.978391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.204 [2024-07-15 22:43:22.978401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.204 qpair failed and we were unable to recover it. 00:26:59.204 [2024-07-15 22:43:22.978533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.204 [2024-07-15 22:43:22.978544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.204 qpair failed and we were unable to recover it. 00:26:59.204 [2024-07-15 22:43:22.978653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.204 [2024-07-15 22:43:22.978663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.204 qpair failed and we were unable to recover it. 00:26:59.204 [2024-07-15 22:43:22.978774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.204 [2024-07-15 22:43:22.978784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.204 qpair failed and we were unable to recover it. 00:26:59.204 [2024-07-15 22:43:22.978910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.204 [2024-07-15 22:43:22.978920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.205 qpair failed and we were unable to recover it. 00:26:59.205 [2024-07-15 22:43:22.979032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.205 [2024-07-15 22:43:22.979042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.205 qpair failed and we were unable to recover it. 
00:26:59.205 [2024-07-15 22:43:22.979244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.205 [2024-07-15 22:43:22.979255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.205 qpair failed and we were unable to recover it. 00:26:59.205 [2024-07-15 22:43:22.979384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.205 [2024-07-15 22:43:22.979394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.205 qpair failed and we were unable to recover it. 00:26:59.205 [2024-07-15 22:43:22.979509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.205 [2024-07-15 22:43:22.979520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.205 qpair failed and we were unable to recover it. 00:26:59.205 [2024-07-15 22:43:22.979637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.205 [2024-07-15 22:43:22.979648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.205 qpair failed and we were unable to recover it. 00:26:59.205 [2024-07-15 22:43:22.979845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.205 [2024-07-15 22:43:22.979857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.205 qpair failed and we were unable to recover it. 00:26:59.205 [2024-07-15 22:43:22.979969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.205 [2024-07-15 22:43:22.979978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.205 qpair failed and we were unable to recover it. 00:26:59.205 [2024-07-15 22:43:22.980095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.205 [2024-07-15 22:43:22.980104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.205 qpair failed and we were unable to recover it. 00:26:59.205 [2024-07-15 22:43:22.980296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.205 [2024-07-15 22:43:22.980306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.205 qpair failed and we were unable to recover it. 00:26:59.205 [2024-07-15 22:43:22.980422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.205 [2024-07-15 22:43:22.980433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.205 qpair failed and we were unable to recover it. 00:26:59.205 [2024-07-15 22:43:22.980556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.205 [2024-07-15 22:43:22.980565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.205 qpair failed and we were unable to recover it. 
00:26:59.205 [2024-07-15 22:43:22.980765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.205 [2024-07-15 22:43:22.980774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.205 qpair failed and we were unable to recover it. 00:26:59.205 [2024-07-15 22:43:22.980912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.205 [2024-07-15 22:43:22.980921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.205 qpair failed and we were unable to recover it. 00:26:59.205 [2024-07-15 22:43:22.981059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.205 [2024-07-15 22:43:22.981068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.205 qpair failed and we were unable to recover it. 00:26:59.205 [2024-07-15 22:43:22.981254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.205 [2024-07-15 22:43:22.981264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.205 qpair failed and we were unable to recover it. 00:26:59.205 [2024-07-15 22:43:22.981387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.205 [2024-07-15 22:43:22.981397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.205 qpair failed and we were unable to recover it. 00:26:59.205 [2024-07-15 22:43:22.981508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.205 [2024-07-15 22:43:22.981518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.205 qpair failed and we were unable to recover it. 00:26:59.205 [2024-07-15 22:43:22.981702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.205 [2024-07-15 22:43:22.981712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.205 qpair failed and we were unable to recover it. 00:26:59.205 [2024-07-15 22:43:22.981834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.205 [2024-07-15 22:43:22.981843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.205 qpair failed and we were unable to recover it. 00:26:59.205 [2024-07-15 22:43:22.981970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.205 [2024-07-15 22:43:22.981980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.205 qpair failed and we were unable to recover it. 00:26:59.205 [2024-07-15 22:43:22.982106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.205 [2024-07-15 22:43:22.982115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.205 qpair failed and we were unable to recover it. 
00:26:59.205 [2024-07-15 22:43:22.982235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.205 [2024-07-15 22:43:22.982245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.205 qpair failed and we were unable to recover it. 00:26:59.205 [2024-07-15 22:43:22.982369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.205 [2024-07-15 22:43:22.982379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.205 qpair failed and we were unable to recover it. 00:26:59.205 [2024-07-15 22:43:22.982570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.205 [2024-07-15 22:43:22.982580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.205 qpair failed and we were unable to recover it. 00:26:59.205 [2024-07-15 22:43:22.982694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.205 [2024-07-15 22:43:22.982704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.205 qpair failed and we were unable to recover it. 00:26:59.205 [2024-07-15 22:43:22.982895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.205 [2024-07-15 22:43:22.982905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.205 qpair failed and we were unable to recover it. 00:26:59.205 [2024-07-15 22:43:22.983086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.205 [2024-07-15 22:43:22.983096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.205 qpair failed and we were unable to recover it. 00:26:59.205 [2024-07-15 22:43:22.983208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.205 [2024-07-15 22:43:22.983217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.205 qpair failed and we were unable to recover it. 00:26:59.205 [2024-07-15 22:43:22.983368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.205 [2024-07-15 22:43:22.983378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.205 qpair failed and we were unable to recover it. 00:26:59.205 [2024-07-15 22:43:22.983497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.205 [2024-07-15 22:43:22.983507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.205 qpair failed and we were unable to recover it. 00:26:59.205 [2024-07-15 22:43:22.983617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.205 [2024-07-15 22:43:22.983627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.205 qpair failed and we were unable to recover it. 
00:26:59.205 [2024-07-15 22:43:22.983734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.205 [2024-07-15 22:43:22.983745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.205 qpair failed and we were unable to recover it. 00:26:59.205 [2024-07-15 22:43:22.983820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.205 [2024-07-15 22:43:22.983830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.205 qpair failed and we were unable to recover it. 00:26:59.205 [2024-07-15 22:43:22.983947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.205 [2024-07-15 22:43:22.983957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.205 qpair failed and we were unable to recover it. 00:26:59.205 [2024-07-15 22:43:22.984084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.205 [2024-07-15 22:43:22.984094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.205 qpair failed and we were unable to recover it. 00:26:59.205 [2024-07-15 22:43:22.984213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.205 [2024-07-15 22:43:22.984223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.205 qpair failed and we were unable to recover it. 00:26:59.205 [2024-07-15 22:43:22.984433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.205 [2024-07-15 22:43:22.984443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.205 qpair failed and we were unable to recover it. 00:26:59.205 [2024-07-15 22:43:22.984628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.205 [2024-07-15 22:43:22.984638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.205 qpair failed and we were unable to recover it. 00:26:59.205 [2024-07-15 22:43:22.984759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.205 [2024-07-15 22:43:22.984769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.205 qpair failed and we were unable to recover it. 00:26:59.205 [2024-07-15 22:43:22.984948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.205 [2024-07-15 22:43:22.984958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.205 qpair failed and we were unable to recover it. 00:26:59.205 [2024-07-15 22:43:22.985070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.206 [2024-07-15 22:43:22.985080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.206 qpair failed and we were unable to recover it. 
00:26:59.206 [2024-07-15 22:43:22.985266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.206 [2024-07-15 22:43:22.985276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.206 qpair failed and we were unable to recover it. 00:26:59.206 [2024-07-15 22:43:22.985398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.206 [2024-07-15 22:43:22.985411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.206 qpair failed and we were unable to recover it. 00:26:59.206 [2024-07-15 22:43:22.985485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.206 [2024-07-15 22:43:22.985494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.206 qpair failed and we were unable to recover it. 00:26:59.206 [2024-07-15 22:43:22.985610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.206 [2024-07-15 22:43:22.985620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.206 qpair failed and we were unable to recover it. 00:26:59.206 [2024-07-15 22:43:22.985743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.206 [2024-07-15 22:43:22.985755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.206 qpair failed and we were unable to recover it. 00:26:59.206 [2024-07-15 22:43:22.985868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.206 [2024-07-15 22:43:22.985878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.206 qpair failed and we were unable to recover it. 00:26:59.206 [2024-07-15 22:43:22.986042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.206 [2024-07-15 22:43:22.986052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.206 qpair failed and we were unable to recover it. 00:26:59.206 [2024-07-15 22:43:22.986238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.206 [2024-07-15 22:43:22.986249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.206 qpair failed and we were unable to recover it. 00:26:59.206 [2024-07-15 22:43:22.986374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.206 [2024-07-15 22:43:22.986384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.206 qpair failed and we were unable to recover it. 00:26:59.206 [2024-07-15 22:43:22.986574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.206 [2024-07-15 22:43:22.986584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.206 qpair failed and we were unable to recover it. 
00:26:59.206 [2024-07-15 22:43:22.986703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.206 [2024-07-15 22:43:22.986712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.206 qpair failed and we were unable to recover it. 00:26:59.206 [2024-07-15 22:43:22.986833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.206 [2024-07-15 22:43:22.986843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.206 qpair failed and we were unable to recover it. 00:26:59.206 [2024-07-15 22:43:22.986951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.206 [2024-07-15 22:43:22.986961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.206 qpair failed and we were unable to recover it. 00:26:59.206 [2024-07-15 22:43:22.987081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.206 [2024-07-15 22:43:22.987091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.206 qpair failed and we were unable to recover it. 00:26:59.206 [2024-07-15 22:43:22.987212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.206 [2024-07-15 22:43:22.987222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.206 qpair failed and we were unable to recover it. 00:26:59.206 [2024-07-15 22:43:22.987345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.206 [2024-07-15 22:43:22.987355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.206 qpair failed and we were unable to recover it. 00:26:59.206 [2024-07-15 22:43:22.987529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.206 [2024-07-15 22:43:22.987540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.206 qpair failed and we were unable to recover it. 00:26:59.206 [2024-07-15 22:43:22.987651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.206 [2024-07-15 22:43:22.987661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.206 qpair failed and we were unable to recover it. 00:26:59.206 [2024-07-15 22:43:22.987802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.206 [2024-07-15 22:43:22.987813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.206 qpair failed and we were unable to recover it. 00:26:59.206 [2024-07-15 22:43:22.987935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.206 [2024-07-15 22:43:22.987945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.206 qpair failed and we were unable to recover it. 
00:26:59.206 [2024-07-15 22:43:22.988132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.206 [2024-07-15 22:43:22.988143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.206 qpair failed and we were unable to recover it. 00:26:59.206 [2024-07-15 22:43:22.988268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.206 [2024-07-15 22:43:22.988278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.206 qpair failed and we were unable to recover it. 00:26:59.206 [2024-07-15 22:43:22.988495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.206 [2024-07-15 22:43:22.988505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.206 qpair failed and we were unable to recover it. 00:26:59.206 [2024-07-15 22:43:22.988615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.206 [2024-07-15 22:43:22.988625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.206 qpair failed and we were unable to recover it. 00:26:59.206 [2024-07-15 22:43:22.988746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.206 [2024-07-15 22:43:22.988756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.206 qpair failed and we were unable to recover it. 00:26:59.206 [2024-07-15 22:43:22.988878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.206 [2024-07-15 22:43:22.988888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.206 qpair failed and we were unable to recover it. 00:26:59.206 [2024-07-15 22:43:22.988997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.206 [2024-07-15 22:43:22.989007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.206 qpair failed and we were unable to recover it. 00:26:59.206 [2024-07-15 22:43:22.989134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.206 [2024-07-15 22:43:22.989145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.206 qpair failed and we were unable to recover it. 00:26:59.206 [2024-07-15 22:43:22.989261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.206 [2024-07-15 22:43:22.989272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.206 qpair failed and we were unable to recover it. 00:26:59.206 [2024-07-15 22:43:22.989383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.206 [2024-07-15 22:43:22.989394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.206 qpair failed and we were unable to recover it. 
00:26:59.206 [2024-07-15 22:43:22.989525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.206 [2024-07-15 22:43:22.989534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.206 qpair failed and we were unable to recover it. 00:26:59.206 [2024-07-15 22:43:22.989728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.206 [2024-07-15 22:43:22.989737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.206 qpair failed and we were unable to recover it. 00:26:59.206 [2024-07-15 22:43:22.989849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.206 [2024-07-15 22:43:22.989859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.206 qpair failed and we were unable to recover it. 00:26:59.206 [2024-07-15 22:43:22.990029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.206 [2024-07-15 22:43:22.990039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.206 qpair failed and we were unable to recover it. 00:26:59.206 [2024-07-15 22:43:22.990150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.206 [2024-07-15 22:43:22.990159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.206 qpair failed and we were unable to recover it. 00:26:59.206 [2024-07-15 22:43:22.990349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.206 [2024-07-15 22:43:22.990359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.206 qpair failed and we were unable to recover it. 00:26:59.206 [2024-07-15 22:43:22.990468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.206 [2024-07-15 22:43:22.990478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.206 qpair failed and we were unable to recover it. 00:26:59.206 [2024-07-15 22:43:22.990661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.206 [2024-07-15 22:43:22.990671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.206 qpair failed and we were unable to recover it. 00:26:59.206 [2024-07-15 22:43:22.990791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.206 [2024-07-15 22:43:22.990800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.206 qpair failed and we were unable to recover it. 00:26:59.206 [2024-07-15 22:43:22.990900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.206 [2024-07-15 22:43:22.990910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.206 qpair failed and we were unable to recover it. 
00:26:59.206 [2024-07-15 22:43:22.991027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.207 [2024-07-15 22:43:22.991037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.207 qpair failed and we were unable to recover it. 00:26:59.207 [2024-07-15 22:43:22.991179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.207 [2024-07-15 22:43:22.991189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.207 qpair failed and we were unable to recover it. 00:26:59.207 [2024-07-15 22:43:22.991319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.207 [2024-07-15 22:43:22.991330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.207 qpair failed and we were unable to recover it. 00:26:59.207 [2024-07-15 22:43:22.991511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.207 [2024-07-15 22:43:22.991522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.207 qpair failed and we were unable to recover it. 00:26:59.207 [2024-07-15 22:43:22.991692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.207 [2024-07-15 22:43:22.991705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.207 qpair failed and we were unable to recover it. 00:26:59.207 [2024-07-15 22:43:22.991814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.207 [2024-07-15 22:43:22.991824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.207 qpair failed and we were unable to recover it. 00:26:59.207 [2024-07-15 22:43:22.991939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.207 [2024-07-15 22:43:22.991949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.207 qpair failed and we were unable to recover it. 00:26:59.207 [2024-07-15 22:43:22.992144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.207 [2024-07-15 22:43:22.992155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.207 qpair failed and we were unable to recover it. 00:26:59.207 [2024-07-15 22:43:22.992300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.207 [2024-07-15 22:43:22.992310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.207 qpair failed and we were unable to recover it. 00:26:59.207 [2024-07-15 22:43:22.992426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.207 [2024-07-15 22:43:22.992436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.207 qpair failed and we were unable to recover it. 
00:26:59.207 [2024-07-15 22:43:22.992621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.207 [2024-07-15 22:43:22.992630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.207 qpair failed and we were unable to recover it. 00:26:59.207 [2024-07-15 22:43:22.992816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.207 [2024-07-15 22:43:22.992827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.207 qpair failed and we were unable to recover it. 00:26:59.207 [2024-07-15 22:43:22.992941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.207 [2024-07-15 22:43:22.992952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.207 qpair failed and we were unable to recover it. 00:26:59.207 [2024-07-15 22:43:22.993150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.207 [2024-07-15 22:43:22.993161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.207 qpair failed and we were unable to recover it. 00:26:59.207 [2024-07-15 22:43:22.993279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.207 [2024-07-15 22:43:22.993289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.207 qpair failed and we were unable to recover it. 00:26:59.207 [2024-07-15 22:43:22.993406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.207 [2024-07-15 22:43:22.993417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.207 qpair failed and we were unable to recover it. 00:26:59.207 [2024-07-15 22:43:22.993531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.207 [2024-07-15 22:43:22.993540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.207 qpair failed and we were unable to recover it. 00:26:59.207 [2024-07-15 22:43:22.993719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.207 [2024-07-15 22:43:22.993729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.207 qpair failed and we were unable to recover it. 00:26:59.207 [2024-07-15 22:43:22.993950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.207 [2024-07-15 22:43:22.993960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.207 qpair failed and we were unable to recover it. 00:26:59.207 [2024-07-15 22:43:22.994139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.207 [2024-07-15 22:43:22.994150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.207 qpair failed and we were unable to recover it. 
00:26:59.207 [2024-07-15 22:43:22.994269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.207 [2024-07-15 22:43:22.994279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.207 qpair failed and we were unable to recover it. 00:26:59.207 [2024-07-15 22:43:22.994401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.207 [2024-07-15 22:43:22.994410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.207 qpair failed and we were unable to recover it. 00:26:59.207 [2024-07-15 22:43:22.994635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.207 [2024-07-15 22:43:22.994645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.207 qpair failed and we were unable to recover it. 00:26:59.207 [2024-07-15 22:43:22.994739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.207 [2024-07-15 22:43:22.994750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.207 qpair failed and we were unable to recover it. 00:26:59.207 [2024-07-15 22:43:22.994818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.207 [2024-07-15 22:43:22.994828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.207 qpair failed and we were unable to recover it. 00:26:59.207 [2024-07-15 22:43:22.995100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.207 [2024-07-15 22:43:22.995110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.207 qpair failed and we were unable to recover it. 00:26:59.207 [2024-07-15 22:43:22.995301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.207 [2024-07-15 22:43:22.995311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.207 qpair failed and we were unable to recover it. 00:26:59.207 [2024-07-15 22:43:22.995422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.207 [2024-07-15 22:43:22.995433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.207 qpair failed and we were unable to recover it. 00:26:59.207 [2024-07-15 22:43:22.995617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.207 [2024-07-15 22:43:22.995626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.207 qpair failed and we were unable to recover it. 00:26:59.207 [2024-07-15 22:43:22.995878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.207 [2024-07-15 22:43:22.995887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.207 qpair failed and we were unable to recover it. 
00:26:59.207 [2024-07-15 22:43:22.996016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.207 [2024-07-15 22:43:22.996026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.207 qpair failed and we were unable to recover it. 00:26:59.207 [2024-07-15 22:43:22.996133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.207 [2024-07-15 22:43:22.996143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.207 qpair failed and we were unable to recover it. 00:26:59.207 [2024-07-15 22:43:22.996269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.207 [2024-07-15 22:43:22.996279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.207 qpair failed and we were unable to recover it. 00:26:59.207 [2024-07-15 22:43:22.996407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.207 [2024-07-15 22:43:22.996417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.207 qpair failed and we were unable to recover it. 00:26:59.207 [2024-07-15 22:43:22.996527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.207 [2024-07-15 22:43:22.996537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.207 qpair failed and we were unable to recover it. 00:26:59.207 [2024-07-15 22:43:22.996611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.207 [2024-07-15 22:43:22.996621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.207 qpair failed and we were unable to recover it. 00:26:59.208 [2024-07-15 22:43:22.996733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.208 [2024-07-15 22:43:22.996743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.208 qpair failed and we were unable to recover it. 00:26:59.208 [2024-07-15 22:43:22.996848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.208 [2024-07-15 22:43:22.996858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.208 qpair failed and we were unable to recover it. 00:26:59.208 [2024-07-15 22:43:22.997037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.208 [2024-07-15 22:43:22.997047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.208 qpair failed and we were unable to recover it. 00:26:59.208 [2024-07-15 22:43:22.997159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.208 [2024-07-15 22:43:22.997169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.208 qpair failed and we were unable to recover it. 
00:26:59.208 [2024-07-15 22:43:22.997280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.208 [2024-07-15 22:43:22.997290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.208 qpair failed and we were unable to recover it. 00:26:59.208 [2024-07-15 22:43:22.997404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.208 [2024-07-15 22:43:22.997415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.208 qpair failed and we were unable to recover it. 00:26:59.208 [2024-07-15 22:43:22.997524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.208 [2024-07-15 22:43:22.997534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.208 qpair failed and we were unable to recover it. 00:26:59.208 [2024-07-15 22:43:22.997712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.208 [2024-07-15 22:43:22.997722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.208 qpair failed and we were unable to recover it. 00:26:59.208 [2024-07-15 22:43:22.997822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.208 [2024-07-15 22:43:22.997834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.208 qpair failed and we were unable to recover it. 00:26:59.208 [2024-07-15 22:43:22.998040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.208 [2024-07-15 22:43:22.998050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.208 qpair failed and we were unable to recover it. 00:26:59.208 [2024-07-15 22:43:22.998162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.208 [2024-07-15 22:43:22.998171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.208 qpair failed and we were unable to recover it. 00:26:59.208 [2024-07-15 22:43:22.998297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.208 [2024-07-15 22:43:22.998308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.208 qpair failed and we were unable to recover it. 00:26:59.208 [2024-07-15 22:43:22.998431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.208 [2024-07-15 22:43:22.998441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.208 qpair failed and we were unable to recover it. 00:26:59.208 [2024-07-15 22:43:22.998546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.208 [2024-07-15 22:43:22.998556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.208 qpair failed and we were unable to recover it. 
00:26:59.208 [2024-07-15 22:43:22.998758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.208 [2024-07-15 22:43:22.998769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.208 qpair failed and we were unable to recover it. 00:26:59.208 [2024-07-15 22:43:22.998898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.208 [2024-07-15 22:43:22.998908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.208 qpair failed and we were unable to recover it. 00:26:59.208 [2024-07-15 22:43:22.999047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.208 [2024-07-15 22:43:22.999057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.208 qpair failed and we were unable to recover it. 00:26:59.208 [2024-07-15 22:43:22.999235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.208 [2024-07-15 22:43:22.999246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.208 qpair failed and we were unable to recover it. 00:26:59.208 [2024-07-15 22:43:22.999369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.208 [2024-07-15 22:43:22.999379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.208 qpair failed and we were unable to recover it. 00:26:59.208 [2024-07-15 22:43:22.999505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.208 [2024-07-15 22:43:22.999516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.208 qpair failed and we were unable to recover it. 00:26:59.208 [2024-07-15 22:43:22.999643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.208 [2024-07-15 22:43:22.999654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.208 qpair failed and we were unable to recover it. 00:26:59.208 [2024-07-15 22:43:22.999786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.208 [2024-07-15 22:43:22.999796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.208 qpair failed and we were unable to recover it. 00:26:59.208 [2024-07-15 22:43:22.999909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.208 [2024-07-15 22:43:22.999919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.208 qpair failed and we were unable to recover it. 00:26:59.208 [2024-07-15 22:43:23.000064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.208 [2024-07-15 22:43:23.000075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.208 qpair failed and we were unable to recover it. 
00:26:59.208 [2024-07-15 22:43:23.000195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.208 [2024-07-15 22:43:23.000206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.208 qpair failed and we were unable to recover it. 00:26:59.208 [2024-07-15 22:43:23.000330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.208 [2024-07-15 22:43:23.000341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.208 qpair failed and we were unable to recover it. 00:26:59.208 [2024-07-15 22:43:23.000460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.208 [2024-07-15 22:43:23.000469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.208 qpair failed and we were unable to recover it. 00:26:59.208 [2024-07-15 22:43:23.000546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.208 [2024-07-15 22:43:23.000556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.208 qpair failed and we were unable to recover it. 00:26:59.208 [2024-07-15 22:43:23.000729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.208 [2024-07-15 22:43:23.000739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.208 qpair failed and we were unable to recover it. 00:26:59.208 [2024-07-15 22:43:23.000854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.208 [2024-07-15 22:43:23.000864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.208 qpair failed and we were unable to recover it. 00:26:59.208 [2024-07-15 22:43:23.001042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.208 [2024-07-15 22:43:23.001053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.208 qpair failed and we were unable to recover it. 00:26:59.208 [2024-07-15 22:43:23.001325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.208 [2024-07-15 22:43:23.001335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.208 qpair failed and we were unable to recover it. 00:26:59.208 [2024-07-15 22:43:23.001461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.208 [2024-07-15 22:43:23.001471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.208 qpair failed and we were unable to recover it. 00:26:59.208 [2024-07-15 22:43:23.001618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.208 [2024-07-15 22:43:23.001628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.208 qpair failed and we were unable to recover it. 
00:26:59.208 [2024-07-15 22:43:23.001743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.208 [2024-07-15 22:43:23.001753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.208 qpair failed and we were unable to recover it. 00:26:59.208 [2024-07-15 22:43:23.001880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.208 [2024-07-15 22:43:23.001890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.208 qpair failed and we were unable to recover it. 00:26:59.208 [2024-07-15 22:43:23.002010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.208 [2024-07-15 22:43:23.002019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.208 qpair failed and we were unable to recover it. 00:26:59.208 [2024-07-15 22:43:23.002151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.208 [2024-07-15 22:43:23.002163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.208 qpair failed and we were unable to recover it. 00:26:59.208 [2024-07-15 22:43:23.002256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.208 [2024-07-15 22:43:23.002266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.208 qpair failed and we were unable to recover it. 00:26:59.208 [2024-07-15 22:43:23.002386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.208 [2024-07-15 22:43:23.002397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.208 qpair failed and we were unable to recover it. 00:26:59.208 [2024-07-15 22:43:23.002464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.208 [2024-07-15 22:43:23.002474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.209 qpair failed and we were unable to recover it. 00:26:59.209 [2024-07-15 22:43:23.002599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.209 [2024-07-15 22:43:23.002609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.209 qpair failed and we were unable to recover it. 00:26:59.209 [2024-07-15 22:43:23.002728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.209 [2024-07-15 22:43:23.002738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.209 qpair failed and we were unable to recover it. 00:26:59.209 [2024-07-15 22:43:23.002855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.209 [2024-07-15 22:43:23.002866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.209 qpair failed and we were unable to recover it. 
00:26:59.209 [2024-07-15 22:43:23.003006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.209 [2024-07-15 22:43:23.003016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.209 qpair failed and we were unable to recover it. 00:26:59.209 [2024-07-15 22:43:23.003135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.209 [2024-07-15 22:43:23.003145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.209 qpair failed and we were unable to recover it. 00:26:59.209 [2024-07-15 22:43:23.003282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.209 [2024-07-15 22:43:23.003293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.209 qpair failed and we were unable to recover it. 00:26:59.209 [2024-07-15 22:43:23.003470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.209 [2024-07-15 22:43:23.003480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.209 qpair failed and we were unable to recover it. 00:26:59.209 [2024-07-15 22:43:23.003601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.209 [2024-07-15 22:43:23.003613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.209 qpair failed and we were unable to recover it. 00:26:59.209 [2024-07-15 22:43:23.003795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.209 [2024-07-15 22:43:23.003805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.209 qpair failed and we were unable to recover it. 00:26:59.209 [2024-07-15 22:43:23.003927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.209 [2024-07-15 22:43:23.003936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.209 qpair failed and we were unable to recover it. 00:26:59.209 [2024-07-15 22:43:23.004056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.209 [2024-07-15 22:43:23.004066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.209 qpair failed and we were unable to recover it. 00:26:59.209 [2024-07-15 22:43:23.004183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.209 [2024-07-15 22:43:23.004193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.209 qpair failed and we were unable to recover it. 00:26:59.209 [2024-07-15 22:43:23.004321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.209 [2024-07-15 22:43:23.004332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.209 qpair failed and we were unable to recover it. 
00:26:59.209 [2024-07-15 22:43:23.004467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.209 [2024-07-15 22:43:23.004477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.209 qpair failed and we were unable to recover it. 00:26:59.209 [2024-07-15 22:43:23.004593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.209 [2024-07-15 22:43:23.004603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.209 qpair failed and we were unable to recover it. 00:26:59.209 [2024-07-15 22:43:23.004784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.209 [2024-07-15 22:43:23.004794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.209 qpair failed and we were unable to recover it. 00:26:59.209 [2024-07-15 22:43:23.004970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.209 [2024-07-15 22:43:23.004979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.209 qpair failed and we were unable to recover it. 00:26:59.209 [2024-07-15 22:43:23.005089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.209 [2024-07-15 22:43:23.005098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.209 qpair failed and we were unable to recover it. 00:26:59.209 [2024-07-15 22:43:23.005234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.209 [2024-07-15 22:43:23.005244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.209 qpair failed and we were unable to recover it. 00:26:59.209 [2024-07-15 22:43:23.005354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.209 [2024-07-15 22:43:23.005363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.209 qpair failed and we were unable to recover it. 00:26:59.209 [2024-07-15 22:43:23.005481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.209 [2024-07-15 22:43:23.005492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.209 qpair failed and we were unable to recover it. 00:26:59.209 [2024-07-15 22:43:23.005673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.209 [2024-07-15 22:43:23.005684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.209 qpair failed and we were unable to recover it. 00:26:59.209 [2024-07-15 22:43:23.005797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.209 [2024-07-15 22:43:23.005807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.209 qpair failed and we were unable to recover it. 
00:26:59.209 [2024-07-15 22:43:23.005999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.209 [2024-07-15 22:43:23.006008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.209 qpair failed and we were unable to recover it. 00:26:59.209 [2024-07-15 22:43:23.006133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.209 [2024-07-15 22:43:23.006142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.209 qpair failed and we were unable to recover it. 00:26:59.209 [2024-07-15 22:43:23.006313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.209 [2024-07-15 22:43:23.006323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.209 qpair failed and we were unable to recover it. 00:26:59.209 [2024-07-15 22:43:23.006539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.209 [2024-07-15 22:43:23.006550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.209 qpair failed and we were unable to recover it. 00:26:59.209 [2024-07-15 22:43:23.006729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.209 [2024-07-15 22:43:23.006739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.209 qpair failed and we were unable to recover it. 00:26:59.209 [2024-07-15 22:43:23.006934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.209 [2024-07-15 22:43:23.006943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.209 qpair failed and we were unable to recover it. 00:26:59.209 [2024-07-15 22:43:23.007139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.209 [2024-07-15 22:43:23.007149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.209 qpair failed and we were unable to recover it. 00:26:59.209 [2024-07-15 22:43:23.007265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.209 [2024-07-15 22:43:23.007275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.209 qpair failed and we were unable to recover it. 00:26:59.209 [2024-07-15 22:43:23.007390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.209 [2024-07-15 22:43:23.007400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.209 qpair failed and we were unable to recover it. 00:26:59.209 [2024-07-15 22:43:23.007589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.209 [2024-07-15 22:43:23.007599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.209 qpair failed and we were unable to recover it. 
00:26:59.209 [2024-07-15 22:43:23.007798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.209 [2024-07-15 22:43:23.007807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.209 qpair failed and we were unable to recover it. 00:26:59.209 [2024-07-15 22:43:23.007929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.209 [2024-07-15 22:43:23.007939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.209 qpair failed and we were unable to recover it. 00:26:59.209 [2024-07-15 22:43:23.008054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.209 [2024-07-15 22:43:23.008063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.209 qpair failed and we were unable to recover it. 00:26:59.209 [2024-07-15 22:43:23.008155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.209 [2024-07-15 22:43:23.008164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.209 qpair failed and we were unable to recover it. 00:26:59.209 [2024-07-15 22:43:23.008378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.209 [2024-07-15 22:43:23.008389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.209 qpair failed and we were unable to recover it. 00:26:59.209 [2024-07-15 22:43:23.008501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.209 [2024-07-15 22:43:23.008511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.209 qpair failed and we were unable to recover it. 00:26:59.209 [2024-07-15 22:43:23.008615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.209 [2024-07-15 22:43:23.008625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.209 qpair failed and we were unable to recover it. 00:26:59.209 [2024-07-15 22:43:23.008750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.210 [2024-07-15 22:43:23.008759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.210 qpair failed and we were unable to recover it. 00:26:59.210 [2024-07-15 22:43:23.008933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.210 [2024-07-15 22:43:23.008943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.210 qpair failed and we were unable to recover it. 00:26:59.210 [2024-07-15 22:43:23.009051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.210 [2024-07-15 22:43:23.009062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.210 qpair failed and we were unable to recover it. 
00:26:59.210 [2024-07-15 22:43:23.009244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.210 [2024-07-15 22:43:23.009254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.210 qpair failed and we were unable to recover it. 00:26:59.210 [2024-07-15 22:43:23.009429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.210 [2024-07-15 22:43:23.009439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.210 qpair failed and we were unable to recover it. 00:26:59.210 [2024-07-15 22:43:23.009561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.210 [2024-07-15 22:43:23.009571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.210 qpair failed and we were unable to recover it. 00:26:59.210 [2024-07-15 22:43:23.009816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.210 [2024-07-15 22:43:23.009826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.210 qpair failed and we were unable to recover it. 00:26:59.210 [2024-07-15 22:43:23.009960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.210 [2024-07-15 22:43:23.009972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.210 qpair failed and we were unable to recover it. 00:26:59.210 [2024-07-15 22:43:23.010087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.210 [2024-07-15 22:43:23.010097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.210 qpair failed and we were unable to recover it. 00:26:59.210 [2024-07-15 22:43:23.010344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.210 [2024-07-15 22:43:23.010354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.210 qpair failed and we were unable to recover it. 00:26:59.210 [2024-07-15 22:43:23.010452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.210 [2024-07-15 22:43:23.010462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.210 qpair failed and we were unable to recover it. 00:26:59.210 [2024-07-15 22:43:23.010582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.210 [2024-07-15 22:43:23.010592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.210 qpair failed and we were unable to recover it. 00:26:59.210 [2024-07-15 22:43:23.010772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.210 [2024-07-15 22:43:23.010783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.210 qpair failed and we were unable to recover it. 
00:26:59.210 [2024-07-15 22:43:23.011042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.210 [2024-07-15 22:43:23.011052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.210 qpair failed and we were unable to recover it. 00:26:59.210 [2024-07-15 22:43:23.011265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.210 [2024-07-15 22:43:23.011276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.210 qpair failed and we were unable to recover it. 00:26:59.210 [2024-07-15 22:43:23.011392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.210 [2024-07-15 22:43:23.011401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.210 qpair failed and we were unable to recover it. 00:26:59.210 [2024-07-15 22:43:23.011514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.210 [2024-07-15 22:43:23.011523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.210 qpair failed and we were unable to recover it. 00:26:59.210 [2024-07-15 22:43:23.011702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.210 [2024-07-15 22:43:23.011712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.210 qpair failed and we were unable to recover it. 00:26:59.210 [2024-07-15 22:43:23.011906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.210 [2024-07-15 22:43:23.011916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.210 qpair failed and we were unable to recover it. 00:26:59.210 [2024-07-15 22:43:23.012038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.210 [2024-07-15 22:43:23.012048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.210 qpair failed and we were unable to recover it. 00:26:59.210 [2024-07-15 22:43:23.012161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.210 [2024-07-15 22:43:23.012171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.210 qpair failed and we were unable to recover it. 00:26:59.210 [2024-07-15 22:43:23.012293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.210 [2024-07-15 22:43:23.012304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.210 qpair failed and we were unable to recover it. 00:26:59.210 [2024-07-15 22:43:23.012421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.210 [2024-07-15 22:43:23.012431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.210 qpair failed and we were unable to recover it. 
00:26:59.210 [2024-07-15 22:43:23.012609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.210 [2024-07-15 22:43:23.012618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.210 qpair failed and we were unable to recover it. 00:26:59.210 [2024-07-15 22:43:23.012713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.210 [2024-07-15 22:43:23.012722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.210 qpair failed and we were unable to recover it. 00:26:59.210 [2024-07-15 22:43:23.012934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.210 [2024-07-15 22:43:23.012944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.210 qpair failed and we were unable to recover it. 00:26:59.210 [2024-07-15 22:43:23.013137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.210 [2024-07-15 22:43:23.013147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.210 qpair failed and we were unable to recover it. 00:26:59.210 [2024-07-15 22:43:23.013338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.210 [2024-07-15 22:43:23.013348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.210 qpair failed and we were unable to recover it. 00:26:59.210 [2024-07-15 22:43:23.013464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.210 [2024-07-15 22:43:23.013474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.210 qpair failed and we were unable to recover it. 00:26:59.210 [2024-07-15 22:43:23.013607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.210 [2024-07-15 22:43:23.013617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.210 qpair failed and we were unable to recover it. 00:26:59.210 [2024-07-15 22:43:23.013875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.210 [2024-07-15 22:43:23.013886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.210 qpair failed and we were unable to recover it. 00:26:59.210 [2024-07-15 22:43:23.014008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.210 [2024-07-15 22:43:23.014018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.210 qpair failed and we were unable to recover it. 00:26:59.210 [2024-07-15 22:43:23.014127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.210 [2024-07-15 22:43:23.014137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.210 qpair failed and we were unable to recover it. 
00:26:59.210 [2024-07-15 22:43:23.014261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.210 [2024-07-15 22:43:23.014271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.210 qpair failed and we were unable to recover it. 00:26:59.210 [2024-07-15 22:43:23.014406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.210 [2024-07-15 22:43:23.014416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.210 qpair failed and we were unable to recover it. 00:26:59.210 [2024-07-15 22:43:23.014541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.210 [2024-07-15 22:43:23.014551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.210 qpair failed and we were unable to recover it. 00:26:59.210 [2024-07-15 22:43:23.014667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.210 [2024-07-15 22:43:23.014677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.210 qpair failed and we were unable to recover it. 00:26:59.210 [2024-07-15 22:43:23.014856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.210 [2024-07-15 22:43:23.014867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.210 qpair failed and we were unable to recover it. 00:26:59.210 [2024-07-15 22:43:23.014994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.210 [2024-07-15 22:43:23.015003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.210 qpair failed and we were unable to recover it. 00:26:59.210 [2024-07-15 22:43:23.015186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.210 [2024-07-15 22:43:23.015195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.210 qpair failed and we were unable to recover it. 00:26:59.210 [2024-07-15 22:43:23.015377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.210 [2024-07-15 22:43:23.015387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.210 qpair failed and we were unable to recover it. 00:26:59.211 [2024-07-15 22:43:23.015588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.211 [2024-07-15 22:43:23.015598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.211 qpair failed and we were unable to recover it. 00:26:59.211 [2024-07-15 22:43:23.015712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.211 [2024-07-15 22:43:23.015722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.211 qpair failed and we were unable to recover it. 
00:26:59.211 [2024-07-15 22:43:23.015897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.211 [2024-07-15 22:43:23.015907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.211 qpair failed and we were unable to recover it. 00:26:59.211 [2024-07-15 22:43:23.016017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.211 [2024-07-15 22:43:23.016027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.211 qpair failed and we were unable to recover it. 00:26:59.211 [2024-07-15 22:43:23.016137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.211 [2024-07-15 22:43:23.016147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.211 qpair failed and we were unable to recover it. 00:26:59.211 [2024-07-15 22:43:23.016271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.211 [2024-07-15 22:43:23.016281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.211 qpair failed and we were unable to recover it. 00:26:59.211 [2024-07-15 22:43:23.016460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.211 [2024-07-15 22:43:23.016472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.211 qpair failed and we were unable to recover it. 00:26:59.211 [2024-07-15 22:43:23.016593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.211 [2024-07-15 22:43:23.016603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.211 qpair failed and we were unable to recover it. 00:26:59.211 [2024-07-15 22:43:23.016794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.211 [2024-07-15 22:43:23.016804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.211 qpair failed and we were unable to recover it. 00:26:59.211 [2024-07-15 22:43:23.016915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.211 [2024-07-15 22:43:23.016925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.211 qpair failed and we were unable to recover it. 00:26:59.211 [2024-07-15 22:43:23.017098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.211 [2024-07-15 22:43:23.017107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.211 qpair failed and we were unable to recover it. 00:26:59.211 [2024-07-15 22:43:23.017298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.211 [2024-07-15 22:43:23.017308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.211 qpair failed and we were unable to recover it. 
00:26:59.211 [2024-07-15 22:43:23.017481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.211 [2024-07-15 22:43:23.017491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.211 qpair failed and we were unable to recover it. 00:26:59.211 [2024-07-15 22:43:23.017610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.211 [2024-07-15 22:43:23.017621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.211 qpair failed and we were unable to recover it. 00:26:59.211 [2024-07-15 22:43:23.017817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.211 [2024-07-15 22:43:23.017826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.211 qpair failed and we were unable to recover it. 00:26:59.211 [2024-07-15 22:43:23.018001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.211 [2024-07-15 22:43:23.018010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.211 qpair failed and we were unable to recover it. 00:26:59.211 [2024-07-15 22:43:23.018138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.211 [2024-07-15 22:43:23.018148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.211 qpair failed and we were unable to recover it. 00:26:59.211 [2024-07-15 22:43:23.018240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.211 [2024-07-15 22:43:23.018250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.211 qpair failed and we were unable to recover it. 00:26:59.211 [2024-07-15 22:43:23.018522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.211 [2024-07-15 22:43:23.018532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.211 qpair failed and we were unable to recover it. 00:26:59.211 [2024-07-15 22:43:23.018637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.211 [2024-07-15 22:43:23.018646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.211 qpair failed and we were unable to recover it. 00:26:59.211 [2024-07-15 22:43:23.018761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.211 [2024-07-15 22:43:23.018771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.211 qpair failed and we were unable to recover it. 00:26:59.211 [2024-07-15 22:43:23.018879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.211 [2024-07-15 22:43:23.018890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.211 qpair failed and we were unable to recover it. 
00:26:59.211 [2024-07-15 22:43:23.018964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.211 [2024-07-15 22:43:23.018973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.211 qpair failed and we were unable to recover it. 00:26:59.211 [2024-07-15 22:43:23.019078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.211 [2024-07-15 22:43:23.019088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.211 qpair failed and we were unable to recover it. 00:26:59.211 [2024-07-15 22:43:23.019209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.211 [2024-07-15 22:43:23.019218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.211 qpair failed and we were unable to recover it. 00:26:59.211 [2024-07-15 22:43:23.019343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.211 [2024-07-15 22:43:23.019353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.211 qpair failed and we were unable to recover it. 00:26:59.211 [2024-07-15 22:43:23.019468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.211 [2024-07-15 22:43:23.019477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.211 qpair failed and we were unable to recover it. 00:26:59.211 [2024-07-15 22:43:23.019587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.211 [2024-07-15 22:43:23.019598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.211 qpair failed and we were unable to recover it. 00:26:59.211 [2024-07-15 22:43:23.019787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.211 [2024-07-15 22:43:23.019796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.211 qpair failed and we were unable to recover it. 00:26:59.211 [2024-07-15 22:43:23.019934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.211 [2024-07-15 22:43:23.019945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.211 qpair failed and we were unable to recover it. 00:26:59.211 [2024-07-15 22:43:23.020019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.211 [2024-07-15 22:43:23.020028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.211 qpair failed and we were unable to recover it. 00:26:59.211 [2024-07-15 22:43:23.020222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.211 [2024-07-15 22:43:23.020236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.211 qpair failed and we were unable to recover it. 
00:26:59.211 [2024-07-15 22:43:23.020348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.211 [2024-07-15 22:43:23.020359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.211 qpair failed and we were unable to recover it. 00:26:59.211 [2024-07-15 22:43:23.020482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.211 [2024-07-15 22:43:23.020492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.211 qpair failed and we were unable to recover it. 00:26:59.211 [2024-07-15 22:43:23.020609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.211 [2024-07-15 22:43:23.020619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.211 qpair failed and we were unable to recover it. 00:26:59.211 [2024-07-15 22:43:23.020868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.211 [2024-07-15 22:43:23.020878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.211 qpair failed and we were unable to recover it. 00:26:59.211 [2024-07-15 22:43:23.021000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.211 [2024-07-15 22:43:23.021010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.211 qpair failed and we were unable to recover it. 00:26:59.211 [2024-07-15 22:43:23.021111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.211 [2024-07-15 22:43:23.021121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.211 qpair failed and we were unable to recover it. 00:26:59.211 [2024-07-15 22:43:23.021310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.211 [2024-07-15 22:43:23.021320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.211 qpair failed and we were unable to recover it. 00:26:59.211 [2024-07-15 22:43:23.021432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.211 [2024-07-15 22:43:23.021441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.211 qpair failed and we were unable to recover it. 00:26:59.211 [2024-07-15 22:43:23.021579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.212 [2024-07-15 22:43:23.021589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.212 qpair failed and we were unable to recover it. 00:26:59.212 [2024-07-15 22:43:23.021697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.212 [2024-07-15 22:43:23.021707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.212 qpair failed and we were unable to recover it. 
00:26:59.212 [2024-07-15 22:43:23.021847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.212 [2024-07-15 22:43:23.021857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.212 qpair failed and we were unable to recover it. 00:26:59.212 [2024-07-15 22:43:23.021972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.212 [2024-07-15 22:43:23.021981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.212 qpair failed and we were unable to recover it. 00:26:59.212 [2024-07-15 22:43:23.022158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.212 [2024-07-15 22:43:23.022169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.212 qpair failed and we were unable to recover it. 00:26:59.212 [2024-07-15 22:43:23.022344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.212 [2024-07-15 22:43:23.022355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.212 qpair failed and we were unable to recover it. 00:26:59.212 [2024-07-15 22:43:23.022496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.212 [2024-07-15 22:43:23.022509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.212 qpair failed and we were unable to recover it. 00:26:59.212 [2024-07-15 22:43:23.022620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.212 [2024-07-15 22:43:23.022630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.212 qpair failed and we were unable to recover it. 00:26:59.212 [2024-07-15 22:43:23.022747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.212 [2024-07-15 22:43:23.022756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.212 qpair failed and we were unable to recover it. 00:26:59.212 [2024-07-15 22:43:23.022841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.212 [2024-07-15 22:43:23.022850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.212 qpair failed and we were unable to recover it. 00:26:59.212 [2024-07-15 22:43:23.022967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.212 [2024-07-15 22:43:23.022977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.212 qpair failed and we were unable to recover it. 00:26:59.212 [2024-07-15 22:43:23.023103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.212 [2024-07-15 22:43:23.023112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.212 qpair failed and we were unable to recover it. 
00:26:59.212 [2024-07-15 22:43:23.023220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.212 [2024-07-15 22:43:23.023243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.212 qpair failed and we were unable to recover it. 00:26:59.212 [2024-07-15 22:43:23.023372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.212 [2024-07-15 22:43:23.023382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.212 qpair failed and we were unable to recover it. 00:26:59.212 [2024-07-15 22:43:23.023503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.212 [2024-07-15 22:43:23.023512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.212 qpair failed and we were unable to recover it. 00:26:59.212 [2024-07-15 22:43:23.023697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.212 [2024-07-15 22:43:23.023707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.212 qpair failed and we were unable to recover it. 00:26:59.212 [2024-07-15 22:43:23.023846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.212 [2024-07-15 22:43:23.023856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.212 qpair failed and we were unable to recover it. 00:26:59.212 [2024-07-15 22:43:23.024042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.212 [2024-07-15 22:43:23.024052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.212 qpair failed and we were unable to recover it. 00:26:59.212 [2024-07-15 22:43:23.024160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.212 [2024-07-15 22:43:23.024170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.212 qpair failed and we were unable to recover it. 00:26:59.212 [2024-07-15 22:43:23.024291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.212 [2024-07-15 22:43:23.024302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.212 qpair failed and we were unable to recover it. 00:26:59.212 [2024-07-15 22:43:23.024424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.212 [2024-07-15 22:43:23.024434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.212 qpair failed and we were unable to recover it. 00:26:59.212 [2024-07-15 22:43:23.024564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.212 [2024-07-15 22:43:23.024575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.212 qpair failed and we were unable to recover it. 
00:26:59.212 [2024-07-15 22:43:23.024670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.212 [2024-07-15 22:43:23.024679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.212 qpair failed and we were unable to recover it. 00:26:59.212 [2024-07-15 22:43:23.024887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.212 [2024-07-15 22:43:23.024896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.212 qpair failed and we were unable to recover it. 00:26:59.212 [2024-07-15 22:43:23.025022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.212 [2024-07-15 22:43:23.025032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.212 qpair failed and we were unable to recover it. 00:26:59.212 [2024-07-15 22:43:23.025215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.212 [2024-07-15 22:43:23.025229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.212 qpair failed and we were unable to recover it. 00:26:59.212 [2024-07-15 22:43:23.025352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.212 [2024-07-15 22:43:23.025361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.212 qpair failed and we were unable to recover it. 00:26:59.212 [2024-07-15 22:43:23.025483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.212 [2024-07-15 22:43:23.025494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.212 qpair failed and we were unable to recover it. 00:26:59.212 [2024-07-15 22:43:23.025614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.212 [2024-07-15 22:43:23.025624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.212 qpair failed and we were unable to recover it. 00:26:59.212 [2024-07-15 22:43:23.025801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.212 [2024-07-15 22:43:23.025812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.212 qpair failed and we were unable to recover it. 00:26:59.212 [2024-07-15 22:43:23.025922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.212 [2024-07-15 22:43:23.025933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.212 qpair failed and we were unable to recover it. 00:26:59.212 [2024-07-15 22:43:23.026046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.212 [2024-07-15 22:43:23.026056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.212 qpair failed and we were unable to recover it. 
00:26:59.212 [2024-07-15 22:43:23.026180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.212 [2024-07-15 22:43:23.026190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.212 qpair failed and we were unable to recover it. 00:26:59.212 [2024-07-15 22:43:23.026308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.212 [2024-07-15 22:43:23.026318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.212 qpair failed and we were unable to recover it. 00:26:59.212 [2024-07-15 22:43:23.026491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.212 [2024-07-15 22:43:23.026501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.212 qpair failed and we were unable to recover it. 00:26:59.212 [2024-07-15 22:43:23.026576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.212 [2024-07-15 22:43:23.026586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.213 qpair failed and we were unable to recover it. 00:26:59.213 [2024-07-15 22:43:23.026699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.213 [2024-07-15 22:43:23.026708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.213 qpair failed and we were unable to recover it. 00:26:59.213 [2024-07-15 22:43:23.026783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.213 [2024-07-15 22:43:23.026793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.213 qpair failed and we were unable to recover it. 00:26:59.213 [2024-07-15 22:43:23.026898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.213 [2024-07-15 22:43:23.026908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.213 qpair failed and we were unable to recover it. 00:26:59.213 [2024-07-15 22:43:23.027093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.213 [2024-07-15 22:43:23.027103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.213 qpair failed and we were unable to recover it. 00:26:59.213 [2024-07-15 22:43:23.027244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.213 [2024-07-15 22:43:23.027254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.213 qpair failed and we were unable to recover it. 00:26:59.213 [2024-07-15 22:43:23.027340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.213 [2024-07-15 22:43:23.027349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.213 qpair failed and we were unable to recover it. 
00:26:59.218 [2024-07-15 22:43:23.057128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.218 [2024-07-15 22:43:23.057138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.218 qpair failed and we were unable to recover it. 00:26:59.218 [2024-07-15 22:43:23.057268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.218 [2024-07-15 22:43:23.057280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.218 qpair failed and we were unable to recover it. 00:26:59.218 [2024-07-15 22:43:23.057407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.218 [2024-07-15 22:43:23.057417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.218 qpair failed and we were unable to recover it. 00:26:59.218 [2024-07-15 22:43:23.057534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.218 [2024-07-15 22:43:23.057544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.218 qpair failed and we were unable to recover it. 00:26:59.218 [2024-07-15 22:43:23.057630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.218 [2024-07-15 22:43:23.057639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.218 qpair failed and we were unable to recover it. 00:26:59.218 [2024-07-15 22:43:23.057764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.218 [2024-07-15 22:43:23.057774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.218 qpair failed and we were unable to recover it. 00:26:59.218 [2024-07-15 22:43:23.057906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.218 [2024-07-15 22:43:23.057916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.218 qpair failed and we were unable to recover it. 00:26:59.218 [2024-07-15 22:43:23.058105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.218 [2024-07-15 22:43:23.058115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.218 qpair failed and we were unable to recover it. 00:26:59.218 [2024-07-15 22:43:23.058309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.218 [2024-07-15 22:43:23.058319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.218 qpair failed and we were unable to recover it. 00:26:59.218 [2024-07-15 22:43:23.058438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.218 [2024-07-15 22:43:23.058448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.218 qpair failed and we were unable to recover it. 
00:26:59.218 [2024-07-15 22:43:23.058577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.218 [2024-07-15 22:43:23.058586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.218 qpair failed and we were unable to recover it. 00:26:59.218 [2024-07-15 22:43:23.058693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.218 [2024-07-15 22:43:23.058702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.218 qpair failed and we were unable to recover it. 00:26:59.218 [2024-07-15 22:43:23.058829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.218 [2024-07-15 22:43:23.058839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.218 qpair failed and we were unable to recover it. 00:26:59.218 [2024-07-15 22:43:23.058953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.218 [2024-07-15 22:43:23.058963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.218 qpair failed and we were unable to recover it. 00:26:59.218 [2024-07-15 22:43:23.059142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.218 [2024-07-15 22:43:23.059152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.218 qpair failed and we were unable to recover it. 00:26:59.218 [2024-07-15 22:43:23.059276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.218 [2024-07-15 22:43:23.059287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.218 qpair failed and we were unable to recover it. 00:26:59.219 [2024-07-15 22:43:23.059478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.219 [2024-07-15 22:43:23.059488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.219 qpair failed and we were unable to recover it. 00:26:59.219 [2024-07-15 22:43:23.059679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.219 [2024-07-15 22:43:23.059690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.219 qpair failed and we were unable to recover it. 00:26:59.219 [2024-07-15 22:43:23.059847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.219 [2024-07-15 22:43:23.059856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.219 qpair failed and we were unable to recover it. 00:26:59.219 [2024-07-15 22:43:23.060115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.219 [2024-07-15 22:43:23.060125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.219 qpair failed and we were unable to recover it. 
00:26:59.219 [2024-07-15 22:43:23.060253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.219 [2024-07-15 22:43:23.060264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.219 qpair failed and we were unable to recover it. 00:26:59.219 [2024-07-15 22:43:23.060393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.219 [2024-07-15 22:43:23.060402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.219 qpair failed and we were unable to recover it. 00:26:59.219 [2024-07-15 22:43:23.060647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.219 [2024-07-15 22:43:23.060657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.219 qpair failed and we were unable to recover it. 00:26:59.219 [2024-07-15 22:43:23.060834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.219 [2024-07-15 22:43:23.060845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.219 qpair failed and we were unable to recover it. 00:26:59.219 [2024-07-15 22:43:23.060968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.219 [2024-07-15 22:43:23.060978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.219 qpair failed and we were unable to recover it. 00:26:59.219 [2024-07-15 22:43:23.061098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.219 [2024-07-15 22:43:23.061109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.219 qpair failed and we were unable to recover it. 00:26:59.219 [2024-07-15 22:43:23.061303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.219 [2024-07-15 22:43:23.061313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.219 qpair failed and we were unable to recover it. 00:26:59.219 [2024-07-15 22:43:23.061507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.219 [2024-07-15 22:43:23.061517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.219 qpair failed and we were unable to recover it. 00:26:59.219 [2024-07-15 22:43:23.061610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.219 [2024-07-15 22:43:23.061620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.219 qpair failed and we were unable to recover it. 00:26:59.219 [2024-07-15 22:43:23.061826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.219 [2024-07-15 22:43:23.061837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.219 qpair failed and we were unable to recover it. 
00:26:59.219 [2024-07-15 22:43:23.061951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.219 [2024-07-15 22:43:23.061961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.219 qpair failed and we were unable to recover it. 00:26:59.219 [2024-07-15 22:43:23.062207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.219 [2024-07-15 22:43:23.062217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.219 qpair failed and we were unable to recover it. 00:26:59.219 [2024-07-15 22:43:23.062417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.219 [2024-07-15 22:43:23.062428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.219 qpair failed and we were unable to recover it. 00:26:59.219 [2024-07-15 22:43:23.062543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.219 [2024-07-15 22:43:23.062553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.219 qpair failed and we were unable to recover it. 00:26:59.219 [2024-07-15 22:43:23.062661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.219 [2024-07-15 22:43:23.062671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.219 qpair failed and we were unable to recover it. 00:26:59.219 [2024-07-15 22:43:23.062788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.219 [2024-07-15 22:43:23.062798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.219 qpair failed and we were unable to recover it. 00:26:59.219 [2024-07-15 22:43:23.062884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.219 [2024-07-15 22:43:23.062893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.219 qpair failed and we were unable to recover it. 00:26:59.219 [2024-07-15 22:43:23.063070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.219 [2024-07-15 22:43:23.063080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.219 qpair failed and we were unable to recover it. 00:26:59.219 [2024-07-15 22:43:23.063255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.219 [2024-07-15 22:43:23.063265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.219 qpair failed and we were unable to recover it. 00:26:59.219 [2024-07-15 22:43:23.063462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.219 [2024-07-15 22:43:23.063472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.219 qpair failed and we were unable to recover it. 
00:26:59.219 [2024-07-15 22:43:23.063613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.219 [2024-07-15 22:43:23.063623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.219 qpair failed and we were unable to recover it. 00:26:59.219 [2024-07-15 22:43:23.063813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.219 [2024-07-15 22:43:23.063824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.219 qpair failed and we were unable to recover it. 00:26:59.219 [2024-07-15 22:43:23.063935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.219 [2024-07-15 22:43:23.063944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.219 qpair failed and we were unable to recover it. 00:26:59.219 [2024-07-15 22:43:23.064060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.219 [2024-07-15 22:43:23.064070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.219 qpair failed and we were unable to recover it. 00:26:59.219 [2024-07-15 22:43:23.064179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.219 [2024-07-15 22:43:23.064189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.219 qpair failed and we were unable to recover it. 00:26:59.219 [2024-07-15 22:43:23.064306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.219 [2024-07-15 22:43:23.064317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.219 qpair failed and we were unable to recover it. 00:26:59.219 [2024-07-15 22:43:23.064441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.219 [2024-07-15 22:43:23.064451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.219 qpair failed and we were unable to recover it. 00:26:59.219 [2024-07-15 22:43:23.064639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.219 [2024-07-15 22:43:23.064649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.219 qpair failed and we were unable to recover it. 00:26:59.219 [2024-07-15 22:43:23.064776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.219 [2024-07-15 22:43:23.064787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.219 qpair failed and we were unable to recover it. 00:26:59.219 [2024-07-15 22:43:23.065030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.219 [2024-07-15 22:43:23.065040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.219 qpair failed and we were unable to recover it. 
00:26:59.219 [2024-07-15 22:43:23.065165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.219 [2024-07-15 22:43:23.065175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.219 qpair failed and we were unable to recover it. 00:26:59.219 [2024-07-15 22:43:23.065295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.219 [2024-07-15 22:43:23.065306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.219 qpair failed and we were unable to recover it. 00:26:59.219 [2024-07-15 22:43:23.065438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.219 [2024-07-15 22:43:23.065447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.219 qpair failed and we were unable to recover it. 00:26:59.219 [2024-07-15 22:43:23.065628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.219 [2024-07-15 22:43:23.065638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.219 qpair failed and we were unable to recover it. 00:26:59.219 [2024-07-15 22:43:23.065764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.219 [2024-07-15 22:43:23.065774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.219 qpair failed and we were unable to recover it. 00:26:59.219 [2024-07-15 22:43:23.065975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.219 [2024-07-15 22:43:23.065985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.219 qpair failed and we were unable to recover it. 00:26:59.219 [2024-07-15 22:43:23.066125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.219 [2024-07-15 22:43:23.066134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.219 qpair failed and we were unable to recover it. 00:26:59.219 [2024-07-15 22:43:23.066245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.219 [2024-07-15 22:43:23.066256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.219 qpair failed and we were unable to recover it. 00:26:59.219 [2024-07-15 22:43:23.066373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.219 [2024-07-15 22:43:23.066384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.219 qpair failed and we were unable to recover it. 00:26:59.219 [2024-07-15 22:43:23.066693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.220 [2024-07-15 22:43:23.066705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.220 qpair failed and we were unable to recover it. 
00:26:59.220 [2024-07-15 22:43:23.066893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.220 [2024-07-15 22:43:23.066903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.220 qpair failed and we were unable to recover it. 00:26:59.220 [2024-07-15 22:43:23.067021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.220 [2024-07-15 22:43:23.067033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.220 qpair failed and we were unable to recover it. 00:26:59.220 [2024-07-15 22:43:23.067151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.220 [2024-07-15 22:43:23.067161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.220 qpair failed and we were unable to recover it. 00:26:59.220 [2024-07-15 22:43:23.067350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.220 [2024-07-15 22:43:23.067361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.220 qpair failed and we were unable to recover it. 00:26:59.220 [2024-07-15 22:43:23.067572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.220 [2024-07-15 22:43:23.067582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.220 qpair failed and we were unable to recover it. 00:26:59.220 [2024-07-15 22:43:23.067712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.220 [2024-07-15 22:43:23.067722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.220 qpair failed and we were unable to recover it. 00:26:59.220 [2024-07-15 22:43:23.067915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.220 [2024-07-15 22:43:23.067924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.220 qpair failed and we were unable to recover it. 00:26:59.220 [2024-07-15 22:43:23.068053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.220 [2024-07-15 22:43:23.068062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.220 qpair failed and we were unable to recover it. 00:26:59.220 [2024-07-15 22:43:23.068182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.220 [2024-07-15 22:43:23.068192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.220 qpair failed and we were unable to recover it. 00:26:59.220 [2024-07-15 22:43:23.068320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.220 [2024-07-15 22:43:23.068331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.220 qpair failed and we were unable to recover it. 
00:26:59.220 [2024-07-15 22:43:23.068424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.220 [2024-07-15 22:43:23.068434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.220 qpair failed and we were unable to recover it. 00:26:59.220 [2024-07-15 22:43:23.068608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.220 [2024-07-15 22:43:23.068618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.220 qpair failed and we were unable to recover it. 00:26:59.220 [2024-07-15 22:43:23.068743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.220 [2024-07-15 22:43:23.068753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.220 qpair failed and we were unable to recover it. 00:26:59.220 [2024-07-15 22:43:23.068855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.220 [2024-07-15 22:43:23.068866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.220 qpair failed and we were unable to recover it. 00:26:59.220 [2024-07-15 22:43:23.069047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.220 [2024-07-15 22:43:23.069057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.220 qpair failed and we were unable to recover it. 00:26:59.220 [2024-07-15 22:43:23.069255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.220 [2024-07-15 22:43:23.069265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.220 qpair failed and we were unable to recover it. 00:26:59.220 [2024-07-15 22:43:23.069355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.220 [2024-07-15 22:43:23.069365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.220 qpair failed and we were unable to recover it. 00:26:59.220 [2024-07-15 22:43:23.069540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.220 [2024-07-15 22:43:23.069550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.220 qpair failed and we were unable to recover it. 00:26:59.220 [2024-07-15 22:43:23.069637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.220 [2024-07-15 22:43:23.069647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.220 qpair failed and we were unable to recover it. 00:26:59.220 [2024-07-15 22:43:23.069762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.220 [2024-07-15 22:43:23.069772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.220 qpair failed and we were unable to recover it. 
00:26:59.220 [2024-07-15 22:43:23.069897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.220 [2024-07-15 22:43:23.069907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.220 qpair failed and we were unable to recover it. 00:26:59.220 [2024-07-15 22:43:23.070098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.220 [2024-07-15 22:43:23.070109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.220 qpair failed and we were unable to recover it. 00:26:59.220 [2024-07-15 22:43:23.070209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.220 [2024-07-15 22:43:23.070219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.220 qpair failed and we were unable to recover it. 00:26:59.220 [2024-07-15 22:43:23.070309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.220 [2024-07-15 22:43:23.070319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.220 qpair failed and we were unable to recover it. 00:26:59.220 [2024-07-15 22:43:23.070447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.220 [2024-07-15 22:43:23.070457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.220 qpair failed and we were unable to recover it. 00:26:59.220 [2024-07-15 22:43:23.070577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.220 [2024-07-15 22:43:23.070587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.220 qpair failed and we were unable to recover it. 00:26:59.220 [2024-07-15 22:43:23.070654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.220 [2024-07-15 22:43:23.070664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.220 qpair failed and we were unable to recover it. 00:26:59.220 [2024-07-15 22:43:23.070778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.220 [2024-07-15 22:43:23.070788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.220 qpair failed and we were unable to recover it. 00:26:59.220 [2024-07-15 22:43:23.070946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.220 [2024-07-15 22:43:23.070956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.220 qpair failed and we were unable to recover it. 00:26:59.220 [2024-07-15 22:43:23.071150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.220 [2024-07-15 22:43:23.071160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.220 qpair failed and we were unable to recover it. 
00:26:59.220 [2024-07-15 22:43:23.071250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.220 [2024-07-15 22:43:23.071260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.220 qpair failed and we were unable to recover it. 00:26:59.220 [2024-07-15 22:43:23.071375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.220 [2024-07-15 22:43:23.071385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.220 qpair failed and we were unable to recover it. 00:26:59.220 [2024-07-15 22:43:23.071578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.220 [2024-07-15 22:43:23.071588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.220 qpair failed and we were unable to recover it. 00:26:59.220 [2024-07-15 22:43:23.071716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.220 [2024-07-15 22:43:23.071727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.220 qpair failed and we were unable to recover it. 00:26:59.220 [2024-07-15 22:43:23.071925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.220 [2024-07-15 22:43:23.071936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.220 qpair failed and we were unable to recover it. 00:26:59.220 [2024-07-15 22:43:23.072057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.220 [2024-07-15 22:43:23.072066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.220 qpair failed and we were unable to recover it. 00:26:59.220 [2024-07-15 22:43:23.072214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.221 [2024-07-15 22:43:23.072243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.221 qpair failed and we were unable to recover it. 00:26:59.221 [2024-07-15 22:43:23.072392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.221 [2024-07-15 22:43:23.072402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.221 qpair failed and we were unable to recover it. 00:26:59.221 [2024-07-15 22:43:23.072514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.221 [2024-07-15 22:43:23.072524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.221 qpair failed and we were unable to recover it. 00:26:59.221 [2024-07-15 22:43:23.072652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.221 [2024-07-15 22:43:23.072662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.221 qpair failed and we were unable to recover it. 
00:26:59.221 [2024-07-15 22:43:23.072775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.221 [2024-07-15 22:43:23.072785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.221 qpair failed and we were unable to recover it. 00:26:59.221 [2024-07-15 22:43:23.072911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.221 [2024-07-15 22:43:23.072921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.221 qpair failed and we were unable to recover it. 00:26:59.221 [2024-07-15 22:43:23.073097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.221 [2024-07-15 22:43:23.073107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.221 qpair failed and we were unable to recover it. 00:26:59.221 [2024-07-15 22:43:23.073294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.221 [2024-07-15 22:43:23.073305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.221 qpair failed and we were unable to recover it. 00:26:59.221 [2024-07-15 22:43:23.073494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.221 [2024-07-15 22:43:23.073504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.221 qpair failed and we were unable to recover it. 00:26:59.221 [2024-07-15 22:43:23.073625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.221 [2024-07-15 22:43:23.073636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.221 qpair failed and we were unable to recover it. 00:26:59.221 [2024-07-15 22:43:23.073741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.221 [2024-07-15 22:43:23.073751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.221 qpair failed and we were unable to recover it. 00:26:59.221 [2024-07-15 22:43:23.073946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.221 [2024-07-15 22:43:23.073956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.221 qpair failed and we were unable to recover it. 00:26:59.221 [2024-07-15 22:43:23.074073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.221 [2024-07-15 22:43:23.074083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.221 qpair failed and we were unable to recover it. 00:26:59.221 [2024-07-15 22:43:23.074291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.221 [2024-07-15 22:43:23.074302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.221 qpair failed and we were unable to recover it. 
00:26:59.221 [2024-07-15 22:43:23.074422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.221 [2024-07-15 22:43:23.074432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.221 qpair failed and we were unable to recover it. 00:26:59.221 [2024-07-15 22:43:23.074610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.221 [2024-07-15 22:43:23.074619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.221 qpair failed and we were unable to recover it. 00:26:59.221 [2024-07-15 22:43:23.074721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.221 [2024-07-15 22:43:23.074730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.221 qpair failed and we were unable to recover it. 00:26:59.221 [2024-07-15 22:43:23.074845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.221 [2024-07-15 22:43:23.074855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.221 qpair failed and we were unable to recover it. 00:26:59.221 [2024-07-15 22:43:23.075050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.221 [2024-07-15 22:43:23.075060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.221 qpair failed and we were unable to recover it. 00:26:59.221 [2024-07-15 22:43:23.075177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.221 [2024-07-15 22:43:23.075187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.221 qpair failed and we were unable to recover it. 00:26:59.221 [2024-07-15 22:43:23.075317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.221 [2024-07-15 22:43:23.075328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.221 qpair failed and we were unable to recover it. 00:26:59.221 [2024-07-15 22:43:23.075526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.221 [2024-07-15 22:43:23.075537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.221 qpair failed and we were unable to recover it. 00:26:59.221 [2024-07-15 22:43:23.075664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.221 [2024-07-15 22:43:23.075675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.221 qpair failed and we were unable to recover it. 00:26:59.221 [2024-07-15 22:43:23.075784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.221 [2024-07-15 22:43:23.075793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.221 qpair failed and we were unable to recover it. 
00:26:59.221 [2024-07-15 22:43:23.075969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.221 [2024-07-15 22:43:23.075979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.221 qpair failed and we were unable to recover it. 00:26:59.221 [2024-07-15 22:43:23.076171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.221 [2024-07-15 22:43:23.076183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.221 qpair failed and we were unable to recover it. 00:26:59.221 [2024-07-15 22:43:23.076327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.221 [2024-07-15 22:43:23.076338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.221 qpair failed and we were unable to recover it. 00:26:59.221 [2024-07-15 22:43:23.076448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.221 [2024-07-15 22:43:23.076458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.221 qpair failed and we were unable to recover it. 00:26:59.221 [2024-07-15 22:43:23.076646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.221 [2024-07-15 22:43:23.076656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.221 qpair failed and we were unable to recover it. 00:26:59.221 [2024-07-15 22:43:23.076787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.221 [2024-07-15 22:43:23.076797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.221 qpair failed and we were unable to recover it. 00:26:59.221 [2024-07-15 22:43:23.076907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.222 [2024-07-15 22:43:23.076917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.222 qpair failed and we were unable to recover it. 00:26:59.222 [2024-07-15 22:43:23.077032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.222 [2024-07-15 22:43:23.077042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.222 qpair failed and we were unable to recover it. 00:26:59.222 [2024-07-15 22:43:23.077205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.222 [2024-07-15 22:43:23.077215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.222 qpair failed and we were unable to recover it. 00:26:59.222 [2024-07-15 22:43:23.077332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.222 [2024-07-15 22:43:23.077342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.222 qpair failed and we were unable to recover it. 
00:26:59.222 [2024-07-15 22:43:23.077462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.222 [2024-07-15 22:43:23.077472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.222 qpair failed and we were unable to recover it. 00:26:59.222 [2024-07-15 22:43:23.077584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.222 [2024-07-15 22:43:23.077594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.222 qpair failed and we were unable to recover it. 00:26:59.222 [2024-07-15 22:43:23.077720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.222 [2024-07-15 22:43:23.077730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.222 qpair failed and we were unable to recover it. 00:26:59.222 [2024-07-15 22:43:23.077864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.222 [2024-07-15 22:43:23.077874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.222 qpair failed and we were unable to recover it. 00:26:59.222 [2024-07-15 22:43:23.078008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.222 [2024-07-15 22:43:23.078017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.222 qpair failed and we were unable to recover it. 00:26:59.222 [2024-07-15 22:43:23.078195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.222 [2024-07-15 22:43:23.078205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.222 qpair failed and we were unable to recover it. 00:26:59.222 [2024-07-15 22:43:23.078385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.222 [2024-07-15 22:43:23.078395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.222 qpair failed and we were unable to recover it. 00:26:59.222 [2024-07-15 22:43:23.078507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.222 [2024-07-15 22:43:23.078517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.222 qpair failed and we were unable to recover it. 00:26:59.222 [2024-07-15 22:43:23.078634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.222 [2024-07-15 22:43:23.078643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.222 qpair failed and we were unable to recover it. 00:26:59.222 [2024-07-15 22:43:23.078815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.222 [2024-07-15 22:43:23.078825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.222 qpair failed and we were unable to recover it. 
00:26:59.222 [2024-07-15 22:43:23.078999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.222 [2024-07-15 22:43:23.079009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.222 qpair failed and we were unable to recover it. 00:26:59.222 [2024-07-15 22:43:23.079256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.222 [2024-07-15 22:43:23.079268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.222 qpair failed and we were unable to recover it. 00:26:59.222 [2024-07-15 22:43:23.079375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.222 [2024-07-15 22:43:23.079384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.222 qpair failed and we were unable to recover it. 00:26:59.222 [2024-07-15 22:43:23.079489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.222 [2024-07-15 22:43:23.079500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.222 qpair failed and we were unable to recover it. 00:26:59.222 [2024-07-15 22:43:23.079574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.222 [2024-07-15 22:43:23.079584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.222 qpair failed and we were unable to recover it. 00:26:59.222 [2024-07-15 22:43:23.079692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.222 [2024-07-15 22:43:23.079703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.222 qpair failed and we were unable to recover it. 00:26:59.222 [2024-07-15 22:43:23.079792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.222 [2024-07-15 22:43:23.079802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.222 qpair failed and we were unable to recover it. 00:26:59.222 [2024-07-15 22:43:23.079925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.222 [2024-07-15 22:43:23.079934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.222 qpair failed and we were unable to recover it. 00:26:59.222 [2024-07-15 22:43:23.080046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.222 [2024-07-15 22:43:23.080056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.222 qpair failed and we were unable to recover it. 00:26:59.222 [2024-07-15 22:43:23.080323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.222 [2024-07-15 22:43:23.080334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.222 qpair failed and we were unable to recover it. 
00:26:59.228 [2024-07-15 22:43:23.111633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.228 [2024-07-15 22:43:23.111644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.228 qpair failed and we were unable to recover it. 00:26:59.228 [2024-07-15 22:43:23.111819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.228 [2024-07-15 22:43:23.111830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.228 qpair failed and we were unable to recover it. 00:26:59.228 [2024-07-15 22:43:23.112005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.228 [2024-07-15 22:43:23.112016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.228 qpair failed and we were unable to recover it. 00:26:59.228 [2024-07-15 22:43:23.112213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.228 [2024-07-15 22:43:23.112235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.228 qpair failed and we were unable to recover it. 00:26:59.228 [2024-07-15 22:43:23.112429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.228 [2024-07-15 22:43:23.112440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.228 qpair failed and we were unable to recover it. 00:26:59.228 [2024-07-15 22:43:23.112549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.228 [2024-07-15 22:43:23.112559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.228 qpair failed and we were unable to recover it. 00:26:59.228 [2024-07-15 22:43:23.112695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.228 [2024-07-15 22:43:23.112705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.228 qpair failed and we were unable to recover it. 00:26:59.228 [2024-07-15 22:43:23.112918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.228 [2024-07-15 22:43:23.112928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.228 qpair failed and we were unable to recover it. 00:26:59.228 [2024-07-15 22:43:23.113053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.228 [2024-07-15 22:43:23.113063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.228 qpair failed and we were unable to recover it. 00:26:59.228 [2024-07-15 22:43:23.113247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.228 [2024-07-15 22:43:23.113258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.228 qpair failed and we were unable to recover it. 
00:26:59.228 [2024-07-15 22:43:23.113376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.228 [2024-07-15 22:43:23.113386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.228 qpair failed and we were unable to recover it. 00:26:59.228 [2024-07-15 22:43:23.113494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.228 [2024-07-15 22:43:23.113505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.228 qpair failed and we were unable to recover it. 00:26:59.228 [2024-07-15 22:43:23.113630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.228 [2024-07-15 22:43:23.113641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.228 qpair failed and we were unable to recover it. 00:26:59.228 [2024-07-15 22:43:23.113747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.228 [2024-07-15 22:43:23.113757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.228 qpair failed and we were unable to recover it. 00:26:59.228 [2024-07-15 22:43:23.113963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.228 [2024-07-15 22:43:23.113973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.228 qpair failed and we were unable to recover it. 00:26:59.228 [2024-07-15 22:43:23.114156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.228 [2024-07-15 22:43:23.114167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.228 qpair failed and we were unable to recover it. 00:26:59.228 [2024-07-15 22:43:23.114246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.228 [2024-07-15 22:43:23.114256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.228 qpair failed and we were unable to recover it. 00:26:59.228 [2024-07-15 22:43:23.114461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.228 [2024-07-15 22:43:23.114472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.228 qpair failed and we were unable to recover it. 00:26:59.228 [2024-07-15 22:43:23.114650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.228 [2024-07-15 22:43:23.114660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.228 qpair failed and we were unable to recover it. 00:26:59.228 [2024-07-15 22:43:23.114857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.228 [2024-07-15 22:43:23.114867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.228 qpair failed and we were unable to recover it. 
00:26:59.228 [2024-07-15 22:43:23.115049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.228 [2024-07-15 22:43:23.115059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.228 qpair failed and we were unable to recover it. 00:26:59.228 [2024-07-15 22:43:23.115187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.228 [2024-07-15 22:43:23.115197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.228 qpair failed and we were unable to recover it. 00:26:59.228 [2024-07-15 22:43:23.115324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.228 [2024-07-15 22:43:23.115335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.228 qpair failed and we were unable to recover it. 00:26:59.228 [2024-07-15 22:43:23.115455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.228 [2024-07-15 22:43:23.115465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.228 qpair failed and we were unable to recover it. 00:26:59.228 [2024-07-15 22:43:23.115587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.228 [2024-07-15 22:43:23.115597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.228 qpair failed and we were unable to recover it. 00:26:59.228 [2024-07-15 22:43:23.115708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.228 [2024-07-15 22:43:23.115717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.228 qpair failed and we were unable to recover it. 00:26:59.228 [2024-07-15 22:43:23.115823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.228 [2024-07-15 22:43:23.115832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.228 qpair failed and we were unable to recover it. 00:26:59.228 [2024-07-15 22:43:23.115979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.228 [2024-07-15 22:43:23.115988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.228 qpair failed and we were unable to recover it. 00:26:59.228 [2024-07-15 22:43:23.116115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.228 [2024-07-15 22:43:23.116125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.228 qpair failed and we were unable to recover it. 00:26:59.228 [2024-07-15 22:43:23.116242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.228 [2024-07-15 22:43:23.116252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.228 qpair failed and we were unable to recover it. 
00:26:59.228 [2024-07-15 22:43:23.116370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.228 [2024-07-15 22:43:23.116381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.228 qpair failed and we were unable to recover it. 00:26:59.228 [2024-07-15 22:43:23.116512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.228 [2024-07-15 22:43:23.116522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.228 qpair failed and we were unable to recover it. 00:26:59.228 [2024-07-15 22:43:23.116697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.228 [2024-07-15 22:43:23.116708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.228 qpair failed and we were unable to recover it. 00:26:59.228 [2024-07-15 22:43:23.116901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.228 [2024-07-15 22:43:23.116912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.228 qpair failed and we were unable to recover it. 00:26:59.228 [2024-07-15 22:43:23.117025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.228 [2024-07-15 22:43:23.117036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.228 qpair failed and we were unable to recover it. 00:26:59.228 [2024-07-15 22:43:23.117161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.229 [2024-07-15 22:43:23.117172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.229 qpair failed and we were unable to recover it. 00:26:59.229 [2024-07-15 22:43:23.117294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.229 [2024-07-15 22:43:23.117306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.229 qpair failed and we were unable to recover it. 00:26:59.229 [2024-07-15 22:43:23.117421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.229 [2024-07-15 22:43:23.117432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.229 qpair failed and we were unable to recover it. 00:26:59.229 [2024-07-15 22:43:23.117607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.229 [2024-07-15 22:43:23.117619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.229 qpair failed and we were unable to recover it. 00:26:59.229 [2024-07-15 22:43:23.117784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.229 [2024-07-15 22:43:23.117794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.229 qpair failed and we were unable to recover it. 
00:26:59.229 [2024-07-15 22:43:23.117989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.229 [2024-07-15 22:43:23.117999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.229 qpair failed and we were unable to recover it. 00:26:59.229 [2024-07-15 22:43:23.118119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.229 [2024-07-15 22:43:23.118129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.229 qpair failed and we were unable to recover it. 00:26:59.229 [2024-07-15 22:43:23.118303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.229 [2024-07-15 22:43:23.118314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.229 qpair failed and we were unable to recover it. 00:26:59.229 [2024-07-15 22:43:23.118439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.229 [2024-07-15 22:43:23.118450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.229 qpair failed and we were unable to recover it. 00:26:59.229 [2024-07-15 22:43:23.118633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.229 [2024-07-15 22:43:23.118643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.229 qpair failed and we were unable to recover it. 00:26:59.229 [2024-07-15 22:43:23.118830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.229 [2024-07-15 22:43:23.118840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.229 qpair failed and we were unable to recover it. 00:26:59.229 [2024-07-15 22:43:23.118957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.229 [2024-07-15 22:43:23.118967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.229 qpair failed and we were unable to recover it. 00:26:59.229 [2024-07-15 22:43:23.119080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.229 [2024-07-15 22:43:23.119090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.229 qpair failed and we were unable to recover it. 00:26:59.229 [2024-07-15 22:43:23.119263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.229 [2024-07-15 22:43:23.119273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.229 qpair failed and we were unable to recover it. 00:26:59.229 [2024-07-15 22:43:23.119394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.229 [2024-07-15 22:43:23.119404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.229 qpair failed and we were unable to recover it. 
00:26:59.229 [2024-07-15 22:43:23.119540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.229 [2024-07-15 22:43:23.119550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.229 qpair failed and we were unable to recover it. 00:26:59.229 [2024-07-15 22:43:23.119736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.229 [2024-07-15 22:43:23.119747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.229 qpair failed and we were unable to recover it. 00:26:59.229 [2024-07-15 22:43:23.119969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.229 [2024-07-15 22:43:23.119980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.229 qpair failed and we were unable to recover it. 00:26:59.229 [2024-07-15 22:43:23.120113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.229 [2024-07-15 22:43:23.120124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.229 qpair failed and we were unable to recover it. 00:26:59.229 [2024-07-15 22:43:23.120304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.229 [2024-07-15 22:43:23.120314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.229 qpair failed and we were unable to recover it. 00:26:59.229 [2024-07-15 22:43:23.120442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.229 [2024-07-15 22:43:23.120453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.229 qpair failed and we were unable to recover it. 00:26:59.229 [2024-07-15 22:43:23.120617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.229 [2024-07-15 22:43:23.120627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.229 qpair failed and we were unable to recover it. 00:26:59.229 [2024-07-15 22:43:23.120752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.229 [2024-07-15 22:43:23.120762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.229 qpair failed and we were unable to recover it. 00:26:59.229 [2024-07-15 22:43:23.120877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.229 [2024-07-15 22:43:23.120888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.229 qpair failed and we were unable to recover it. 00:26:59.229 [2024-07-15 22:43:23.121001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.229 [2024-07-15 22:43:23.121011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.229 qpair failed and we were unable to recover it. 
00:26:59.229 [2024-07-15 22:43:23.121136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.229 [2024-07-15 22:43:23.121146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.229 qpair failed and we were unable to recover it. 00:26:59.229 [2024-07-15 22:43:23.121286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.229 [2024-07-15 22:43:23.121297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.229 qpair failed and we were unable to recover it. 00:26:59.229 [2024-07-15 22:43:23.121482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.229 [2024-07-15 22:43:23.121493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.229 qpair failed and we were unable to recover it. 00:26:59.229 [2024-07-15 22:43:23.121610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.229 [2024-07-15 22:43:23.121621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.229 qpair failed and we were unable to recover it. 00:26:59.229 [2024-07-15 22:43:23.121811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.229 [2024-07-15 22:43:23.121820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.229 qpair failed and we were unable to recover it. 00:26:59.229 [2024-07-15 22:43:23.122015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.229 [2024-07-15 22:43:23.122025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.229 qpair failed and we were unable to recover it. 00:26:59.229 [2024-07-15 22:43:23.122132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.229 [2024-07-15 22:43:23.122142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.229 qpair failed and we were unable to recover it. 00:26:59.229 [2024-07-15 22:43:23.122308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.229 [2024-07-15 22:43:23.122319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.229 qpair failed and we were unable to recover it. 00:26:59.229 [2024-07-15 22:43:23.122415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.229 [2024-07-15 22:43:23.122425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.229 qpair failed and we were unable to recover it. 00:26:59.229 [2024-07-15 22:43:23.122546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.229 [2024-07-15 22:43:23.122556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.230 qpair failed and we were unable to recover it. 
00:26:59.230 [2024-07-15 22:43:23.122795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.230 [2024-07-15 22:43:23.122805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.230 qpair failed and we were unable to recover it. 00:26:59.230 [2024-07-15 22:43:23.122932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.230 [2024-07-15 22:43:23.122943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.230 qpair failed and we were unable to recover it. 00:26:59.230 [2024-07-15 22:43:23.123053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.230 [2024-07-15 22:43:23.123063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.230 qpair failed and we were unable to recover it. 00:26:59.230 [2024-07-15 22:43:23.123188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.230 [2024-07-15 22:43:23.123199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.230 qpair failed and we were unable to recover it. 00:26:59.230 [2024-07-15 22:43:23.123387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.230 [2024-07-15 22:43:23.123397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.230 qpair failed and we were unable to recover it. 00:26:59.230 [2024-07-15 22:43:23.123576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.230 [2024-07-15 22:43:23.123587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.230 qpair failed and we were unable to recover it. 00:26:59.230 [2024-07-15 22:43:23.123733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.230 [2024-07-15 22:43:23.123743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.230 qpair failed and we were unable to recover it. 00:26:59.230 [2024-07-15 22:43:23.123971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.230 [2024-07-15 22:43:23.123982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.230 qpair failed and we were unable to recover it. 00:26:59.230 [2024-07-15 22:43:23.124109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.230 [2024-07-15 22:43:23.124122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.230 qpair failed and we were unable to recover it. 00:26:59.230 [2024-07-15 22:43:23.124316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.230 [2024-07-15 22:43:23.124327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.230 qpair failed and we were unable to recover it. 
00:26:59.230 [2024-07-15 22:43:23.124436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.230 [2024-07-15 22:43:23.124446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.230 qpair failed and we were unable to recover it. 00:26:59.230 [2024-07-15 22:43:23.124624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.230 [2024-07-15 22:43:23.124635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.230 qpair failed and we were unable to recover it. 00:26:59.230 [2024-07-15 22:43:23.124747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.230 [2024-07-15 22:43:23.124757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.230 qpair failed and we were unable to recover it. 00:26:59.230 [2024-07-15 22:43:23.124973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.230 [2024-07-15 22:43:23.124984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.230 qpair failed and we were unable to recover it. 00:26:59.230 [2024-07-15 22:43:23.125180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.230 [2024-07-15 22:43:23.125190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.230 qpair failed and we were unable to recover it. 00:26:59.230 [2024-07-15 22:43:23.125320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.230 [2024-07-15 22:43:23.125331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.230 qpair failed and we were unable to recover it. 00:26:59.501 [2024-07-15 22:43:23.125455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.501 [2024-07-15 22:43:23.125466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.501 qpair failed and we were unable to recover it. 00:26:59.501 [2024-07-15 22:43:23.125589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.501 [2024-07-15 22:43:23.125602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.501 qpair failed and we were unable to recover it. 00:26:59.501 [2024-07-15 22:43:23.125728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.501 [2024-07-15 22:43:23.125740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.501 qpair failed and we were unable to recover it. 00:26:59.501 [2024-07-15 22:43:23.125928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.501 [2024-07-15 22:43:23.125939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.501 qpair failed and we were unable to recover it. 
00:26:59.501 [2024-07-15 22:43:23.126187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.501 [2024-07-15 22:43:23.126197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.501 qpair failed and we were unable to recover it. 00:26:59.501 [2024-07-15 22:43:23.126314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.501 [2024-07-15 22:43:23.126325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.501 qpair failed and we were unable to recover it. 00:26:59.501 [2024-07-15 22:43:23.126462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.501 [2024-07-15 22:43:23.126473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.501 qpair failed and we were unable to recover it. 00:26:59.501 [2024-07-15 22:43:23.126582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.501 [2024-07-15 22:43:23.126593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.501 qpair failed and we were unable to recover it. 00:26:59.501 [2024-07-15 22:43:23.126733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.501 [2024-07-15 22:43:23.126743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.501 qpair failed and we were unable to recover it. 00:26:59.501 [2024-07-15 22:43:23.126923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.501 [2024-07-15 22:43:23.126934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.501 qpair failed and we were unable to recover it. 00:26:59.501 [2024-07-15 22:43:23.127146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.501 [2024-07-15 22:43:23.127157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.501 qpair failed and we were unable to recover it. 00:26:59.501 [2024-07-15 22:43:23.127353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.501 [2024-07-15 22:43:23.127364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.501 qpair failed and we were unable to recover it. 00:26:59.501 [2024-07-15 22:43:23.127493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.501 [2024-07-15 22:43:23.127503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.501 qpair failed and we were unable to recover it. 00:26:59.501 [2024-07-15 22:43:23.127626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.501 [2024-07-15 22:43:23.127636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.501 qpair failed and we were unable to recover it. 
00:26:59.501 [2024-07-15 22:43:23.127765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.501 [2024-07-15 22:43:23.127775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.501 qpair failed and we were unable to recover it. 00:26:59.501 [2024-07-15 22:43:23.127910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.501 [2024-07-15 22:43:23.127920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.501 qpair failed and we were unable to recover it. 00:26:59.501 [2024-07-15 22:43:23.128063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.501 [2024-07-15 22:43:23.128073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.501 qpair failed and we were unable to recover it. 00:26:59.501 [2024-07-15 22:43:23.128190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.501 [2024-07-15 22:43:23.128201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.501 qpair failed and we were unable to recover it. 00:26:59.501 [2024-07-15 22:43:23.128325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.501 [2024-07-15 22:43:23.128335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.501 qpair failed and we were unable to recover it. 00:26:59.501 [2024-07-15 22:43:23.128453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.501 [2024-07-15 22:43:23.128464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.501 qpair failed and we were unable to recover it. 00:26:59.501 [2024-07-15 22:43:23.128571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.501 [2024-07-15 22:43:23.128582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.501 qpair failed and we were unable to recover it. 00:26:59.501 [2024-07-15 22:43:23.128762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.501 [2024-07-15 22:43:23.128773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.501 qpair failed and we were unable to recover it. 00:26:59.501 [2024-07-15 22:43:23.128958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.501 [2024-07-15 22:43:23.128969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.501 qpair failed and we were unable to recover it. 00:26:59.501 [2024-07-15 22:43:23.129090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.501 [2024-07-15 22:43:23.129100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.501 qpair failed and we were unable to recover it. 
00:26:59.501 [2024-07-15 22:43:23.129217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.501 [2024-07-15 22:43:23.129240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.501 qpair failed and we were unable to recover it. 00:26:59.501 [2024-07-15 22:43:23.129352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.501 [2024-07-15 22:43:23.129363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.501 qpair failed and we were unable to recover it. 00:26:59.501 [2024-07-15 22:43:23.129477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.501 [2024-07-15 22:43:23.129487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.501 qpair failed and we were unable to recover it. 00:26:59.501 [2024-07-15 22:43:23.129617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.501 [2024-07-15 22:43:23.129628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.501 qpair failed and we were unable to recover it. 00:26:59.501 [2024-07-15 22:43:23.129749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.501 [2024-07-15 22:43:23.129759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.501 qpair failed and we were unable to recover it. 00:26:59.501 [2024-07-15 22:43:23.129833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.501 [2024-07-15 22:43:23.129844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.501 qpair failed and we were unable to recover it. 00:26:59.501 [2024-07-15 22:43:23.130022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.501 [2024-07-15 22:43:23.130032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.501 qpair failed and we were unable to recover it. 00:26:59.501 [2024-07-15 22:43:23.130157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.501 [2024-07-15 22:43:23.130167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.501 qpair failed and we were unable to recover it. 00:26:59.501 [2024-07-15 22:43:23.130260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.501 [2024-07-15 22:43:23.130273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.501 qpair failed and we were unable to recover it. 00:26:59.501 [2024-07-15 22:43:23.130387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.501 [2024-07-15 22:43:23.130398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.501 qpair failed and we were unable to recover it. 
00:26:59.501 [2024-07-15 22:43:23.130515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.501 [2024-07-15 22:43:23.130526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.501 qpair failed and we were unable to recover it. 00:26:59.501 [2024-07-15 22:43:23.130716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.501 [2024-07-15 22:43:23.130726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.501 qpair failed and we were unable to recover it. 00:26:59.501 [2024-07-15 22:43:23.130903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.501 [2024-07-15 22:43:23.130913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.501 qpair failed and we were unable to recover it. 00:26:59.501 [2024-07-15 22:43:23.131024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.501 [2024-07-15 22:43:23.131035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.501 qpair failed and we were unable to recover it. 00:26:59.501 [2024-07-15 22:43:23.131219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.501 [2024-07-15 22:43:23.131234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.501 qpair failed and we were unable to recover it. 00:26:59.501 [2024-07-15 22:43:23.131409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.501 [2024-07-15 22:43:23.131420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.501 qpair failed and we were unable to recover it. 00:26:59.501 [2024-07-15 22:43:23.131539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.501 [2024-07-15 22:43:23.131550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.501 qpair failed and we were unable to recover it. 00:26:59.501 [2024-07-15 22:43:23.131738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.501 [2024-07-15 22:43:23.131748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.501 qpair failed and we were unable to recover it. 00:26:59.501 [2024-07-15 22:43:23.131872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.501 [2024-07-15 22:43:23.131883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.501 qpair failed and we were unable to recover it. 00:26:59.501 [2024-07-15 22:43:23.132008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.501 [2024-07-15 22:43:23.132018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.501 qpair failed and we were unable to recover it. 
00:26:59.501 [2024-07-15 22:43:23.132152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.501 [2024-07-15 22:43:23.132163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.501 qpair failed and we were unable to recover it. 00:26:59.501 [2024-07-15 22:43:23.132281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.501 [2024-07-15 22:43:23.132292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.501 qpair failed and we were unable to recover it. 00:26:59.501 [2024-07-15 22:43:23.132536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.501 [2024-07-15 22:43:23.132546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.501 qpair failed and we were unable to recover it. 00:26:59.501 [2024-07-15 22:43:23.132630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.501 [2024-07-15 22:43:23.132641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.501 qpair failed and we were unable to recover it. 00:26:59.501 [2024-07-15 22:43:23.132769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.501 [2024-07-15 22:43:23.132781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.501 qpair failed and we were unable to recover it. 00:26:59.501 [2024-07-15 22:43:23.132901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.501 [2024-07-15 22:43:23.132917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.501 qpair failed and we were unable to recover it. 00:26:59.501 [2024-07-15 22:43:23.133111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.501 [2024-07-15 22:43:23.133121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.501 qpair failed and we were unable to recover it. 00:26:59.501 [2024-07-15 22:43:23.133233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.501 [2024-07-15 22:43:23.133244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.501 qpair failed and we were unable to recover it. 00:26:59.501 [2024-07-15 22:43:23.133362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.501 [2024-07-15 22:43:23.133372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.501 qpair failed and we were unable to recover it. 00:26:59.501 [2024-07-15 22:43:23.133548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.501 [2024-07-15 22:43:23.133558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.501 qpair failed and we were unable to recover it. 
00:26:59.501 [2024-07-15 22:43:23.133669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.501 [2024-07-15 22:43:23.133679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.501 qpair failed and we were unable to recover it. 00:26:59.501 [2024-07-15 22:43:23.133787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.501 [2024-07-15 22:43:23.133798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.501 qpair failed and we were unable to recover it. 00:26:59.501 [2024-07-15 22:43:23.133974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.501 [2024-07-15 22:43:23.133984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.501 qpair failed and we were unable to recover it. 00:26:59.501 [2024-07-15 22:43:23.134091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.501 [2024-07-15 22:43:23.134101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.501 qpair failed and we were unable to recover it. 00:26:59.501 [2024-07-15 22:43:23.134234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.501 [2024-07-15 22:43:23.134245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.501 qpair failed and we were unable to recover it. 00:26:59.501 [2024-07-15 22:43:23.134358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.501 [2024-07-15 22:43:23.134370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.501 qpair failed and we were unable to recover it. 00:26:59.501 [2024-07-15 22:43:23.134556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.502 [2024-07-15 22:43:23.134567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.502 qpair failed and we were unable to recover it. 00:26:59.502 [2024-07-15 22:43:23.134760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.502 [2024-07-15 22:43:23.134770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.502 qpair failed and we were unable to recover it. 00:26:59.502 [2024-07-15 22:43:23.134961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.502 [2024-07-15 22:43:23.134971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.502 qpair failed and we were unable to recover it. 00:26:59.502 [2024-07-15 22:43:23.135086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.502 [2024-07-15 22:43:23.135097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.502 qpair failed and we were unable to recover it. 
00:26:59.502 [2024-07-15 22:43:23.135210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.502 [2024-07-15 22:43:23.135220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.502 qpair failed and we were unable to recover it. 00:26:59.502 [2024-07-15 22:43:23.135400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.502 [2024-07-15 22:43:23.135410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.502 qpair failed and we were unable to recover it. 00:26:59.502 [2024-07-15 22:43:23.135533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.502 [2024-07-15 22:43:23.135544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.502 qpair failed and we were unable to recover it. 00:26:59.502 [2024-07-15 22:43:23.135666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.502 [2024-07-15 22:43:23.135676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.502 qpair failed and we were unable to recover it. 00:26:59.502 [2024-07-15 22:43:23.135808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.502 [2024-07-15 22:43:23.135818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.502 qpair failed and we were unable to recover it. 00:26:59.502 [2024-07-15 22:43:23.135940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.502 [2024-07-15 22:43:23.135950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.502 qpair failed and we were unable to recover it. 00:26:59.502 [2024-07-15 22:43:23.136064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.502 [2024-07-15 22:43:23.136073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.502 qpair failed and we were unable to recover it. 00:26:59.502 [2024-07-15 22:43:23.136328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.502 [2024-07-15 22:43:23.136339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.502 qpair failed and we were unable to recover it. 00:26:59.502 [2024-07-15 22:43:23.136454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.502 [2024-07-15 22:43:23.136467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.502 qpair failed and we were unable to recover it. 00:26:59.502 [2024-07-15 22:43:23.136598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.502 [2024-07-15 22:43:23.136609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.502 qpair failed and we were unable to recover it. 
00:26:59.502 [2024-07-15 22:43:23.136723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.502 [2024-07-15 22:43:23.136733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.502 qpair failed and we were unable to recover it. 00:26:59.502 [2024-07-15 22:43:23.136849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.502 [2024-07-15 22:43:23.136859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.502 qpair failed and we were unable to recover it. 00:26:59.502 [2024-07-15 22:43:23.136983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.502 [2024-07-15 22:43:23.136993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.502 qpair failed and we were unable to recover it. 00:26:59.502 [2024-07-15 22:43:23.137171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.502 [2024-07-15 22:43:23.137182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.502 qpair failed and we were unable to recover it. 00:26:59.502 [2024-07-15 22:43:23.137372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.502 [2024-07-15 22:43:23.137383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.502 qpair failed and we were unable to recover it. 00:26:59.502 [2024-07-15 22:43:23.137490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.502 [2024-07-15 22:43:23.137500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.502 qpair failed and we were unable to recover it. 00:26:59.502 [2024-07-15 22:43:23.137620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.502 [2024-07-15 22:43:23.137630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.502 qpair failed and we were unable to recover it. 00:26:59.502 [2024-07-15 22:43:23.137721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.502 [2024-07-15 22:43:23.137731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.502 qpair failed and we were unable to recover it. 00:26:59.502 [2024-07-15 22:43:23.137913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.502 [2024-07-15 22:43:23.137924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.502 qpair failed and we were unable to recover it. 00:26:59.502 [2024-07-15 22:43:23.138038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.502 [2024-07-15 22:43:23.138049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.502 qpair failed and we were unable to recover it. 
00:26:59.502 [2024-07-15 22:43:23.138171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.502 [2024-07-15 22:43:23.138182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.502 qpair failed and we were unable to recover it. 00:26:59.502 [2024-07-15 22:43:23.138300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.502 [2024-07-15 22:43:23.138311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.502 qpair failed and we were unable to recover it. 00:26:59.502 [2024-07-15 22:43:23.138450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.502 [2024-07-15 22:43:23.138461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.502 qpair failed and we were unable to recover it. 00:26:59.502 [2024-07-15 22:43:23.138585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.502 [2024-07-15 22:43:23.138596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.502 qpair failed and we were unable to recover it. 00:26:59.502 [2024-07-15 22:43:23.138715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.502 [2024-07-15 22:43:23.138726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.502 qpair failed and we were unable to recover it. 00:26:59.502 [2024-07-15 22:43:23.138858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.502 [2024-07-15 22:43:23.138868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.502 qpair failed and we were unable to recover it. 00:26:59.502 [2024-07-15 22:43:23.138982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.502 [2024-07-15 22:43:23.138992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.502 qpair failed and we were unable to recover it. 00:26:59.502 [2024-07-15 22:43:23.139103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.502 [2024-07-15 22:43:23.139113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.502 qpair failed and we were unable to recover it. 00:26:59.502 [2024-07-15 22:43:23.139297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.502 [2024-07-15 22:43:23.139308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.502 qpair failed and we were unable to recover it. 00:26:59.502 [2024-07-15 22:43:23.139504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.502 [2024-07-15 22:43:23.139514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.502 qpair failed and we were unable to recover it. 
00:26:59.502 [2024-07-15 22:43:23.139628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.502 [2024-07-15 22:43:23.139637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.502 qpair failed and we were unable to recover it. 00:26:59.502 [2024-07-15 22:43:23.139775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.502 [2024-07-15 22:43:23.139785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.502 qpair failed and we were unable to recover it. 00:26:59.502 [2024-07-15 22:43:23.139901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.502 [2024-07-15 22:43:23.139911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.502 qpair failed and we were unable to recover it. 00:26:59.502 [2024-07-15 22:43:23.140090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.502 [2024-07-15 22:43:23.140100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.502 qpair failed and we were unable to recover it. 00:26:59.502 [2024-07-15 22:43:23.140206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.502 [2024-07-15 22:43:23.140216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.502 qpair failed and we were unable to recover it. 00:26:59.502 [2024-07-15 22:43:23.140331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.502 [2024-07-15 22:43:23.140341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.502 qpair failed and we were unable to recover it. 00:26:59.502 [2024-07-15 22:43:23.140518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.502 [2024-07-15 22:43:23.140528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.502 qpair failed and we were unable to recover it. 00:26:59.502 [2024-07-15 22:43:23.140637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.502 [2024-07-15 22:43:23.140646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.502 qpair failed and we were unable to recover it. 00:26:59.502 [2024-07-15 22:43:23.140823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.502 [2024-07-15 22:43:23.140833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.502 qpair failed and we were unable to recover it. 00:26:59.502 [2024-07-15 22:43:23.141043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.502 [2024-07-15 22:43:23.141053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.502 qpair failed and we were unable to recover it. 
00:26:59.502 [2024-07-15 22:43:23.141160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.502 [2024-07-15 22:43:23.141170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.502 qpair failed and we were unable to recover it. 00:26:59.502 [2024-07-15 22:43:23.141341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.502 [2024-07-15 22:43:23.141351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.502 qpair failed and we were unable to recover it. 00:26:59.502 [2024-07-15 22:43:23.141461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.502 [2024-07-15 22:43:23.141471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.502 qpair failed and we were unable to recover it. 00:26:59.502 [2024-07-15 22:43:23.141658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.502 [2024-07-15 22:43:23.141667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.502 qpair failed and we were unable to recover it. 00:26:59.502 [2024-07-15 22:43:23.141780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.502 [2024-07-15 22:43:23.141790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.502 qpair failed and we were unable to recover it. 00:26:59.502 [2024-07-15 22:43:23.141920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.502 [2024-07-15 22:43:23.141930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.502 qpair failed and we were unable to recover it. 00:26:59.502 [2024-07-15 22:43:23.142123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.502 [2024-07-15 22:43:23.142132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.502 qpair failed and we were unable to recover it. 00:26:59.502 [2024-07-15 22:43:23.142243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.502 [2024-07-15 22:43:23.142253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.502 qpair failed and we were unable to recover it. 00:26:59.502 [2024-07-15 22:43:23.142428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.502 [2024-07-15 22:43:23.142440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.502 qpair failed and we were unable to recover it. 00:26:59.502 [2024-07-15 22:43:23.142513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.502 [2024-07-15 22:43:23.142524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.502 qpair failed and we were unable to recover it. 
00:26:59.502 [2024-07-15 22:43:23.142739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.502 [2024-07-15 22:43:23.142748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.502 qpair failed and we were unable to recover it. 00:26:59.502 [2024-07-15 22:43:23.142848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.502 [2024-07-15 22:43:23.142857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.502 qpair failed and we were unable to recover it. 00:26:59.502 [2024-07-15 22:43:23.142972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.502 [2024-07-15 22:43:23.142981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.502 qpair failed and we were unable to recover it. 00:26:59.502 [2024-07-15 22:43:23.143120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.503 [2024-07-15 22:43:23.143130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.503 qpair failed and we were unable to recover it. 00:26:59.503 [2024-07-15 22:43:23.143242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.503 [2024-07-15 22:43:23.143253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.503 qpair failed and we were unable to recover it. 00:26:59.503 [2024-07-15 22:43:23.143375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.503 [2024-07-15 22:43:23.143385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.503 qpair failed and we were unable to recover it. 00:26:59.503 [2024-07-15 22:43:23.143496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.503 [2024-07-15 22:43:23.143506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.503 qpair failed and we were unable to recover it. 00:26:59.503 [2024-07-15 22:43:23.143649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.503 [2024-07-15 22:43:23.143659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.503 qpair failed and we were unable to recover it. 00:26:59.503 [2024-07-15 22:43:23.143881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.503 [2024-07-15 22:43:23.143891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.503 qpair failed and we were unable to recover it. 00:26:59.503 [2024-07-15 22:43:23.144126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.503 [2024-07-15 22:43:23.144135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.503 qpair failed and we were unable to recover it. 
00:26:59.503 [2024-07-15 22:43:23.144313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.503 [2024-07-15 22:43:23.144323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.503 qpair failed and we were unable to recover it. 00:26:59.503 [2024-07-15 22:43:23.144507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.503 [2024-07-15 22:43:23.144517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.503 qpair failed and we were unable to recover it. 00:26:59.503 [2024-07-15 22:43:23.144705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.503 [2024-07-15 22:43:23.144714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.503 qpair failed and we were unable to recover it. 00:26:59.503 [2024-07-15 22:43:23.144849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.503 [2024-07-15 22:43:23.144858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.503 qpair failed and we were unable to recover it. 00:26:59.503 [2024-07-15 22:43:23.144945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.503 [2024-07-15 22:43:23.144955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.503 qpair failed and we were unable to recover it. 00:26:59.503 [2024-07-15 22:43:23.145076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.503 [2024-07-15 22:43:23.145086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.503 qpair failed and we were unable to recover it. 00:26:59.503 [2024-07-15 22:43:23.145214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.503 [2024-07-15 22:43:23.145241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.503 qpair failed and we were unable to recover it. 00:26:59.503 [2024-07-15 22:43:23.145353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.503 [2024-07-15 22:43:23.145363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.503 qpair failed and we were unable to recover it. 00:26:59.503 [2024-07-15 22:43:23.145472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.503 [2024-07-15 22:43:23.145483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.503 qpair failed and we were unable to recover it. 00:26:59.503 [2024-07-15 22:43:23.145602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.503 [2024-07-15 22:43:23.145612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.503 qpair failed and we were unable to recover it. 
00:26:59.503 [2024-07-15 22:43:23.145724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.503 [2024-07-15 22:43:23.145734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.503 qpair failed and we were unable to recover it. 00:26:59.503 [2024-07-15 22:43:23.145851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.503 [2024-07-15 22:43:23.145861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.503 qpair failed and we were unable to recover it. 00:26:59.503 [2024-07-15 22:43:23.145978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.503 [2024-07-15 22:43:23.145988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.503 qpair failed and we were unable to recover it. 00:26:59.503 [2024-07-15 22:43:23.146111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.503 [2024-07-15 22:43:23.146121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.503 qpair failed and we were unable to recover it. 00:26:59.503 [2024-07-15 22:43:23.146235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.503 [2024-07-15 22:43:23.146245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.503 qpair failed and we were unable to recover it. 00:26:59.503 [2024-07-15 22:43:23.146432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.503 [2024-07-15 22:43:23.146442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.503 qpair failed and we were unable to recover it. 00:26:59.503 [2024-07-15 22:43:23.146560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.503 [2024-07-15 22:43:23.146570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.503 qpair failed and we were unable to recover it. 00:26:59.503 [2024-07-15 22:43:23.146683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.503 [2024-07-15 22:43:23.146693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.503 qpair failed and we were unable to recover it. 00:26:59.503 [2024-07-15 22:43:23.146821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.503 [2024-07-15 22:43:23.146830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.503 qpair failed and we were unable to recover it. 00:26:59.503 [2024-07-15 22:43:23.147016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.503 [2024-07-15 22:43:23.147025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.503 qpair failed and we were unable to recover it. 
00:26:59.503 [2024-07-15 22:43:23.147274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.503 [2024-07-15 22:43:23.147285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.503 qpair failed and we were unable to recover it. 00:26:59.503 [2024-07-15 22:43:23.147466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.503 [2024-07-15 22:43:23.147476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.503 qpair failed and we were unable to recover it. 00:26:59.503 [2024-07-15 22:43:23.147595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.503 [2024-07-15 22:43:23.147605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.503 qpair failed and we were unable to recover it. 00:26:59.503 [2024-07-15 22:43:23.147783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.503 [2024-07-15 22:43:23.147793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.503 qpair failed and we were unable to recover it. 00:26:59.503 [2024-07-15 22:43:23.147903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.503 [2024-07-15 22:43:23.147912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.503 qpair failed and we were unable to recover it. 00:26:59.503 [2024-07-15 22:43:23.148038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.503 [2024-07-15 22:43:23.148048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.503 qpair failed and we were unable to recover it. 00:26:59.503 [2024-07-15 22:43:23.148236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.503 [2024-07-15 22:43:23.148247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.503 qpair failed and we were unable to recover it. 00:26:59.503 [2024-07-15 22:43:23.148434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.503 [2024-07-15 22:43:23.148444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.503 qpair failed and we were unable to recover it. 00:26:59.503 [2024-07-15 22:43:23.148574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.503 [2024-07-15 22:43:23.148585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.503 qpair failed and we were unable to recover it. 00:26:59.503 [2024-07-15 22:43:23.148776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.503 [2024-07-15 22:43:23.148787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.503 qpair failed and we were unable to recover it. 
00:26:59.503 [2024-07-15 22:43:23.148905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.503 [2024-07-15 22:43:23.148914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.503 qpair failed and we were unable to recover it. 00:26:59.503 [2024-07-15 22:43:23.149038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.503 [2024-07-15 22:43:23.149047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.503 qpair failed and we were unable to recover it. 00:26:59.503 [2024-07-15 22:43:23.149172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.503 [2024-07-15 22:43:23.149181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.503 qpair failed and we were unable to recover it. 00:26:59.503 [2024-07-15 22:43:23.149311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.503 [2024-07-15 22:43:23.149320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.503 qpair failed and we were unable to recover it. 00:26:59.503 [2024-07-15 22:43:23.149442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.503 [2024-07-15 22:43:23.149451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.503 qpair failed and we were unable to recover it. 00:26:59.503 [2024-07-15 22:43:23.149575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.503 [2024-07-15 22:43:23.149585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.503 qpair failed and we were unable to recover it. 00:26:59.503 [2024-07-15 22:43:23.149704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.503 [2024-07-15 22:43:23.149713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.503 qpair failed and we were unable to recover it. 00:26:59.503 [2024-07-15 22:43:23.149980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.503 [2024-07-15 22:43:23.149990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.503 qpair failed and we were unable to recover it. 00:26:59.503 [2024-07-15 22:43:23.150169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.503 [2024-07-15 22:43:23.150178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.503 qpair failed and we were unable to recover it. 00:26:59.503 [2024-07-15 22:43:23.150328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.503 [2024-07-15 22:43:23.150338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.503 qpair failed and we were unable to recover it. 
00:26:59.503 [2024-07-15 22:43:23.150448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.503 [2024-07-15 22:43:23.150458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.503 qpair failed and we were unable to recover it. 00:26:59.503 [2024-07-15 22:43:23.150590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.503 [2024-07-15 22:43:23.150600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.503 qpair failed and we were unable to recover it. 00:26:59.503 [2024-07-15 22:43:23.150758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.503 [2024-07-15 22:43:23.150768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.503 qpair failed and we were unable to recover it. 00:26:59.503 [2024-07-15 22:43:23.150846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.503 [2024-07-15 22:43:23.150856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.503 qpair failed and we were unable to recover it. 00:26:59.503 [2024-07-15 22:43:23.150992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.503 [2024-07-15 22:43:23.151002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.503 qpair failed and we were unable to recover it. 00:26:59.504 [2024-07-15 22:43:23.151114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.504 [2024-07-15 22:43:23.151124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.504 qpair failed and we were unable to recover it. 00:26:59.504 [2024-07-15 22:43:23.151317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.504 [2024-07-15 22:43:23.151327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.504 qpair failed and we were unable to recover it. 00:26:59.504 [2024-07-15 22:43:23.151453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.504 [2024-07-15 22:43:23.151463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.504 qpair failed and we were unable to recover it. 00:26:59.504 [2024-07-15 22:43:23.151649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.504 [2024-07-15 22:43:23.151660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.504 qpair failed and we were unable to recover it. 00:26:59.504 [2024-07-15 22:43:23.151838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.504 [2024-07-15 22:43:23.151848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.504 qpair failed and we were unable to recover it. 
00:26:59.504 [2024-07-15 22:43:23.151962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.504 [2024-07-15 22:43:23.151972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.504 qpair failed and we were unable to recover it. 00:26:59.504 [2024-07-15 22:43:23.152086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.504 [2024-07-15 22:43:23.152096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.504 qpair failed and we were unable to recover it. 00:26:59.504 [2024-07-15 22:43:23.152249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.504 [2024-07-15 22:43:23.152259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.504 qpair failed and we were unable to recover it. 00:26:59.504 [2024-07-15 22:43:23.152394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.504 [2024-07-15 22:43:23.152404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.504 qpair failed and we were unable to recover it. 00:26:59.504 [2024-07-15 22:43:23.152604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.504 [2024-07-15 22:43:23.152614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.504 qpair failed and we were unable to recover it. 00:26:59.504 [2024-07-15 22:43:23.152743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.504 [2024-07-15 22:43:23.152753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.504 qpair failed and we were unable to recover it. 00:26:59.504 [2024-07-15 22:43:23.152853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.504 [2024-07-15 22:43:23.152863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.504 qpair failed and we were unable to recover it. 00:26:59.504 [2024-07-15 22:43:23.153043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.504 [2024-07-15 22:43:23.153052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.504 qpair failed and we were unable to recover it. 00:26:59.504 [2024-07-15 22:43:23.153163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.504 [2024-07-15 22:43:23.153172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.504 qpair failed and we were unable to recover it. 00:26:59.504 [2024-07-15 22:43:23.153299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.504 [2024-07-15 22:43:23.153310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.504 qpair failed and we were unable to recover it. 
00:26:59.504 [2024-07-15 22:43:23.153482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.504 [2024-07-15 22:43:23.153493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.504 qpair failed and we were unable to recover it. 00:26:59.504 [2024-07-15 22:43:23.153734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.504 [2024-07-15 22:43:23.153744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.504 qpair failed and we were unable to recover it. 00:26:59.504 [2024-07-15 22:43:23.153860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.504 [2024-07-15 22:43:23.153869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.504 qpair failed and we were unable to recover it. 00:26:59.504 [2024-07-15 22:43:23.153994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.504 [2024-07-15 22:43:23.154004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.504 qpair failed and we were unable to recover it. 00:26:59.504 [2024-07-15 22:43:23.154138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.504 [2024-07-15 22:43:23.154148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.504 qpair failed and we were unable to recover it. 00:26:59.504 [2024-07-15 22:43:23.154259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.504 [2024-07-15 22:43:23.154270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.504 qpair failed and we were unable to recover it. 00:26:59.504 [2024-07-15 22:43:23.154383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.504 [2024-07-15 22:43:23.154392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.504 qpair failed and we were unable to recover it. 00:26:59.504 [2024-07-15 22:43:23.154580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.504 [2024-07-15 22:43:23.154589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.504 qpair failed and we were unable to recover it. 00:26:59.504 [2024-07-15 22:43:23.154697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.504 [2024-07-15 22:43:23.154709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.504 qpair failed and we were unable to recover it. 00:26:59.504 [2024-07-15 22:43:23.154823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.504 [2024-07-15 22:43:23.154833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.504 qpair failed and we were unable to recover it. 
00:26:59.504 [2024-07-15 22:43:23.154953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.504 [2024-07-15 22:43:23.154963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.504 qpair failed and we were unable to recover it. 00:26:59.504 [2024-07-15 22:43:23.155073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.504 [2024-07-15 22:43:23.155082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.504 qpair failed and we were unable to recover it. 00:26:59.504 [2024-07-15 22:43:23.155215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.504 [2024-07-15 22:43:23.155229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.504 qpair failed and we were unable to recover it. 00:26:59.504 [2024-07-15 22:43:23.155426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.504 [2024-07-15 22:43:23.155436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.504 qpair failed and we were unable to recover it. 00:26:59.504 [2024-07-15 22:43:23.155557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.504 [2024-07-15 22:43:23.155567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.504 qpair failed and we were unable to recover it. 00:26:59.504 [2024-07-15 22:43:23.155696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.504 [2024-07-15 22:43:23.155706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.504 qpair failed and we were unable to recover it. 00:26:59.504 [2024-07-15 22:43:23.155837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.504 [2024-07-15 22:43:23.155847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.504 qpair failed and we were unable to recover it. 00:26:59.504 [2024-07-15 22:43:23.155995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.504 [2024-07-15 22:43:23.156004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.504 qpair failed and we were unable to recover it. 00:26:59.504 [2024-07-15 22:43:23.156117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.504 [2024-07-15 22:43:23.156126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.504 qpair failed and we were unable to recover it. 00:26:59.504 [2024-07-15 22:43:23.156249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.504 [2024-07-15 22:43:23.156260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.504 qpair failed and we were unable to recover it. 
00:26:59.504 [2024-07-15 22:43:23.156379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.504 [2024-07-15 22:43:23.156389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.504 qpair failed and we were unable to recover it. 00:26:59.504 [2024-07-15 22:43:23.156569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.504 [2024-07-15 22:43:23.156579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.504 qpair failed and we were unable to recover it. 00:26:59.504 [2024-07-15 22:43:23.156692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.504 [2024-07-15 22:43:23.156701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.504 qpair failed and we were unable to recover it. 00:26:59.504 [2024-07-15 22:43:23.156781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.504 [2024-07-15 22:43:23.156790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.504 qpair failed and we were unable to recover it. 00:26:59.504 [2024-07-15 22:43:23.156981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.504 [2024-07-15 22:43:23.156991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.504 qpair failed and we were unable to recover it. 00:26:59.504 [2024-07-15 22:43:23.157103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.504 [2024-07-15 22:43:23.157113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.504 qpair failed and we were unable to recover it. 00:26:59.504 [2024-07-15 22:43:23.157223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.504 [2024-07-15 22:43:23.157237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.504 qpair failed and we were unable to recover it. 00:26:59.504 [2024-07-15 22:43:23.157444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.504 [2024-07-15 22:43:23.157454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.504 qpair failed and we were unable to recover it. 00:26:59.504 [2024-07-15 22:43:23.157539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.504 [2024-07-15 22:43:23.157548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.504 qpair failed and we were unable to recover it. 00:26:59.504 [2024-07-15 22:43:23.157676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.504 [2024-07-15 22:43:23.157686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.504 qpair failed and we were unable to recover it. 
00:26:59.504 [2024-07-15 22:43:23.157795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.504 [2024-07-15 22:43:23.157804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.504 qpair failed and we were unable to recover it. 00:26:59.504 [2024-07-15 22:43:23.157938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.504 [2024-07-15 22:43:23.157947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.504 qpair failed and we were unable to recover it. 00:26:59.504 [2024-07-15 22:43:23.158085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.504 [2024-07-15 22:43:23.158095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.504 qpair failed and we were unable to recover it. 00:26:59.504 [2024-07-15 22:43:23.158281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.504 [2024-07-15 22:43:23.158292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.504 qpair failed and we were unable to recover it. 00:26:59.504 [2024-07-15 22:43:23.158410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.504 [2024-07-15 22:43:23.158420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.504 qpair failed and we were unable to recover it. 00:26:59.504 [2024-07-15 22:43:23.158543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.504 [2024-07-15 22:43:23.158553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.504 qpair failed and we were unable to recover it. 00:26:59.504 [2024-07-15 22:43:23.158742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.504 [2024-07-15 22:43:23.158752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.504 qpair failed and we were unable to recover it. 00:26:59.504 [2024-07-15 22:43:23.158848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.504 [2024-07-15 22:43:23.158858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.504 qpair failed and we were unable to recover it. 00:26:59.504 [2024-07-15 22:43:23.158981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.504 [2024-07-15 22:43:23.158990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.504 qpair failed and we were unable to recover it. 00:26:59.504 [2024-07-15 22:43:23.159240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.504 [2024-07-15 22:43:23.159250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.504 qpair failed and we were unable to recover it. 
00:26:59.504 [2024-07-15 22:43:23.159432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.504 [2024-07-15 22:43:23.159442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.504 qpair failed and we were unable to recover it. 00:26:59.504 [2024-07-15 22:43:23.159619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.504 [2024-07-15 22:43:23.159629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.504 qpair failed and we were unable to recover it. 00:26:59.504 [2024-07-15 22:43:23.159717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.505 [2024-07-15 22:43:23.159727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.505 qpair failed and we were unable to recover it. 00:26:59.505 [2024-07-15 22:43:23.159908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.505 [2024-07-15 22:43:23.159917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.505 qpair failed and we were unable to recover it. 00:26:59.505 [2024-07-15 22:43:23.160023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.505 [2024-07-15 22:43:23.160033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.505 qpair failed and we were unable to recover it. 00:26:59.505 [2024-07-15 22:43:23.160146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.505 [2024-07-15 22:43:23.160155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.505 qpair failed and we were unable to recover it. 00:26:59.505 [2024-07-15 22:43:23.160337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.505 [2024-07-15 22:43:23.160348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.505 qpair failed and we were unable to recover it. 00:26:59.505 [2024-07-15 22:43:23.160593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.505 [2024-07-15 22:43:23.160603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.505 qpair failed and we were unable to recover it. 00:26:59.505 [2024-07-15 22:43:23.160712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.505 [2024-07-15 22:43:23.160723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.505 qpair failed and we were unable to recover it. 00:26:59.505 [2024-07-15 22:43:23.160847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.505 [2024-07-15 22:43:23.160858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.505 qpair failed and we were unable to recover it. 
00:26:59.505 [2024-07-15 22:43:23.160977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.505 [2024-07-15 22:43:23.160987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:59.505 qpair failed and we were unable to recover it.
00:26:59.505 [2024-07-15 22:43:23.161235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.505 [2024-07-15 22:43:23.161246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:59.505 qpair failed and we were unable to recover it.
00:26:59.505 [2024-07-15 22:43:23.161367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.505 [2024-07-15 22:43:23.161377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:59.505 qpair failed and we were unable to recover it.
00:26:59.505 [2024-07-15 22:43:23.161500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.505 [2024-07-15 22:43:23.161512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:59.505 qpair failed and we were unable to recover it.
00:26:59.505 [2024-07-15 22:43:23.161631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.505 [2024-07-15 22:43:23.161641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:59.505 qpair failed and we were unable to recover it.
00:26:59.505 [2024-07-15 22:43:23.161741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.505 [2024-07-15 22:43:23.161751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:59.505 qpair failed and we were unable to recover it.
00:26:59.505 [2024-07-15 22:43:23.161932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.505 [2024-07-15 22:43:23.161942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:59.505 qpair failed and we were unable to recover it.
00:26:59.505 [2024-07-15 22:43:23.162054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.505 [2024-07-15 22:43:23.162064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:59.505 qpair failed and we were unable to recover it.
00:26:59.505 [2024-07-15 22:43:23.162183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.505 [2024-07-15 22:43:23.162193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:59.505 qpair failed and we were unable to recover it.
00:26:59.505 [2024-07-15 22:43:23.162376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.505 [2024-07-15 22:43:23.162386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:59.505 qpair failed and we were unable to recover it.
00:26:59.505 [2024-07-15 22:43:23.162560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.505 [2024-07-15 22:43:23.162570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:59.505 qpair failed and we were unable to recover it.
00:26:59.505 [2024-07-15 22:43:23.162759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.505 [2024-07-15 22:43:23.162770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:59.505 qpair failed and we were unable to recover it.
00:26:59.505 [2024-07-15 22:43:23.162945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.505 [2024-07-15 22:43:23.162954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:59.505 qpair failed and we were unable to recover it.
00:26:59.505 [2024-07-15 22:43:23.163110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.505 [2024-07-15 22:43:23.163120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:59.505 qpair failed and we were unable to recover it.
00:26:59.505 [2024-07-15 22:43:23.163243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.505 [2024-07-15 22:43:23.163253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:59.505 qpair failed and we were unable to recover it.
00:26:59.505 [2024-07-15 22:43:23.163454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.505 [2024-07-15 22:43:23.163464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:59.505 qpair failed and we were unable to recover it.
00:26:59.505 [2024-07-15 22:43:23.163650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.505 [2024-07-15 22:43:23.163660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:59.505 qpair failed and we were unable to recover it.
00:26:59.505 [2024-07-15 22:43:23.163839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.505 [2024-07-15 22:43:23.163849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:59.505 qpair failed and we were unable to recover it.
00:26:59.505 [2024-07-15 22:43:23.164029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.505 [2024-07-15 22:43:23.164039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:59.505 qpair failed and we were unable to recover it.
00:26:59.505 [2024-07-15 22:43:23.164164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.505 [2024-07-15 22:43:23.164174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:59.505 qpair failed and we were unable to recover it.
00:26:59.505 [2024-07-15 22:43:23.164274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.505 [2024-07-15 22:43:23.164284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:59.505 qpair failed and we were unable to recover it.
00:26:59.505 [2024-07-15 22:43:23.164375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.505 [2024-07-15 22:43:23.164385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:59.505 qpair failed and we were unable to recover it.
00:26:59.505 [2024-07-15 22:43:23.164502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.505 [2024-07-15 22:43:23.164512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:59.505 qpair failed and we were unable to recover it.
00:26:59.505 [2024-07-15 22:43:23.164630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.505 [2024-07-15 22:43:23.164640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:59.505 qpair failed and we were unable to recover it.
00:26:59.505 [2024-07-15 22:43:23.164813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.505 [2024-07-15 22:43:23.164823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:59.505 qpair failed and we were unable to recover it.
00:26:59.505 [2024-07-15 22:43:23.164959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.505 [2024-07-15 22:43:23.164988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420
00:26:59.505 qpair failed and we were unable to recover it.
00:26:59.505 [2024-07-15 22:43:23.165132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.505 [2024-07-15 22:43:23.165147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420
00:26:59.505 qpair failed and we were unable to recover it.
00:26:59.505 [2024-07-15 22:43:23.165249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.505 [2024-07-15 22:43:23.165264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420
00:26:59.505 qpair failed and we were unable to recover it.
00:26:59.505 [2024-07-15 22:43:23.165383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.505 [2024-07-15 22:43:23.165398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420
00:26:59.505 qpair failed and we were unable to recover it.
00:26:59.505 [2024-07-15 22:43:23.165536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.505 [2024-07-15 22:43:23.165550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420
00:26:59.505 qpair failed and we were unable to recover it.
00:26:59.505 [2024-07-15 22:43:23.165686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.505 [2024-07-15 22:43:23.165700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420
00:26:59.505 qpair failed and we were unable to recover it.
00:26:59.505 [2024-07-15 22:43:23.165913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.505 [2024-07-15 22:43:23.165927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420
00:26:59.505 qpair failed and we were unable to recover it.
00:26:59.505 [2024-07-15 22:43:23.166051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.505 [2024-07-15 22:43:23.166064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420
00:26:59.505 qpair failed and we were unable to recover it.
00:26:59.505 [2024-07-15 22:43:23.166182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.505 [2024-07-15 22:43:23.166196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420
00:26:59.505 qpair failed and we were unable to recover it.
00:26:59.505 [2024-07-15 22:43:23.166380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.505 [2024-07-15 22:43:23.166394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420
00:26:59.505 qpair failed and we were unable to recover it.
00:26:59.505 [2024-07-15 22:43:23.166490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.505 [2024-07-15 22:43:23.166504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420
00:26:59.505 qpair failed and we were unable to recover it.
00:26:59.505 [2024-07-15 22:43:23.166634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.505 [2024-07-15 22:43:23.166648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420
00:26:59.505 qpair failed and we were unable to recover it.
00:26:59.505 [2024-07-15 22:43:23.166822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.505 [2024-07-15 22:43:23.166836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420
00:26:59.505 qpair failed and we were unable to recover it.
00:26:59.505 [2024-07-15 22:43:23.166927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.505 [2024-07-15 22:43:23.166941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420
00:26:59.505 qpair failed and we were unable to recover it.
00:26:59.505 [2024-07-15 22:43:23.167085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.505 [2024-07-15 22:43:23.167099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420
00:26:59.505 qpair failed and we were unable to recover it.
00:26:59.505 [2024-07-15 22:43:23.167283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.505 [2024-07-15 22:43:23.167297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420
00:26:59.505 qpair failed and we were unable to recover it.
00:26:59.505 [2024-07-15 22:43:23.167417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.505 [2024-07-15 22:43:23.167431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420
00:26:59.505 qpair failed and we were unable to recover it.
00:26:59.505 [2024-07-15 22:43:23.167544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.505 [2024-07-15 22:43:23.167558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420
00:26:59.505 qpair failed and we were unable to recover it.
00:26:59.505 [2024-07-15 22:43:23.167690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.505 [2024-07-15 22:43:23.167704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420
00:26:59.505 qpair failed and we were unable to recover it.
00:26:59.505 [2024-07-15 22:43:23.167822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.505 [2024-07-15 22:43:23.167836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420
00:26:59.505 qpair failed and we were unable to recover it.
00:26:59.505 [2024-07-15 22:43:23.168038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.505 [2024-07-15 22:43:23.168052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420
00:26:59.505 qpair failed and we were unable to recover it.
00:26:59.505 [2024-07-15 22:43:23.168236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.505 [2024-07-15 22:43:23.168250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420
00:26:59.505 qpair failed and we were unable to recover it.
00:26:59.505 [2024-07-15 22:43:23.168468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.505 [2024-07-15 22:43:23.168483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420
00:26:59.505 qpair failed and we were unable to recover it.
00:26:59.505 [2024-07-15 22:43:23.168739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.505 [2024-07-15 22:43:23.168753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420
00:26:59.505 qpair failed and we were unable to recover it.
00:26:59.505 [2024-07-15 22:43:23.169018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.505 [2024-07-15 22:43:23.169032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420
00:26:59.505 qpair failed and we were unable to recover it.
00:26:59.505 [2024-07-15 22:43:23.169163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.505 [2024-07-15 22:43:23.169177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420
00:26:59.505 qpair failed and we were unable to recover it.
00:26:59.505 [2024-07-15 22:43:23.169316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.505 [2024-07-15 22:43:23.169331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420
00:26:59.505 qpair failed and we were unable to recover it.
00:26:59.505 [2024-07-15 22:43:23.169468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.505 [2024-07-15 22:43:23.169485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420
00:26:59.505 qpair failed and we were unable to recover it.
00:26:59.505 [2024-07-15 22:43:23.169676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.505 [2024-07-15 22:43:23.169690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420
00:26:59.506 qpair failed and we were unable to recover it.
00:26:59.506 [2024-07-15 22:43:23.169818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.506 [2024-07-15 22:43:23.169832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420
00:26:59.506 qpair failed and we were unable to recover it.
00:26:59.506 [2024-07-15 22:43:23.169953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.506 [2024-07-15 22:43:23.169966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420
00:26:59.506 qpair failed and we were unable to recover it.
00:26:59.506 [2024-07-15 22:43:23.170094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.506 [2024-07-15 22:43:23.170108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420
00:26:59.506 qpair failed and we were unable to recover it.
00:26:59.506 [2024-07-15 22:43:23.170235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.506 [2024-07-15 22:43:23.170249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420
00:26:59.506 qpair failed and we were unable to recover it.
00:26:59.506 [2024-07-15 22:43:23.170472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.506 [2024-07-15 22:43:23.170486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420
00:26:59.506 qpair failed and we were unable to recover it.
00:26:59.506 [2024-07-15 22:43:23.170757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.506 [2024-07-15 22:43:23.170771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420
00:26:59.506 qpair failed and we were unable to recover it.
00:26:59.506 [2024-07-15 22:43:23.170882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.506 [2024-07-15 22:43:23.170897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420
00:26:59.506 qpair failed and we were unable to recover it.
00:26:59.506 [2024-07-15 22:43:23.171106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.506 [2024-07-15 22:43:23.171120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420
00:26:59.506 qpair failed and we were unable to recover it.
00:26:59.506 [2024-07-15 22:43:23.171257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.506 [2024-07-15 22:43:23.171272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420
00:26:59.506 qpair failed and we were unable to recover it.
00:26:59.506 [2024-07-15 22:43:23.171526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.506 [2024-07-15 22:43:23.171540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420
00:26:59.506 qpair failed and we were unable to recover it.
00:26:59.506 [2024-07-15 22:43:23.171661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.506 [2024-07-15 22:43:23.171675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420
00:26:59.506 qpair failed and we were unable to recover it.
00:26:59.506 [2024-07-15 22:43:23.171797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.506 [2024-07-15 22:43:23.171811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420
00:26:59.506 qpair failed and we were unable to recover it.
00:26:59.506 [2024-07-15 22:43:23.171929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.506 [2024-07-15 22:43:23.171943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420
00:26:59.506 qpair failed and we were unable to recover it.
00:26:59.506 22:43:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:26:59.506 [2024-07-15 22:43:23.172066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.506 [2024-07-15 22:43:23.172090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420
00:26:59.506 qpair failed and we were unable to recover it.
00:26:59.506 [2024-07-15 22:43:23.172203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.506 [2024-07-15 22:43:23.172216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420
00:26:59.506 qpair failed and we were unable to recover it.
00:26:59.506 22:43:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@862 -- # return 0
00:26:59.506 [2024-07-15 22:43:23.172467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.506 [2024-07-15 22:43:23.172485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420
00:26:59.506 qpair failed and we were unable to recover it.
00:26:59.506 [2024-07-15 22:43:23.172607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.506 [2024-07-15 22:43:23.172620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420
00:26:59.506 qpair failed and we were unable to recover it.
00:26:59.506 22:43:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:26:59.506 [2024-07-15 22:43:23.172808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.506 [2024-07-15 22:43:23.172824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420
00:26:59.506 qpair failed and we were unable to recover it.
00:26:59.506 [2024-07-15 22:43:23.172946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.506 [2024-07-15 22:43:23.172961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420
00:26:59.506 qpair failed and we were unable to recover it.
00:26:59.506 22:43:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable
00:26:59.506 [2024-07-15 22:43:23.173082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.506 [2024-07-15 22:43:23.173096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420
00:26:59.506 qpair failed and we were unable to recover it.
00:26:59.506 22:43:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:26:59.506 [2024-07-15 22:43:23.173289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.506 [2024-07-15 22:43:23.173306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420
00:26:59.506 qpair failed and we were unable to recover it.
00:26:59.506 [2024-07-15 22:43:23.173520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.506 [2024-07-15 22:43:23.173533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420
00:26:59.506 qpair failed and we were unable to recover it.
00:26:59.506 [2024-07-15 22:43:23.173668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.506 [2024-07-15 22:43:23.173681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420
00:26:59.506 qpair failed and we were unable to recover it.
00:26:59.506 [2024-07-15 22:43:23.173939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.506 [2024-07-15 22:43:23.173956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420
00:26:59.506 qpair failed and we were unable to recover it.
00:26:59.506 [2024-07-15 22:43:23.174086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.506 [2024-07-15 22:43:23.174100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420
00:26:59.506 qpair failed and we were unable to recover it.
00:26:59.506 [2024-07-15 22:43:23.174221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.506 [2024-07-15 22:43:23.174241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420
00:26:59.506 qpair failed and we were unable to recover it.
00:26:59.506 [2024-07-15 22:43:23.174363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.506 [2024-07-15 22:43:23.174378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420
00:26:59.506 qpair failed and we were unable to recover it.
00:26:59.506 [2024-07-15 22:43:23.174576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.506 [2024-07-15 22:43:23.174589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420
00:26:59.506 qpair failed and we were unable to recover it.
00:26:59.506 [2024-07-15 22:43:23.174831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.506 [2024-07-15 22:43:23.174844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420
00:26:59.506 qpair failed and we were unable to recover it.
00:26:59.506 [2024-07-15 22:43:23.174971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.506 [2024-07-15 22:43:23.174985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420
00:26:59.506 qpair failed and we were unable to recover it.
00:26:59.506 [2024-07-15 22:43:23.175098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.506 [2024-07-15 22:43:23.175112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420
00:26:59.506 qpair failed and we were unable to recover it.
00:26:59.506 [2024-07-15 22:43:23.175256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.506 [2024-07-15 22:43:23.175270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420
00:26:59.506 qpair failed and we were unable to recover it.
00:26:59.506 [2024-07-15 22:43:23.175403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.506 [2024-07-15 22:43:23.175417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420
00:26:59.506 qpair failed and we were unable to recover it.
00:26:59.506 [2024-07-15 22:43:23.175586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.506 [2024-07-15 22:43:23.175600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420
00:26:59.506 qpair failed and we were unable to recover it.
00:26:59.506 [2024-07-15 22:43:23.175802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.506 [2024-07-15 22:43:23.175816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420
00:26:59.506 qpair failed and we were unable to recover it.
00:26:59.506 [2024-07-15 22:43:23.175940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.506 [2024-07-15 22:43:23.175953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420
00:26:59.506 qpair failed and we were unable to recover it.
00:26:59.506 [2024-07-15 22:43:23.176143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.506 [2024-07-15 22:43:23.176158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420
00:26:59.506 qpair failed and we were unable to recover it.
00:26:59.506 [2024-07-15 22:43:23.176357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.506 [2024-07-15 22:43:23.176372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420
00:26:59.506 qpair failed and we were unable to recover it.
00:26:59.506 [2024-07-15 22:43:23.176578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.506 [2024-07-15 22:43:23.176592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420
00:26:59.506 qpair failed and we were unable to recover it.
00:26:59.506 [2024-07-15 22:43:23.176725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.506 [2024-07-15 22:43:23.176739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420
00:26:59.506 qpair failed and we were unable to recover it.
00:26:59.506 [2024-07-15 22:43:23.176874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.506 [2024-07-15 22:43:23.176890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420
00:26:59.506 qpair failed and we were unable to recover it.
00:26:59.506 [2024-07-15 22:43:23.177092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.506 [2024-07-15 22:43:23.177107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420
00:26:59.506 qpair failed and we were unable to recover it.
00:26:59.506 [2024-07-15 22:43:23.177230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.506 [2024-07-15 22:43:23.177244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420
00:26:59.506 qpair failed and we were unable to recover it.
00:26:59.506 [2024-07-15 22:43:23.177456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.506 [2024-07-15 22:43:23.177471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420
00:26:59.506 qpair failed and we were unable to recover it.
00:26:59.506 [2024-07-15 22:43:23.177610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.506 [2024-07-15 22:43:23.177623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420
00:26:59.506 qpair failed and we were unable to recover it.
00:26:59.506 [2024-07-15 22:43:23.177743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.506 [2024-07-15 22:43:23.177758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420
00:26:59.506 qpair failed and we were unable to recover it.
00:26:59.506 [2024-07-15 22:43:23.177895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.506 [2024-07-15 22:43:23.177908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420
00:26:59.506 qpair failed and we were unable to recover it.
00:26:59.506 [2024-07-15 22:43:23.178043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.506 [2024-07-15 22:43:23.178057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420
00:26:59.506 qpair failed and we were unable to recover it.
00:26:59.506 [2024-07-15 22:43:23.178334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.506 [2024-07-15 22:43:23.178348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420
00:26:59.506 qpair failed and we were unable to recover it.
00:26:59.506 [2024-07-15 22:43:23.178488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.506 [2024-07-15 22:43:23.178503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420
00:26:59.506 qpair failed and we were unable to recover it.
00:26:59.506 [2024-07-15 22:43:23.178633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.506 [2024-07-15 22:43:23.178648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420
00:26:59.506 qpair failed and we were unable to recover it.
00:26:59.506 [2024-07-15 22:43:23.178781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.506 [2024-07-15 22:43:23.178795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420
00:26:59.506 qpair failed and we were unable to recover it.
00:26:59.506 [2024-07-15 22:43:23.178924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.506 [2024-07-15 22:43:23.178938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420
00:26:59.506 qpair failed and we were unable to recover it.
00:26:59.507 [2024-07-15 22:43:23.179055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.507 [2024-07-15 22:43:23.179070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420
00:26:59.507 qpair failed and we were unable to recover it.
00:26:59.507 [2024-07-15 22:43:23.179263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.507 [2024-07-15 22:43:23.179278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420
00:26:59.507 qpair failed and we were unable to recover it.
00:26:59.507 [2024-07-15 22:43:23.179413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.507 [2024-07-15 22:43:23.179426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420
00:26:59.507 qpair failed and we were unable to recover it.
00:26:59.507 [2024-07-15 22:43:23.179625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.507 [2024-07-15 22:43:23.179639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420
00:26:59.507 qpair failed and we were unable to recover it.
00:26:59.507 [2024-07-15 22:43:23.179844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.507 [2024-07-15 22:43:23.179858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420
00:26:59.507 qpair failed and we were unable to recover it.
00:26:59.507 [2024-07-15 22:43:23.180071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.507 [2024-07-15 22:43:23.180085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420
00:26:59.507 qpair failed and we were unable to recover it.
00:26:59.507 [2024-07-15 22:43:23.180236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.507 [2024-07-15 22:43:23.180250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420
00:26:59.507 qpair failed and we were unable to recover it.
00:26:59.507 [2024-07-15 22:43:23.180436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.507 [2024-07-15 22:43:23.180450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420
00:26:59.507 qpair failed and we were unable to recover it.
00:26:59.507 [2024-07-15 22:43:23.180641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.507 [2024-07-15 22:43:23.180655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420
00:26:59.507 qpair failed and we were unable to recover it.
00:26:59.507 [2024-07-15 22:43:23.180791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.507 [2024-07-15 22:43:23.180804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420
00:26:59.507 qpair failed and we were unable to recover it.
00:26:59.507 [2024-07-15 22:43:23.180995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.507 [2024-07-15 22:43:23.181008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420
00:26:59.507 qpair failed and we were unable to recover it.
00:26:59.507 [2024-07-15 22:43:23.181151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.507 [2024-07-15 22:43:23.181165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420
00:26:59.507 qpair failed and we were unable to recover it.
00:26:59.507 [2024-07-15 22:43:23.181304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.507 [2024-07-15 22:43:23.181327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420
00:26:59.507 qpair failed and we were unable to recover it.
00:26:59.507 [2024-07-15 22:43:23.181451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.507 [2024-07-15 22:43:23.181463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420
00:26:59.507 qpair failed and we were unable to recover it.
00:26:59.507 [2024-07-15 22:43:23.181655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.507 [2024-07-15 22:43:23.181670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420
00:26:59.507 qpair failed and we were unable to recover it.
00:26:59.507 [2024-07-15 22:43:23.181910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.507 [2024-07-15 22:43:23.181923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420
00:26:59.507 qpair failed and we were unable to recover it.
00:26:59.507 [2024-07-15 22:43:23.182059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.507 [2024-07-15 22:43:23.182073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420
00:26:59.507 qpair failed and we were unable to recover it.
00:26:59.507 [2024-07-15 22:43:23.182273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.507 [2024-07-15 22:43:23.182287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420
00:26:59.507 qpair failed and we were unable to recover it.
00:26:59.507 [2024-07-15 22:43:23.182478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.507 [2024-07-15 22:43:23.182493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420
00:26:59.507 qpair failed and we were unable to recover it.
00:26:59.507 [2024-07-15 22:43:23.182643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.507 [2024-07-15 22:43:23.182658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420
00:26:59.507 qpair failed and we were unable to recover it.
00:26:59.507 [2024-07-15 22:43:23.182780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.507 [2024-07-15 22:43:23.182794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420
00:26:59.507 qpair failed and we were unable to recover it.
00:26:59.507 [2024-07-15 22:43:23.182934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.507 [2024-07-15 22:43:23.182948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420
00:26:59.507 qpair failed and we were unable to recover it.
00:26:59.507 [2024-07-15 22:43:23.183087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.507 [2024-07-15 22:43:23.183101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420
00:26:59.507 qpair failed and we were unable to recover it.
00:26:59.507 [2024-07-15 22:43:23.183244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.507 [2024-07-15 22:43:23.183259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420
00:26:59.507 qpair failed and we were unable to recover it.
00:26:59.507 [2024-07-15 22:43:23.183383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.507 [2024-07-15 22:43:23.183397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420
00:26:59.507 qpair failed and we were unable to recover it.
00:26:59.507 [2024-07-15 22:43:23.183522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.507 [2024-07-15 22:43:23.183537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420
00:26:59.507 qpair failed and we were unable to recover it.
00:26:59.507 [2024-07-15 22:43:23.183740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.507 [2024-07-15 22:43:23.183756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420
00:26:59.507 qpair failed and we were unable to recover it.
00:26:59.507 [2024-07-15 22:43:23.183883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.507 [2024-07-15 22:43:23.183898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420
00:26:59.507 qpair failed and we were unable to recover it.
00:26:59.507 [2024-07-15 22:43:23.184098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.507 [2024-07-15 22:43:23.184112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420
00:26:59.507 qpair failed and we were unable to recover it.
00:26:59.507 [2024-07-15 22:43:23.184302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.507 [2024-07-15 22:43:23.184317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420
00:26:59.507 qpair failed and we were unable to recover it.
00:26:59.507 [2024-07-15 22:43:23.184448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.507 [2024-07-15 22:43:23.184464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420
00:26:59.507 qpair failed and we were unable to recover it.
00:26:59.507 [2024-07-15 22:43:23.184613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.507 [2024-07-15 22:43:23.184626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420
00:26:59.507 qpair failed and we were unable to recover it.
00:26:59.507 [2024-07-15 22:43:23.184756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.507 [2024-07-15 22:43:23.184770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420
00:26:59.507 qpair failed and we were unable to recover it.
00:26:59.507 [2024-07-15 22:43:23.184889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.507 [2024-07-15 22:43:23.184904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420
00:26:59.507 qpair failed and we were unable to recover it.
00:26:59.507 [2024-07-15 22:43:23.185029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.507 [2024-07-15 22:43:23.185042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420
00:26:59.507 qpair failed and we were unable to recover it.
00:26:59.507 [2024-07-15 22:43:23.185159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.507 [2024-07-15 22:43:23.185173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420
00:26:59.507 qpair failed and we were unable to recover it.
00:26:59.507 [2024-07-15 22:43:23.185459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.507 [2024-07-15 22:43:23.185473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420
00:26:59.507 qpair failed and we were unable to recover it.
00:26:59.507 [2024-07-15 22:43:23.185600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.507 [2024-07-15 22:43:23.185613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420
00:26:59.507 qpair failed and we were unable to recover it.
00:26:59.507 [2024-07-15 22:43:23.185751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.507 [2024-07-15 22:43:23.185764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:59.507 qpair failed and we were unable to recover it.
00:26:59.507 [2024-07-15 22:43:23.185889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.507 [2024-07-15 22:43:23.185899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:59.507 qpair failed and we were unable to recover it.
00:26:59.507 [2024-07-15 22:43:23.186030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.507 [2024-07-15 22:43:23.186040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:59.507 qpair failed and we were unable to recover it.
00:26:59.507 [2024-07-15 22:43:23.186176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.507 [2024-07-15 22:43:23.186187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:59.507 qpair failed and we were unable to recover it.
00:26:59.507 [2024-07-15 22:43:23.186318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.507 [2024-07-15 22:43:23.186329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:59.507 qpair failed and we were unable to recover it.
00:26:59.507 [2024-07-15 22:43:23.186457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.507 [2024-07-15 22:43:23.186468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:59.507 qpair failed and we were unable to recover it.
00:26:59.507 [2024-07-15 22:43:23.186596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.507 [2024-07-15 22:43:23.186607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:59.507 qpair failed and we were unable to recover it.
00:26:59.507 [2024-07-15 22:43:23.186722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.507 [2024-07-15 22:43:23.186733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:59.507 qpair failed and we were unable to recover it.
00:26:59.507 [2024-07-15 22:43:23.186918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.507 [2024-07-15 22:43:23.186930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:59.507 qpair failed and we were unable to recover it.
00:26:59.507 [2024-07-15 22:43:23.187054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.507 [2024-07-15 22:43:23.187063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:59.507 qpair failed and we were unable to recover it.
00:26:59.507 [2024-07-15 22:43:23.187185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.507 [2024-07-15 22:43:23.187195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:59.507 qpair failed and we were unable to recover it.
00:26:59.507 [2024-07-15 22:43:23.187317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.507 [2024-07-15 22:43:23.187327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:59.507 qpair failed and we were unable to recover it.
00:26:59.507 [2024-07-15 22:43:23.187518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.507 [2024-07-15 22:43:23.187528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:59.507 qpair failed and we were unable to recover it.
00:26:59.507 [2024-07-15 22:43:23.187664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.507 [2024-07-15 22:43:23.187677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:59.507 qpair failed and we were unable to recover it.
00:26:59.507 [2024-07-15 22:43:23.187788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.507 [2024-07-15 22:43:23.187799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:59.507 qpair failed and we were unable to recover it.
00:26:59.507 [2024-07-15 22:43:23.187977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.507 [2024-07-15 22:43:23.187986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:59.507 qpair failed and we were unable to recover it.
00:26:59.507 [2024-07-15 22:43:23.188087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.507 [2024-07-15 22:43:23.188096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:59.507 qpair failed and we were unable to recover it.
00:26:59.507 [2024-07-15 22:43:23.188278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.507 [2024-07-15 22:43:23.188289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:59.507 qpair failed and we were unable to recover it.
00:26:59.507 [2024-07-15 22:43:23.188581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.507 [2024-07-15 22:43:23.188593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:59.507 qpair failed and we were unable to recover it.
00:26:59.507 [2024-07-15 22:43:23.188767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.507 [2024-07-15 22:43:23.188777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:59.507 qpair failed and we were unable to recover it.
00:26:59.507 [2024-07-15 22:43:23.188971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.507 [2024-07-15 22:43:23.188981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:59.507 qpair failed and we were unable to recover it.
00:26:59.507 [2024-07-15 22:43:23.189106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.507 [2024-07-15 22:43:23.189117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:59.507 qpair failed and we were unable to recover it.
00:26:59.507 [2024-07-15 22:43:23.189251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.507 [2024-07-15 22:43:23.189261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.507 qpair failed and we were unable to recover it. 00:26:59.507 [2024-07-15 22:43:23.189435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.508 [2024-07-15 22:43:23.189444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.508 qpair failed and we were unable to recover it. 00:26:59.508 [2024-07-15 22:43:23.189557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.508 [2024-07-15 22:43:23.189567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.508 qpair failed and we were unable to recover it. 00:26:59.508 [2024-07-15 22:43:23.189683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.508 [2024-07-15 22:43:23.189692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.508 qpair failed and we were unable to recover it. 00:26:59.508 [2024-07-15 22:43:23.189797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.508 [2024-07-15 22:43:23.189807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.508 qpair failed and we were unable to recover it. 00:26:59.508 [2024-07-15 22:43:23.189933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.508 [2024-07-15 22:43:23.189942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.508 qpair failed and we were unable to recover it. 00:26:59.508 [2024-07-15 22:43:23.190056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.508 [2024-07-15 22:43:23.190066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.508 qpair failed and we were unable to recover it. 00:26:59.508 [2024-07-15 22:43:23.190295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.508 [2024-07-15 22:43:23.190306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.508 qpair failed and we were unable to recover it. 00:26:59.508 [2024-07-15 22:43:23.190418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.508 [2024-07-15 22:43:23.190429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.508 qpair failed and we were unable to recover it. 00:26:59.508 [2024-07-15 22:43:23.190524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.508 [2024-07-15 22:43:23.190534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.508 qpair failed and we were unable to recover it. 
00:26:59.508 [2024-07-15 22:43:23.190660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.508 [2024-07-15 22:43:23.190669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.508 qpair failed and we were unable to recover it. 00:26:59.508 [2024-07-15 22:43:23.190769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.508 [2024-07-15 22:43:23.190780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.508 qpair failed and we were unable to recover it. 00:26:59.508 [2024-07-15 22:43:23.190895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.508 [2024-07-15 22:43:23.190905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.508 qpair failed and we were unable to recover it. 00:26:59.508 [2024-07-15 22:43:23.191022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.508 [2024-07-15 22:43:23.191031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.508 qpair failed and we were unable to recover it. 00:26:59.508 [2024-07-15 22:43:23.191116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.508 [2024-07-15 22:43:23.191126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.508 qpair failed and we were unable to recover it. 00:26:59.508 [2024-07-15 22:43:23.191253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.508 [2024-07-15 22:43:23.191263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.508 qpair failed and we were unable to recover it. 00:26:59.508 [2024-07-15 22:43:23.191382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.508 [2024-07-15 22:43:23.191393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.508 qpair failed and we were unable to recover it. 00:26:59.508 [2024-07-15 22:43:23.191522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.508 [2024-07-15 22:43:23.191534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.508 qpair failed and we were unable to recover it. 00:26:59.508 [2024-07-15 22:43:23.191660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.508 [2024-07-15 22:43:23.191672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.508 qpair failed and we were unable to recover it. 00:26:59.508 [2024-07-15 22:43:23.191781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.508 [2024-07-15 22:43:23.191792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.508 qpair failed and we were unable to recover it. 
00:26:59.508 [2024-07-15 22:43:23.191924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.508 [2024-07-15 22:43:23.191934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.508 qpair failed and we were unable to recover it. 00:26:59.508 [2024-07-15 22:43:23.192050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.508 [2024-07-15 22:43:23.192060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.508 qpair failed and we were unable to recover it. 00:26:59.508 [2024-07-15 22:43:23.192174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.508 [2024-07-15 22:43:23.192183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.508 qpair failed and we were unable to recover it. 00:26:59.508 [2024-07-15 22:43:23.192317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.508 [2024-07-15 22:43:23.192327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.508 qpair failed and we were unable to recover it. 00:26:59.508 [2024-07-15 22:43:23.192449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.508 [2024-07-15 22:43:23.192459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.508 qpair failed and we were unable to recover it. 00:26:59.508 [2024-07-15 22:43:23.192567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.508 [2024-07-15 22:43:23.192577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.508 qpair failed and we were unable to recover it. 00:26:59.508 [2024-07-15 22:43:23.192690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.508 [2024-07-15 22:43:23.192699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.508 qpair failed and we were unable to recover it. 00:26:59.508 [2024-07-15 22:43:23.192812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.508 [2024-07-15 22:43:23.192823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.508 qpair failed and we were unable to recover it. 00:26:59.508 [2024-07-15 22:43:23.192943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.508 [2024-07-15 22:43:23.192954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.508 qpair failed and we were unable to recover it. 00:26:59.508 [2024-07-15 22:43:23.193063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.508 [2024-07-15 22:43:23.193074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.508 qpair failed and we were unable to recover it. 
00:26:59.508 [2024-07-15 22:43:23.193207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.508 [2024-07-15 22:43:23.193219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.508 qpair failed and we were unable to recover it. 00:26:59.508 [2024-07-15 22:43:23.193342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.508 [2024-07-15 22:43:23.193353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.508 qpair failed and we were unable to recover it. 00:26:59.508 [2024-07-15 22:43:23.193467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.508 [2024-07-15 22:43:23.193477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.508 qpair failed and we were unable to recover it. 00:26:59.508 [2024-07-15 22:43:23.193606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.508 [2024-07-15 22:43:23.193617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.508 qpair failed and we were unable to recover it. 00:26:59.508 [2024-07-15 22:43:23.193726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.508 [2024-07-15 22:43:23.193737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.508 qpair failed and we were unable to recover it. 00:26:59.508 [2024-07-15 22:43:23.193858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.508 [2024-07-15 22:43:23.193870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.508 qpair failed and we were unable to recover it. 00:26:59.508 [2024-07-15 22:43:23.193995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.508 [2024-07-15 22:43:23.194006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.508 qpair failed and we were unable to recover it. 00:26:59.508 [2024-07-15 22:43:23.194122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.508 [2024-07-15 22:43:23.194132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.508 qpair failed and we were unable to recover it. 00:26:59.508 [2024-07-15 22:43:23.194257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.508 [2024-07-15 22:43:23.194268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.508 qpair failed and we were unable to recover it. 00:26:59.508 [2024-07-15 22:43:23.194390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.508 [2024-07-15 22:43:23.194400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.508 qpair failed and we were unable to recover it. 
00:26:59.508 [2024-07-15 22:43:23.194526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.508 [2024-07-15 22:43:23.194536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.508 qpair failed and we were unable to recover it. 00:26:59.508 [2024-07-15 22:43:23.194656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.508 [2024-07-15 22:43:23.194666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.508 qpair failed and we were unable to recover it. 00:26:59.508 [2024-07-15 22:43:23.194770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.508 [2024-07-15 22:43:23.194780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.508 qpair failed and we were unable to recover it. 00:26:59.508 [2024-07-15 22:43:23.195059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.508 [2024-07-15 22:43:23.195069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.508 qpair failed and we were unable to recover it. 00:26:59.508 [2024-07-15 22:43:23.195320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.508 [2024-07-15 22:43:23.195331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.508 qpair failed and we were unable to recover it. 00:26:59.508 [2024-07-15 22:43:23.195480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.508 [2024-07-15 22:43:23.195490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.508 qpair failed and we were unable to recover it. 00:26:59.508 [2024-07-15 22:43:23.195604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.508 [2024-07-15 22:43:23.195614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.508 qpair failed and we were unable to recover it. 00:26:59.508 [2024-07-15 22:43:23.195757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.508 [2024-07-15 22:43:23.195767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.508 qpair failed and we were unable to recover it. 00:26:59.508 [2024-07-15 22:43:23.195945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.508 [2024-07-15 22:43:23.195955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.508 qpair failed and we were unable to recover it. 00:26:59.508 [2024-07-15 22:43:23.196100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.508 [2024-07-15 22:43:23.196110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.508 qpair failed and we were unable to recover it. 
00:26:59.508 [2024-07-15 22:43:23.196253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.508 [2024-07-15 22:43:23.196263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.508 qpair failed and we were unable to recover it. 00:26:59.508 [2024-07-15 22:43:23.196402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.508 [2024-07-15 22:43:23.196412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.508 qpair failed and we were unable to recover it. 00:26:59.508 [2024-07-15 22:43:23.196551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.508 [2024-07-15 22:43:23.196562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.508 qpair failed and we were unable to recover it. 00:26:59.508 [2024-07-15 22:43:23.196730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.508 [2024-07-15 22:43:23.196742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.508 qpair failed and we were unable to recover it. 00:26:59.508 [2024-07-15 22:43:23.196955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.508 [2024-07-15 22:43:23.196966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.508 qpair failed and we were unable to recover it. 00:26:59.508 [2024-07-15 22:43:23.197186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.508 [2024-07-15 22:43:23.197196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.508 qpair failed and we were unable to recover it. 00:26:59.508 [2024-07-15 22:43:23.197349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.508 [2024-07-15 22:43:23.197360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.508 qpair failed and we were unable to recover it. 00:26:59.508 [2024-07-15 22:43:23.197506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.508 [2024-07-15 22:43:23.197516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.508 qpair failed and we were unable to recover it. 00:26:59.508 [2024-07-15 22:43:23.197661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.508 [2024-07-15 22:43:23.197676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.508 qpair failed and we were unable to recover it. 00:26:59.508 [2024-07-15 22:43:23.197809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.508 [2024-07-15 22:43:23.197819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.508 qpair failed and we were unable to recover it. 
00:26:59.508 [2024-07-15 22:43:23.198058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.508 [2024-07-15 22:43:23.198069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.508 qpair failed and we were unable to recover it. 00:26:59.508 [2024-07-15 22:43:23.198257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.508 [2024-07-15 22:43:23.198267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.508 qpair failed and we were unable to recover it. 00:26:59.508 [2024-07-15 22:43:23.198389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.508 [2024-07-15 22:43:23.198399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.508 qpair failed and we were unable to recover it. 00:26:59.509 [2024-07-15 22:43:23.198525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.509 [2024-07-15 22:43:23.198535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.509 qpair failed and we were unable to recover it. 00:26:59.509 [2024-07-15 22:43:23.198717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.509 [2024-07-15 22:43:23.198728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.509 qpair failed and we were unable to recover it. 00:26:59.509 [2024-07-15 22:43:23.198849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.509 [2024-07-15 22:43:23.198859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.509 qpair failed and we were unable to recover it. 00:26:59.509 [2024-07-15 22:43:23.199088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.509 [2024-07-15 22:43:23.199098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.509 qpair failed and we were unable to recover it. 00:26:59.509 [2024-07-15 22:43:23.199244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.509 [2024-07-15 22:43:23.199254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.509 qpair failed and we were unable to recover it. 00:26:59.509 [2024-07-15 22:43:23.199368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.509 [2024-07-15 22:43:23.199378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.509 qpair failed and we were unable to recover it. 00:26:59.509 [2024-07-15 22:43:23.199510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.509 [2024-07-15 22:43:23.199520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.509 qpair failed and we were unable to recover it. 
00:26:59.509 [2024-07-15 22:43:23.199754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.509 [2024-07-15 22:43:23.199764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.509 qpair failed and we were unable to recover it. 00:26:59.509 [2024-07-15 22:43:23.199913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.509 [2024-07-15 22:43:23.199923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.509 qpair failed and we were unable to recover it. 00:26:59.509 [2024-07-15 22:43:23.200115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.509 [2024-07-15 22:43:23.200125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.509 qpair failed and we were unable to recover it. 00:26:59.509 [2024-07-15 22:43:23.200274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.509 [2024-07-15 22:43:23.200285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.509 qpair failed and we were unable to recover it. 00:26:59.509 [2024-07-15 22:43:23.200399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.509 [2024-07-15 22:43:23.200410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.509 qpair failed and we were unable to recover it. 00:26:59.509 [2024-07-15 22:43:23.200538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.509 [2024-07-15 22:43:23.200548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.509 qpair failed and we were unable to recover it. 00:26:59.509 [2024-07-15 22:43:23.200748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.509 [2024-07-15 22:43:23.200759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.509 qpair failed and we were unable to recover it. 00:26:59.509 [2024-07-15 22:43:23.200889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.509 [2024-07-15 22:43:23.200899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.509 qpair failed and we were unable to recover it. 00:26:59.509 [2024-07-15 22:43:23.201038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.509 [2024-07-15 22:43:23.201048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.509 qpair failed and we were unable to recover it. 00:26:59.509 [2024-07-15 22:43:23.201160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.509 [2024-07-15 22:43:23.201170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.509 qpair failed and we were unable to recover it. 
00:26:59.509 [2024-07-15 22:43:23.201299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.509 [2024-07-15 22:43:23.201310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.509 qpair failed and we were unable to recover it. 00:26:59.509 [2024-07-15 22:43:23.201422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.509 [2024-07-15 22:43:23.201432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.509 qpair failed and we were unable to recover it. 00:26:59.509 [2024-07-15 22:43:23.201561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.509 [2024-07-15 22:43:23.201571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.509 qpair failed and we were unable to recover it. 00:26:59.509 [2024-07-15 22:43:23.201695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.509 [2024-07-15 22:43:23.201705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.509 qpair failed and we were unable to recover it. 00:26:59.509 [2024-07-15 22:43:23.201849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.509 [2024-07-15 22:43:23.201859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.509 qpair failed and we were unable to recover it. 00:26:59.509 [2024-07-15 22:43:23.201963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.509 [2024-07-15 22:43:23.201973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.509 qpair failed and we were unable to recover it. 00:26:59.509 [2024-07-15 22:43:23.202099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.509 [2024-07-15 22:43:23.202109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.509 qpair failed and we were unable to recover it. 00:26:59.509 [2024-07-15 22:43:23.202248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.509 [2024-07-15 22:43:23.202258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.509 qpair failed and we were unable to recover it. 00:26:59.509 [2024-07-15 22:43:23.202399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.509 [2024-07-15 22:43:23.202409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.509 qpair failed and we were unable to recover it. 00:26:59.509 [2024-07-15 22:43:23.202521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.509 [2024-07-15 22:43:23.202531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.509 qpair failed and we were unable to recover it. 
00:26:59.509 [2024-07-15 22:43:23.202707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.509 [2024-07-15 22:43:23.202717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.509 qpair failed and we were unable to recover it. 00:26:59.509 [2024-07-15 22:43:23.202839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.509 [2024-07-15 22:43:23.202849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.509 qpair failed and we were unable to recover it. 00:26:59.509 [2024-07-15 22:43:23.203034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.509 [2024-07-15 22:43:23.203045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.509 qpair failed and we were unable to recover it. 00:26:59.509 [2024-07-15 22:43:23.203160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.509 [2024-07-15 22:43:23.203170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.509 qpair failed and we were unable to recover it. 00:26:59.509 [2024-07-15 22:43:23.203296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.509 [2024-07-15 22:43:23.203307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.509 qpair failed and we were unable to recover it. 00:26:59.509 [2024-07-15 22:43:23.203444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.509 [2024-07-15 22:43:23.203455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.509 qpair failed and we were unable to recover it. 00:26:59.509 [2024-07-15 22:43:23.203583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.509 [2024-07-15 22:43:23.203593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.509 qpair failed and we were unable to recover it. 00:26:59.509 [2024-07-15 22:43:23.203860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.509 [2024-07-15 22:43:23.203869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.509 qpair failed and we were unable to recover it. 00:26:59.509 [2024-07-15 22:43:23.203984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.509 [2024-07-15 22:43:23.203996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.509 qpair failed and we were unable to recover it. 00:26:59.509 [2024-07-15 22:43:23.204114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.509 [2024-07-15 22:43:23.204126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.509 qpair failed and we were unable to recover it. 
00:26:59.509 [2024-07-15 22:43:23.204241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.509 [2024-07-15 22:43:23.204252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.509 qpair failed and we were unable to recover it. 00:26:59.509 [2024-07-15 22:43:23.204386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.509 [2024-07-15 22:43:23.204397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.509 qpair failed and we were unable to recover it. 00:26:59.509 [2024-07-15 22:43:23.204578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.509 [2024-07-15 22:43:23.204588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.509 qpair failed and we were unable to recover it. 00:26:59.509 [2024-07-15 22:43:23.204705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.509 [2024-07-15 22:43:23.204716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.509 qpair failed and we were unable to recover it. 00:26:59.509 [2024-07-15 22:43:23.204905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.509 [2024-07-15 22:43:23.204915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.509 qpair failed and we were unable to recover it. 00:26:59.509 [2024-07-15 22:43:23.205023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.509 [2024-07-15 22:43:23.205032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.509 qpair failed and we were unable to recover it. 00:26:59.509 [2024-07-15 22:43:23.205222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.509 [2024-07-15 22:43:23.205235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.509 qpair failed and we were unable to recover it. 00:26:59.509 [2024-07-15 22:43:23.205379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.509 [2024-07-15 22:43:23.205389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.509 qpair failed and we were unable to recover it. 00:26:59.509 [2024-07-15 22:43:23.205650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.509 [2024-07-15 22:43:23.205663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.509 qpair failed and we were unable to recover it. 00:26:59.509 [2024-07-15 22:43:23.205794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.509 [2024-07-15 22:43:23.205804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.509 qpair failed and we were unable to recover it. 
00:26:59.509 [2024-07-15 22:43:23.205951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.509 [2024-07-15 22:43:23.205961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.509 qpair failed and we were unable to recover it. 00:26:59.509 [2024-07-15 22:43:23.206142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.509 [2024-07-15 22:43:23.206152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.509 qpair failed and we were unable to recover it. 00:26:59.509 [2024-07-15 22:43:23.206284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.509 [2024-07-15 22:43:23.206295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.509 qpair failed and we were unable to recover it. 00:26:59.509 [2024-07-15 22:43:23.206494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.509 [2024-07-15 22:43:23.206504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.509 qpair failed and we were unable to recover it. 00:26:59.509 [2024-07-15 22:43:23.206641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.509 [2024-07-15 22:43:23.206651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.509 qpair failed and we were unable to recover it. 00:26:59.509 [2024-07-15 22:43:23.206785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.509 [2024-07-15 22:43:23.206796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.509 qpair failed and we were unable to recover it. 00:26:59.509 [2024-07-15 22:43:23.206979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.509 [2024-07-15 22:43:23.206991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.509 qpair failed and we were unable to recover it. 00:26:59.509 [2024-07-15 22:43:23.207168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.509 [2024-07-15 22:43:23.207177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.509 qpair failed and we were unable to recover it. 00:26:59.509 [2024-07-15 22:43:23.207395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.509 [2024-07-15 22:43:23.207405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.509 qpair failed and we were unable to recover it. 00:26:59.509 [2024-07-15 22:43:23.207547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.509 [2024-07-15 22:43:23.207557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.509 qpair failed and we were unable to recover it. 
00:26:59.509 [2024-07-15 22:43:23.207754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.509 [2024-07-15 22:43:23.207766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:59.509 qpair failed and we were unable to recover it.
00:26:59.509 22:43:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:26:59.509 [2024-07-15 22:43:23.208040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.509 [2024-07-15 22:43:23.208051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:59.509 qpair failed and we were unable to recover it.
00:26:59.509 22:43:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:26:59.509 [2024-07-15 22:43:23.208182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.509 [2024-07-15 22:43:23.208196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:59.509 qpair failed and we were unable to recover it.
00:26:59.509 [2024-07-15 22:43:23.208386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.509 [2024-07-15 22:43:23.208397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:59.509 qpair failed and we were unable to recover it.
00:26:59.509 22:43:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:59.509 [2024-07-15 22:43:23.208547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.509 [2024-07-15 22:43:23.208561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:59.509 qpair failed and we were unable to recover it.
00:26:59.509 [2024-07-15 22:43:23.208771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.509 [2024-07-15 22:43:23.208783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:59.509 qpair failed and we were unable to recover it.
00:26:59.509 22:43:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:26:59.510 [2024-07-15 22:43:23.209025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.510 [2024-07-15 22:43:23.209037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:59.510 qpair failed and we were unable to recover it.
00:26:59.510 [2024-07-15 22:43:23.209284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.510 [2024-07-15 22:43:23.209295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:59.510 qpair failed and we were unable to recover it.
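The interleaved "-- #" lines above are bash xtrace output from the test script (nvmf_target_disconnect_tc2) resuming in between the connect retries: it arms a cleanup trap and then creates a 64 MB malloc bdev with 512-byte blocks, named Malloc0, over SPDK's JSON-RPC. A minimal standalone sketch of that same pattern follows; the rpc.py path and the stubbed cleanup body are assumptions here (the real harness runs process_shm and nvmftestfini through its rpc_cmd/trap helpers):

    #!/usr/bin/env bash
    set -e
    cleanup() {
        # The suite's trap body is 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini';
        # stubbed so this sketch is self-contained.
        echo "tearing down nvmf target" >&2
    }
    # Run cleanup on interrupt, termination, or normal exit, as the log shows.
    trap cleanup SIGINT SIGTERM EXIT
    # Same RPC the log issues: 64 MB backing store, 512-byte blocks, name Malloc0.
    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0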
00:26:59.510 [2024-07-15 22:43:23.221852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.510 [2024-07-15 22:43:23.221862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.510 qpair failed and we were unable to recover it. 00:26:59.510 [2024-07-15 22:43:23.222015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.510 [2024-07-15 22:43:23.222025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.510 qpair failed and we were unable to recover it. 00:26:59.510 [2024-07-15 22:43:23.222233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.510 [2024-07-15 22:43:23.222244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.510 qpair failed and we were unable to recover it. 00:26:59.510 [2024-07-15 22:43:23.222394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.510 [2024-07-15 22:43:23.222404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.510 qpair failed and we were unable to recover it. 00:26:59.510 [2024-07-15 22:43:23.222638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.510 [2024-07-15 22:43:23.222649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.510 qpair failed and we were unable to recover it. 00:26:59.510 [2024-07-15 22:43:23.222835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.510 [2024-07-15 22:43:23.222846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.510 qpair failed and we were unable to recover it. 00:26:59.510 [2024-07-15 22:43:23.223131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.510 [2024-07-15 22:43:23.223142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.510 qpair failed and we were unable to recover it. 00:26:59.510 [2024-07-15 22:43:23.223405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.510 [2024-07-15 22:43:23.223417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.510 qpair failed and we were unable to recover it. 00:26:59.510 [2024-07-15 22:43:23.223613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.510 [2024-07-15 22:43:23.223624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.510 qpair failed and we were unable to recover it. 00:26:59.510 [2024-07-15 22:43:23.223766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.510 [2024-07-15 22:43:23.223777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.510 qpair failed and we were unable to recover it. 
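errno 111 in the repeated pair of errors above is ECONNREFUSED on Linux: nothing is accepting TCP connections on 10.0.0.2:4420 at this point, so the NVMe/TCP initiator keeps retrying the qpair connect and logging the same posix.c/nvme_tcp.c pair. A quick way to check the same condition from the test host is a port probe along these lines (an illustrative sketch only, not part of the test script):

    # probe the target's NVMe/TCP listener; fails while nothing listens on 4420
    nc -zv 10.0.0.2 4420 || echo "refused (ECONNREFUSED, errno 111)"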
00:26:59.510 [2024-07-15 22:43:23.223991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.510 [2024-07-15 22:43:23.224002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420
00:26:59.510 qpair failed and we were unable to recover it.
00:26:59.510 [2024-07-15 22:43:23.224193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.510 [2024-07-15 22:43:23.224204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420
00:26:59.510 qpair failed and we were unable to recover it.
00:26:59.510 [2024-07-15 22:43:23.224420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.510 [2024-07-15 22:43:23.224432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420
00:26:59.510 qpair failed and we were unable to recover it.
00:26:59.510 [2024-07-15 22:43:23.224579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.510 [2024-07-15 22:43:23.224589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420
00:26:59.510 qpair failed and we were unable to recover it.
00:26:59.510 [2024-07-15 22:43:23.224730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.510 [2024-07-15 22:43:23.224740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420
00:26:59.510 qpair failed and we were unable to recover it.
00:26:59.510 A controller has encountered a failure and is being reset.
00:26:59.510 [2024-07-15 22:43:23.225008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.510 [2024-07-15 22:43:23.225047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4280000b90 with addr=10.0.0.2, port=4420
00:26:59.511 qpair failed and we were unable to recover it.
00:26:59.511 [2024-07-15 22:43:23.225292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.511 [2024-07-15 22:43:23.225314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420
00:26:59.511 qpair failed and we were unable to recover it.
00:26:59.511 [2024-07-15 22:43:23.225459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.511 [2024-07-15 22:43:23.225474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420
00:26:59.511 qpair failed and we were unable to recover it.
00:26:59.511 [2024-07-15 22:43:23.225660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.511 [2024-07-15 22:43:23.225674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420
00:26:59.511 qpair failed and we were unable to recover it.
00:26:59.511 [2024-07-15 22:43:23.225874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.511 [2024-07-15 22:43:23.225888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420
00:26:59.511 qpair failed and we were unable to recover it.
00:26:59.511 [2024-07-15 22:43:23.226125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.511 [2024-07-15 22:43:23.226138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420
00:26:59.511 qpair failed and we were unable to recover it.
00:26:59.511 [2024-07-15 22:43:23.226332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.511 [2024-07-15 22:43:23.226347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420
00:26:59.511 qpair failed and we were unable to recover it.
00:26:59.511 [2024-07-15 22:43:23.226576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.511 [2024-07-15 22:43:23.226589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420
00:26:59.511 qpair failed and we were unable to recover it.
00:26:59.511 [2024-07-15 22:43:23.226746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.511 [2024-07-15 22:43:23.226760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420
00:26:59.511 qpair failed and we were unable to recover it.
00:26:59.511 [2024-07-15 22:43:23.226972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.511 [2024-07-15 22:43:23.226986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420
00:26:59.511 qpair failed and we were unable to recover it.
00:26:59.511 [2024-07-15 22:43:23.227184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.511 [2024-07-15 22:43:23.227197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420
00:26:59.511 qpair failed and we were unable to recover it.
00:26:59.511 [2024-07-15 22:43:23.227506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.511 [2024-07-15 22:43:23.227520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420
00:26:59.511 qpair failed and we were unable to recover it.
00:26:59.511 [2024-07-15 22:43:23.227669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.511 [2024-07-15 22:43:23.227683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420
00:26:59.511 qpair failed and we were unable to recover it.
00:26:59.511 [2024-07-15 22:43:23.227830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.511 [2024-07-15 22:43:23.227847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420
00:26:59.511 qpair failed and we were unable to recover it.
00:26:59.511 [2024-07-15 22:43:23.228095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.511 [2024-07-15 22:43:23.228108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420
00:26:59.511 qpair failed and we were unable to recover it.
00:26:59.511 [2024-07-15 22:43:23.228304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.511 [2024-07-15 22:43:23.228319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420
00:26:59.511 qpair failed and we were unable to recover it.
00:26:59.511 [2024-07-15 22:43:23.228462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.511 [2024-07-15 22:43:23.228475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420
00:26:59.511 qpair failed and we were unable to recover it.
00:26:59.511 [2024-07-15 22:43:23.228678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.511 [2024-07-15 22:43:23.228693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420
00:26:59.511 qpair failed and we were unable to recover it.
00:26:59.511 [2024-07-15 22:43:23.228842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.511 [2024-07-15 22:43:23.228855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420
00:26:59.511 qpair failed and we were unable to recover it.
00:26:59.511 Malloc0
00:26:59.511 [2024-07-15 22:43:23.229146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.511 [2024-07-15 22:43:23.229159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420
00:26:59.511 qpair failed and we were unable to recover it.
00:26:59.511 [2024-07-15 22:43:23.229372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.511 [2024-07-15 22:43:23.229386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420
00:26:59.511 qpair failed and we were unable to recover it.
00:26:59.511 [2024-07-15 22:43:23.229513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.511 [2024-07-15 22:43:23.229527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420
00:26:59.511 qpair failed and we were unable to recover it.
00:26:59.511 22:43:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:59.511 [2024-07-15 22:43:23.229674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.511 [2024-07-15 22:43:23.229688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420
00:26:59.511 qpair failed and we were unable to recover it.
00:26:59.511 [2024-07-15 22:43:23.229879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.511 [2024-07-15 22:43:23.229892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420
00:26:59.511 qpair failed and we were unable to recover it.
00:26:59.511 22:43:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o
00:26:59.511 [2024-07-15 22:43:23.230161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.511 [2024-07-15 22:43:23.230176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420
00:26:59.511 qpair failed and we were unable to recover it.
00:26:59.511 22:43:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:59.511 [2024-07-15 22:43:23.230423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.511 [2024-07-15 22:43:23.230441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420
00:26:59.511 qpair failed and we were unable to recover it.
00:26:59.511 [2024-07-15 22:43:23.230592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.511 22:43:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:26:59.511 [2024-07-15 22:43:23.230606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420
00:26:59.511 qpair failed and we were unable to recover it.
00:26:59.511 [2024-07-15 22:43:23.230805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.511 [2024-07-15 22:43:23.230819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420
00:26:59.511 qpair failed and we were unable to recover it.
00:26:59.511 [2024-07-15 22:43:23.230960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.511 [2024-07-15 22:43:23.230974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420
00:26:59.511 qpair failed and we were unable to recover it.
00:26:59.511 [2024-07-15 22:43:23.231184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.511 [2024-07-15 22:43:23.231197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420
00:26:59.511 qpair failed and we were unable to recover it.
00:26:59.511 [2024-07-15 22:43:23.231483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.511 [2024-07-15 22:43:23.231497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420
00:26:59.511 qpair failed and we were unable to recover it.
00:26:59.511 [2024-07-15 22:43:23.231630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.511 [2024-07-15 22:43:23.231643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420
00:26:59.511 qpair failed and we were unable to recover it.
00:26:59.511 [2024-07-15 22:43:23.231843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.511 [2024-07-15 22:43:23.231857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420
00:26:59.511 qpair failed and we were unable to recover it.
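The rpc_cmd trace above comes from the autotest harness, which forwards its arguments to SPDK's scripts/rpc.py over the target's RPC socket. Run outside the harness (and assuming the default RPC socket path), the equivalent direct call would be roughly the following sketch, with the arguments copied verbatim from the trace:

    # create the TCP transport on the running nvmf target (arguments as traced)
    scripts/rpc.py nvmf_create_transport -t tcp -o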
00:26:59.511 [2024-07-15 22:43:23.232120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.511 [2024-07-15 22:43:23.232133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420
00:26:59.511 qpair failed and we were unable to recover it.
00:26:59.511 [2024-07-15 22:43:23.232326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.511 [2024-07-15 22:43:23.232340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420
00:26:59.511 qpair failed and we were unable to recover it.
00:26:59.511 [2024-07-15 22:43:23.232588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.511 [2024-07-15 22:43:23.232601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420
00:26:59.511 qpair failed and we were unable to recover it.
00:26:59.511 [2024-07-15 22:43:23.232794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.511 [2024-07-15 22:43:23.232808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420
00:26:59.511 qpair failed and we were unable to recover it.
00:26:59.511 [2024-07-15 22:43:23.233111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.511 [2024-07-15 22:43:23.233124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420
00:26:59.511 qpair failed and we were unable to recover it.
00:26:59.511 [2024-07-15 22:43:23.233387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.511 [2024-07-15 22:43:23.233404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420
00:26:59.511 qpair failed and we were unable to recover it.
00:26:59.511 [2024-07-15 22:43:23.233594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.511 [2024-07-15 22:43:23.233607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420
00:26:59.511 qpair failed and we were unable to recover it.
00:26:59.511 [2024-07-15 22:43:23.233801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.511 [2024-07-15 22:43:23.233815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420
00:26:59.511 qpair failed and we were unable to recover it.
00:26:59.511 [2024-07-15 22:43:23.234008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.511 [2024-07-15 22:43:23.234021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420
00:26:59.511 qpair failed and we were unable to recover it.
00:26:59.511 [2024-07-15 22:43:23.234216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.511 [2024-07-15 22:43:23.234233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420
00:26:59.511 qpair failed and we were unable to recover it.
00:26:59.511 [2024-07-15 22:43:23.234369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.511 [2024-07-15 22:43:23.234382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420
00:26:59.511 qpair failed and we were unable to recover it.
00:26:59.511 [2024-07-15 22:43:23.234587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.511 [2024-07-15 22:43:23.234601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420
00:26:59.511 qpair failed and we were unable to recover it.
00:26:59.511 [2024-07-15 22:43:23.234750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.511 [2024-07-15 22:43:23.234764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420
00:26:59.511 qpair failed and we were unable to recover it.
00:26:59.511 [2024-07-15 22:43:23.234890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.511 [2024-07-15 22:43:23.234904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420
00:26:59.511 qpair failed and we were unable to recover it.
00:26:59.511 [2024-07-15 22:43:23.235099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.511 [2024-07-15 22:43:23.235113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420
00:26:59.511 qpair failed and we were unable to recover it.
00:26:59.511 [2024-07-15 22:43:23.235294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.511 [2024-07-15 22:43:23.235308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420
00:26:59.511 qpair failed and we were unable to recover it.
00:26:59.511 [2024-07-15 22:43:23.235458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.511 [2024-07-15 22:43:23.235472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420
00:26:59.511 qpair failed and we were unable to recover it.
00:26:59.511 [2024-07-15 22:43:23.235695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.511 [2024-07-15 22:43:23.235709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420
00:26:59.511 qpair failed and we were unable to recover it.
00:26:59.511 [2024-07-15 22:43:23.235971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.511 [2024-07-15 22:43:23.235984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420
00:26:59.511 qpair failed and we were unable to recover it.
00:26:59.511 [2024-07-15 22:43:23.236270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.511 [2024-07-15 22:43:23.236285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420
00:26:59.511 qpair failed and we were unable to recover it.
00:26:59.511 [2024-07-15 22:43:23.236497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.511 [2024-07-15 22:43:23.236511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420
00:26:59.511 qpair failed and we were unable to recover it.
00:26:59.511 [2024-07-15 22:43:23.236586] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:26:59.511 [2024-07-15 22:43:23.236712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.511 [2024-07-15 22:43:23.236726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420
00:26:59.511 qpair failed and we were unable to recover it.
00:26:59.511 [2024-07-15 22:43:23.236880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.511 [2024-07-15 22:43:23.236893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420
00:26:59.511 qpair failed and we were unable to recover it.
00:26:59.511 [2024-07-15 22:43:23.237037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.511 [2024-07-15 22:43:23.237050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420
00:26:59.511 qpair failed and we were unable to recover it.
00:26:59.511 [2024-07-15 22:43:23.237289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.511 [2024-07-15 22:43:23.237303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420
00:26:59.511 qpair failed and we were unable to recover it.
00:26:59.511 [2024-07-15 22:43:23.237439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.511 [2024-07-15 22:43:23.237453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420
00:26:59.511 qpair failed and we were unable to recover it.
00:26:59.511 [2024-07-15 22:43:23.237594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.511 [2024-07-15 22:43:23.237606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420
00:26:59.511 qpair failed and we were unable to recover it.
00:26:59.511 [2024-07-15 22:43:23.237747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.511 [2024-07-15 22:43:23.237760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420
00:26:59.511 qpair failed and we were unable to recover it.
00:26:59.511 [2024-07-15 22:43:23.238088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.511 [2024-07-15 22:43:23.238102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420
00:26:59.511 qpair failed and we were unable to recover it.
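The *** TCP Transport Init *** notice above indicates the nvmf_create_transport call took effect on the target. If one wanted to confirm this from the same host, a query along these lines would do it (a sketch, again assuming the default RPC socket):

    # list the transports the nvmf target has registered
    scripts/rpc.py nvmf_get_transports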
00:26:59.511 [2024-07-15 22:43:23.238258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.511 [2024-07-15 22:43:23.238272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420
00:26:59.511 qpair failed and we were unable to recover it.
00:26:59.511 [2024-07-15 22:43:23.238475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.511 [2024-07-15 22:43:23.238488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420
00:26:59.511 qpair failed and we were unable to recover it.
00:26:59.511 [2024-07-15 22:43:23.238765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.511 [2024-07-15 22:43:23.238778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420
00:26:59.511 qpair failed and we were unable to recover it.
00:26:59.511 [2024-07-15 22:43:23.239092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.511 [2024-07-15 22:43:23.239106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420
00:26:59.511 qpair failed and we were unable to recover it.
00:26:59.511 [2024-07-15 22:43:23.239381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.511 [2024-07-15 22:43:23.239395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420
00:26:59.511 qpair failed and we were unable to recover it.
00:26:59.511 [2024-07-15 22:43:23.239554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.511 [2024-07-15 22:43:23.239568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420
00:26:59.511 qpair failed and we were unable to recover it.
00:26:59.511 [2024-07-15 22:43:23.239772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.511 [2024-07-15 22:43:23.239785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420
00:26:59.511 qpair failed and we were unable to recover it.
00:26:59.511 [2024-07-15 22:43:23.240061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.511 [2024-07-15 22:43:23.240074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420
00:26:59.511 qpair failed and we were unable to recover it.
00:26:59.511 [2024-07-15 22:43:23.240306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.512 [2024-07-15 22:43:23.240321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420
00:26:59.512 qpair failed and we were unable to recover it.
00:26:59.512 [2024-07-15 22:43:23.240461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.512 [2024-07-15 22:43:23.240474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420
00:26:59.512 qpair failed and we were unable to recover it.
00:26:59.512 [2024-07-15 22:43:23.240672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.512 [2024-07-15 22:43:23.240686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420
00:26:59.512 qpair failed and we were unable to recover it.
00:26:59.512 [2024-07-15 22:43:23.240919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.512 [2024-07-15 22:43:23.240932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420
00:26:59.512 qpair failed and we were unable to recover it.
00:26:59.512 [2024-07-15 22:43:23.241190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.512 [2024-07-15 22:43:23.241203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420
00:26:59.512 qpair failed and we were unable to recover it.
00:26:59.512 [2024-07-15 22:43:23.241353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.512 [2024-07-15 22:43:23.241367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420
00:26:59.512 qpair failed and we were unable to recover it.
00:26:59.512 [2024-07-15 22:43:23.241560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.512 [2024-07-15 22:43:23.241573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420
00:26:59.512 qpair failed and we were unable to recover it.
00:26:59.512 [2024-07-15 22:43:23.241774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.512 [2024-07-15 22:43:23.241787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4290000b90 with addr=10.0.0.2, port=4420
00:26:59.512 qpair failed and we were unable to recover it.
00:26:59.512 [2024-07-15 22:43:23.241990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.512 [2024-07-15 22:43:23.242013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19bded0 with addr=10.0.0.2, port=4420
00:26:59.512 qpair failed and we were unable to recover it.
00:26:59.512 [2024-07-15 22:43:23.242236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.512 [2024-07-15 22:43:23.242248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:59.512 qpair failed and we were unable to recover it.
00:26:59.512 [2024-07-15 22:43:23.242381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.512 [2024-07-15 22:43:23.242391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:59.512 qpair failed and we were unable to recover it.
00:26:59.512 [2024-07-15 22:43:23.242527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.512 [2024-07-15 22:43:23.242537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:59.512 qpair failed and we were unable to recover it.
00:26:59.512 [2024-07-15 22:43:23.242674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.512 [2024-07-15 22:43:23.242684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:59.512 qpair failed and we were unable to recover it.
00:26:59.512 [2024-07-15 22:43:23.242925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.512 [2024-07-15 22:43:23.242935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:59.512 qpair failed and we were unable to recover it.
00:26:59.512 [2024-07-15 22:43:23.243175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.512 [2024-07-15 22:43:23.243184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:59.512 qpair failed and we were unable to recover it.
00:26:59.512 [2024-07-15 22:43:23.243373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.512 [2024-07-15 22:43:23.243384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:59.512 qpair failed and we were unable to recover it.
00:26:59.512 [2024-07-15 22:43:23.243625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.512 [2024-07-15 22:43:23.243636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:59.512 qpair failed and we were unable to recover it.
00:26:59.512 [2024-07-15 22:43:23.243878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.512 [2024-07-15 22:43:23.243888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:59.512 qpair failed and we were unable to recover it.
00:26:59.512 [2024-07-15 22:43:23.244139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.512 [2024-07-15 22:43:23.244149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:59.512 qpair failed and we were unable to recover it.
00:26:59.512 [2024-07-15 22:43:23.244390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.512 [2024-07-15 22:43:23.244400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:59.512 qpair failed and we were unable to recover it.
00:26:59.512 [2024-07-15 22:43:23.244596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.512 [2024-07-15 22:43:23.244605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:59.512 qpair failed and we were unable to recover it.
00:26:59.512 [2024-07-15 22:43:23.244848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.512 [2024-07-15 22:43:23.244859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:59.512 qpair failed and we were unable to recover it.
00:26:59.512 [2024-07-15 22:43:23.245037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.512 [2024-07-15 22:43:23.245046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:59.512 qpair failed and we were unable to recover it.
00:26:59.512 [2024-07-15 22:43:23.245234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.512 [2024-07-15 22:43:23.245245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:59.512 qpair failed and we were unable to recover it.
00:26:59.512 22:43:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:59.512 [2024-07-15 22:43:23.245442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.512 [2024-07-15 22:43:23.245452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:59.512 qpair failed and we were unable to recover it.
00:26:59.512 [2024-07-15 22:43:23.245587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.512 [2024-07-15 22:43:23.245598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:59.512 qpair failed and we were unable to recover it.
00:26:59.512 22:43:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:26:59.512 [2024-07-15 22:43:23.245792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.512 [2024-07-15 22:43:23.245802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:59.512 qpair failed and we were unable to recover it.
00:26:59.512 22:43:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:59.512 [2024-07-15 22:43:23.246048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.512 [2024-07-15 22:43:23.246059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:59.512 qpair failed and we were unable to recover it.
00:26:59.512 [2024-07-15 22:43:23.246191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.512 [2024-07-15 22:43:23.246201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:59.512 qpair failed and we were unable to recover it.
00:26:59.512 22:43:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:26:59.512 [2024-07-15 22:43:23.246391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.512 [2024-07-15 22:43:23.246402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:59.512 qpair failed and we were unable to recover it.
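Here the harness creates the subsystem the initiator has been trying to reach; rpc_cmd again forwards straight to scripts/rpc.py, so the equivalent direct call would be roughly the sketch below (arguments copied from the trace; -s sets the subsystem serial number and -a allows any host NQN to connect):

    # create NVMe-oF subsystem cnode1 on the target (arguments as traced)
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001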
00:26:59.512 [2024-07-15 22:43:23.246591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.512 [2024-07-15 22:43:23.246601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:59.512 qpair failed and we were unable to recover it.
00:26:59.512 [2024-07-15 22:43:23.246730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.512 [2024-07-15 22:43:23.246740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:59.512 qpair failed and we were unable to recover it.
00:26:59.512 [2024-07-15 22:43:23.246945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.512 [2024-07-15 22:43:23.246954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:59.512 qpair failed and we were unable to recover it.
00:26:59.512 [2024-07-15 22:43:23.247163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.512 [2024-07-15 22:43:23.247172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:59.512 qpair failed and we were unable to recover it.
00:26:59.512 [2024-07-15 22:43:23.247362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.512 [2024-07-15 22:43:23.247373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:59.512 qpair failed and we were unable to recover it.
00:26:59.512 [2024-07-15 22:43:23.247615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.512 [2024-07-15 22:43:23.247626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:59.512 qpair failed and we were unable to recover it.
00:26:59.512 [2024-07-15 22:43:23.247867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.512 [2024-07-15 22:43:23.247877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:59.512 qpair failed and we were unable to recover it.
00:26:59.512 [2024-07-15 22:43:23.248135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.512 [2024-07-15 22:43:23.248144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:59.512 qpair failed and we were unable to recover it.
00:26:59.512 [2024-07-15 22:43:23.248407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.512 [2024-07-15 22:43:23.248417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:59.512 qpair failed and we were unable to recover it.
00:26:59.512 [2024-07-15 22:43:23.248663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.512 [2024-07-15 22:43:23.248673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:59.512 qpair failed and we were unable to recover it.
00:26:59.512 [2024-07-15 22:43:23.248812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.512 [2024-07-15 22:43:23.248822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:59.512 qpair failed and we were unable to recover it.
00:26:59.512 [2024-07-15 22:43:23.248939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.512 [2024-07-15 22:43:23.248949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:59.512 qpair failed and we were unable to recover it.
00:26:59.512 [2024-07-15 22:43:23.249175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.512 [2024-07-15 22:43:23.249184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:59.512 qpair failed and we were unable to recover it.
00:26:59.512 [2024-07-15 22:43:23.249389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.512 [2024-07-15 22:43:23.249399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:59.512 qpair failed and we were unable to recover it.
00:26:59.512 [2024-07-15 22:43:23.249587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.512 [2024-07-15 22:43:23.249597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:59.512 qpair failed and we were unable to recover it.
00:26:59.512 [2024-07-15 22:43:23.249816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.512 [2024-07-15 22:43:23.249826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:59.512 qpair failed and we were unable to recover it.
00:26:59.512 [2024-07-15 22:43:23.250010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.512 [2024-07-15 22:43:23.250022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:59.512 qpair failed and we were unable to recover it.
00:26:59.512 [2024-07-15 22:43:23.250180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.512 [2024-07-15 22:43:23.250190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:59.512 qpair failed and we were unable to recover it.
00:26:59.512 [2024-07-15 22:43:23.250448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.512 [2024-07-15 22:43:23.250458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:59.512 qpair failed and we were unable to recover it.
00:26:59.512 [2024-07-15 22:43:23.250631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.512 [2024-07-15 22:43:23.250640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:59.512 qpair failed and we were unable to recover it.
00:26:59.512 [2024-07-15 22:43:23.250773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.512 [2024-07-15 22:43:23.250783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:59.512 qpair failed and we were unable to recover it.
00:26:59.512 [2024-07-15 22:43:23.250952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.512 [2024-07-15 22:43:23.250962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:59.512 qpair failed and we were unable to recover it.
00:26:59.512 [2024-07-15 22:43:23.251155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.512 [2024-07-15 22:43:23.251165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:59.512 qpair failed and we were unable to recover it.
00:26:59.512 [2024-07-15 22:43:23.251319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.512 [2024-07-15 22:43:23.251330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:59.512 qpair failed and we were unable to recover it.
00:26:59.512 [2024-07-15 22:43:23.251477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.512 [2024-07-15 22:43:23.251487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:59.512 qpair failed and we were unable to recover it.
00:26:59.512 [2024-07-15 22:43:23.251608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.512 [2024-07-15 22:43:23.251617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:59.512 qpair failed and we were unable to recover it.
00:26:59.512 [2024-07-15 22:43:23.251747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.512 [2024-07-15 22:43:23.251757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:59.512 qpair failed and we were unable to recover it.
00:26:59.512 [2024-07-15 22:43:23.252038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.512 [2024-07-15 22:43:23.252047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:59.512 qpair failed and we were unable to recover it.
00:26:59.512 [2024-07-15 22:43:23.252232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.512 [2024-07-15 22:43:23.252242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:59.512 qpair failed and we were unable to recover it.
00:26:59.512 [2024-07-15 22:43:23.252430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.512 [2024-07-15 22:43:23.252440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:59.512 qpair failed and we were unable to recover it.
00:26:59.512 [2024-07-15 22:43:23.252636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.512 [2024-07-15 22:43:23.252645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:59.512 qpair failed and we were unable to recover it.
00:26:59.512 [2024-07-15 22:43:23.252776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.512 [2024-07-15 22:43:23.252785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:59.512 qpair failed and we were unable to recover it.
00:26:59.512 [2024-07-15 22:43:23.252998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.512 [2024-07-15 22:43:23.253009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:59.512 qpair failed and we were unable to recover it.
00:26:59.512 [2024-07-15 22:43:23.253203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.512 [2024-07-15 22:43:23.253214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:59.512 qpair failed and we were unable to recover it.
00:26:59.512 [2024-07-15 22:43:23.253407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.512 [2024-07-15 22:43:23.253416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:59.512 qpair failed and we were unable to recover it.
00:26:59.512 [2024-07-15 22:43:23.253611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.512 [2024-07-15 22:43:23.253620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:59.512 qpair failed and we were unable to recover it.
00:26:59.512 [2024-07-15 22:43:23.253757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.512 [2024-07-15 22:43:23.253766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:59.512 qpair failed and we were unable to recover it.
00:26:59.512 [2024-07-15 22:43:23.253969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.512 [2024-07-15 22:43:23.253978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:59.512 qpair failed and we were unable to recover it.
00:26:59.512 [2024-07-15 22:43:23.254163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.512 [2024-07-15 22:43:23.254173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:59.512 qpair failed and we were unable to recover it.
00:26:59.512 [2024-07-15 22:43:23.254306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.512 [2024-07-15 22:43:23.254316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:59.512 qpair failed and we were unable to recover it.
00:26:59.512 [2024-07-15 22:43:23.254564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.512 [2024-07-15 22:43:23.254574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:59.512 qpair failed and we were unable to recover it.
00:26:59.512 [2024-07-15 22:43:23.254793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.512 [2024-07-15 22:43:23.254803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:59.512 qpair failed and we were unable to recover it.
00:26:59.512 [2024-07-15 22:43:23.255097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.512 [2024-07-15 22:43:23.255108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:59.512 qpair failed and we were unable to recover it.
00:26:59.512 [2024-07-15 22:43:23.255350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.512 [2024-07-15 22:43:23.255361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:59.512 qpair failed and we were unable to recover it.
00:26:59.512 [2024-07-15 22:43:23.255560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.512 [2024-07-15 22:43:23.255570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:59.512 qpair failed and we were unable to recover it.
00:26:59.512 [2024-07-15 22:43:23.255753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.512 [2024-07-15 22:43:23.255763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:59.512 qpair failed and we were unable to recover it.
00:26:59.512 [2024-07-15 22:43:23.255954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.513 [2024-07-15 22:43:23.255964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:59.513 qpair failed and we were unable to recover it.
00:26:59.513 [2024-07-15 22:43:23.256243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.513 [2024-07-15 22:43:23.256253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:59.513 qpair failed and we were unable to recover it.
00:26:59.513 [2024-07-15 22:43:23.256452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.513 [2024-07-15 22:43:23.256462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:59.513 qpair failed and we were unable to recover it.
00:26:59.513 [2024-07-15 22:43:23.256713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.513 [2024-07-15 22:43:23.256722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:59.513 qpair failed and we were unable to recover it.
00:26:59.513 [2024-07-15 22:43:23.256927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.513 [2024-07-15 22:43:23.256937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:59.513 qpair failed and we were unable to recover it.
00:26:59.513 [2024-07-15 22:43:23.257130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.513 [2024-07-15 22:43:23.257139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:59.513 qpair failed and we were unable to recover it.
00:26:59.513 22:43:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:59.513 [2024-07-15 22:43:23.257409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.513 [2024-07-15 22:43:23.257419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:59.513 qpair failed and we were unable to recover it.
00:26:59.513 [2024-07-15 22:43:23.257615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.513 [2024-07-15 22:43:23.257625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:59.513 qpair failed and we were unable to recover it.
00:26:59.513 22:43:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:26:59.513 [2024-07-15 22:43:23.257760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.513 [2024-07-15 22:43:23.257770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:59.513 qpair failed and we were unable to recover it.
00:26:59.513 22:43:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:59.513 [2024-07-15 22:43:23.258042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.513 [2024-07-15 22:43:23.258053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:59.513 qpair failed and we were unable to recover it.
00:26:59.513 22:43:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:26:59.513 [2024-07-15 22:43:23.258243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.513 [2024-07-15 22:43:23.258254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:59.513 qpair failed and we were unable to recover it.
00:26:59.513 [2024-07-15 22:43:23.258449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.513 [2024-07-15 22:43:23.258459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:59.513 qpair failed and we were unable to recover it.
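The rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 trace above is the test harness's wrapper around SPDK's scripts/rpc.py. A minimal standalone sketch of the same target-side step, assuming a running nvmf_tgt on the default RPC socket (the Malloc0 bdev parameters and the serial number here are illustrative, not taken from this run):

    # Create the backing bdev the namespace will expose (64 MiB, 512-byte blocks).
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    # Create the subsystem; -a allows any host NQN, -s sets an illustrative serial.
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    # Attach Malloc0 as a namespace of cnode1, as the traced rpc_cmd does.
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0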
00:26:59.513 [2024-07-15 22:43:23.258651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.513 [2024-07-15 22:43:23.258662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.513 qpair failed and we were unable to recover it. 00:26:59.513 [2024-07-15 22:43:23.258800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.513 [2024-07-15 22:43:23.258810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.513 qpair failed and we were unable to recover it. 00:26:59.513 [2024-07-15 22:43:23.258930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.513 [2024-07-15 22:43:23.258940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.513 qpair failed and we were unable to recover it. 00:26:59.513 [2024-07-15 22:43:23.259137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.513 [2024-07-15 22:43:23.259147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.513 qpair failed and we were unable to recover it. 00:26:59.513 [2024-07-15 22:43:23.259393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.513 [2024-07-15 22:43:23.259403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.513 qpair failed and we were unable to recover it. 00:26:59.513 [2024-07-15 22:43:23.259585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.513 [2024-07-15 22:43:23.259595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.513 qpair failed and we were unable to recover it. 00:26:59.513 [2024-07-15 22:43:23.259788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.513 [2024-07-15 22:43:23.259797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.513 qpair failed and we were unable to recover it. 00:26:59.513 [2024-07-15 22:43:23.260094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.513 [2024-07-15 22:43:23.260104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.513 qpair failed and we were unable to recover it. 00:26:59.513 [2024-07-15 22:43:23.260352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.513 [2024-07-15 22:43:23.260362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.513 qpair failed and we were unable to recover it. 00:26:59.513 [2024-07-15 22:43:23.260536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.513 [2024-07-15 22:43:23.260546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.513 qpair failed and we were unable to recover it. 
00:26:59.513 [2024-07-15 22:43:23.260739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.513 [2024-07-15 22:43:23.260749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.513 qpair failed and we were unable to recover it. 00:26:59.513 [2024-07-15 22:43:23.260950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.513 [2024-07-15 22:43:23.260960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.513 qpair failed and we were unable to recover it. 00:26:59.513 [2024-07-15 22:43:23.261228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.513 [2024-07-15 22:43:23.261238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.513 qpair failed and we were unable to recover it. 00:26:59.513 [2024-07-15 22:43:23.261384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.513 [2024-07-15 22:43:23.261393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.513 qpair failed and we were unable to recover it. 00:26:59.513 [2024-07-15 22:43:23.261637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.513 [2024-07-15 22:43:23.261647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.513 qpair failed and we were unable to recover it. 00:26:59.513 [2024-07-15 22:43:23.261764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.513 [2024-07-15 22:43:23.261774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.513 qpair failed and we were unable to recover it. 00:26:59.513 [2024-07-15 22:43:23.262091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.513 [2024-07-15 22:43:23.262100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.513 qpair failed and we were unable to recover it. 00:26:59.513 [2024-07-15 22:43:23.262390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.513 [2024-07-15 22:43:23.262402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.513 qpair failed and we were unable to recover it. 00:26:59.513 [2024-07-15 22:43:23.262550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.513 [2024-07-15 22:43:23.262560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.513 qpair failed and we were unable to recover it. 00:26:59.513 [2024-07-15 22:43:23.262694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.513 [2024-07-15 22:43:23.262705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.513 qpair failed and we were unable to recover it. 
00:26:59.513 [2024-07-15 22:43:23.262898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.513 [2024-07-15 22:43:23.262907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.513 qpair failed and we were unable to recover it. 00:26:59.513 [2024-07-15 22:43:23.263044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.513 [2024-07-15 22:43:23.263054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.513 qpair failed and we were unable to recover it. 00:26:59.513 [2024-07-15 22:43:23.263331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.513 [2024-07-15 22:43:23.263341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.513 qpair failed and we were unable to recover it. 00:26:59.513 [2024-07-15 22:43:23.263474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.513 [2024-07-15 22:43:23.263487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.513 qpair failed and we were unable to recover it. 00:26:59.513 [2024-07-15 22:43:23.263750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.513 [2024-07-15 22:43:23.263760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.513 qpair failed and we were unable to recover it. 00:26:59.513 [2024-07-15 22:43:23.264063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.513 [2024-07-15 22:43:23.264072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.513 qpair failed and we were unable to recover it. 00:26:59.513 [2024-07-15 22:43:23.264268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.513 [2024-07-15 22:43:23.264279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.513 qpair failed and we were unable to recover it. 00:26:59.513 [2024-07-15 22:43:23.264413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.513 [2024-07-15 22:43:23.264423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.513 qpair failed and we were unable to recover it. 00:26:59.513 [2024-07-15 22:43:23.264614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.513 [2024-07-15 22:43:23.264624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.513 qpair failed and we were unable to recover it. 00:26:59.513 [2024-07-15 22:43:23.264816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.513 [2024-07-15 22:43:23.264826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.513 qpair failed and we were unable to recover it. 
00:26:59.513 [2024-07-15 22:43:23.264960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.513 [2024-07-15 22:43:23.264969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:59.513 qpair failed and we were unable to recover it.
00:26:59.513 [2024-07-15 22:43:23.265090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.513 [2024-07-15 22:43:23.265100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:59.513 qpair failed and we were unable to recover it.
00:26:59.513 [2024-07-15 22:43:23.265306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.513 [2024-07-15 22:43:23.265316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:59.513 qpair failed and we were unable to recover it.
00:26:59.513 22:43:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:59.513 [2024-07-15 22:43:23.265460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.513 [2024-07-15 22:43:23.265470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:59.513 qpair failed and we were unable to recover it.
00:26:59.513 [2024-07-15 22:43:23.265669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.513 [2024-07-15 22:43:23.265679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:59.513 qpair failed and we were unable to recover it.
00:26:59.513 22:43:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:26:59.513 [2024-07-15 22:43:23.265888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.513 [2024-07-15 22:43:23.265898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:59.513 qpair failed and we were unable to recover it.
00:26:59.513 22:43:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:59.513 [2024-07-15 22:43:23.266153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.513 [2024-07-15 22:43:23.266164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:59.513 qpair failed and we were unable to recover it.
00:26:59.513 22:43:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:26:59.513 [2024-07-15 22:43:23.266302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.513 [2024-07-15 22:43:23.266312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:59.513 qpair failed and we were unable to recover it.
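Until the nvmf_subsystem_add_listener call traced above takes effect, nothing is bound to 10.0.0.2:4420, so every connect() from the host side is refused with errno = 111 (ECONNREFUSED), which is exactly the posix.c spam surrounding it. A sketch of the equivalent manual steps plus a shell-level probe, assuming the tcp transport still needs to be created:

    # The transport must exist before a TCP listener can be added.
    scripts/rpc.py nvmf_create_transport -t tcp
    # Bind the subsystem to the address/port the host keeps retrying.
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # Probe from bash: succeeds once the port accepts connections, and fails
    # with "Connection refused" (errno 111) while the listener is still absent.
    bash -c 'exec 3<>/dev/tcp/10.0.0.2/4420' && echo listening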
00:26:59.513 [2024-07-15 22:43:23.266501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.513 [2024-07-15 22:43:23.266511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.513 qpair failed and we were unable to recover it. 00:26:59.513 [2024-07-15 22:43:23.266629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.513 [2024-07-15 22:43:23.266640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.513 qpair failed and we were unable to recover it. 00:26:59.513 [2024-07-15 22:43:23.266829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.513 [2024-07-15 22:43:23.266839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.513 qpair failed and we were unable to recover it. 00:26:59.513 [2024-07-15 22:43:23.267048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.513 [2024-07-15 22:43:23.267057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.513 qpair failed and we were unable to recover it. 00:26:59.513 [2024-07-15 22:43:23.267358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.513 [2024-07-15 22:43:23.267368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.513 qpair failed and we were unable to recover it. 00:26:59.513 [2024-07-15 22:43:23.267558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.513 [2024-07-15 22:43:23.267568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.513 qpair failed and we were unable to recover it. 00:26:59.513 [2024-07-15 22:43:23.267755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.513 [2024-07-15 22:43:23.267766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.513 qpair failed and we were unable to recover it. 00:26:59.513 [2024-07-15 22:43:23.268055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.513 [2024-07-15 22:43:23.268064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.513 qpair failed and we were unable to recover it. 00:26:59.513 [2024-07-15 22:43:23.268282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.513 [2024-07-15 22:43:23.268291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.513 qpair failed and we were unable to recover it. 00:26:59.513 [2024-07-15 22:43:23.268511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:59.513 [2024-07-15 22:43:23.268521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420 00:26:59.513 qpair failed and we were unable to recover it. 
00:26:59.513 [2024-07-15 22:43:23.268800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:59.513 [2024-07-15 22:43:23.268812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4288000b90 with addr=10.0.0.2, port=4420
00:26:59.513 qpair failed and we were unable to recover it.
00:26:59.513 [2024-07-15 22:43:23.268823] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:26:59.513 22:43:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:59.513 22:43:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:26:59.513 22:43:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:26:59.513 22:43:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:26:59.513 [2024-07-15 22:43:23.277217] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:59.513 [2024-07-15 22:43:23.277304] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:59.513 [2024-07-15 22:43:23.277324] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:59.513 [2024-07-15 22:43:23.277331] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:59.513 [2024-07-15 22:43:23.277338] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4288000b90
00:26:59.513 [2024-07-15 22:43:23.277358] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:59.513 qpair failed and we were unable to recover it.
00:26:59.513 22:43:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:26:59.513 22:43:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 160509
00:26:59.513 [2024-07-15 22:43:23.287149] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:59.513 [2024-07-15 22:43:23.287310] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:59.513 [2024-07-15 22:43:23.287327] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:59.513 [2024-07-15 22:43:23.287334] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:59.513 [2024-07-15 22:43:23.287340] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4288000b90
00:26:59.513 [2024-07-15 22:43:23.287357] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:59.513 qpair failed and we were unable to recover it.
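With the listener up, the failure signature changes: the TCP connection now succeeds, but the Fabrics CONNECT for the I/O queue is rejected by ctrlr.c with "Unknown controller ID 0x1", and the host reports sct 1, sc 130. Decoded, sct 1 is the NVMe command-specific status type and sc 130 is 0x82, which appears to correspond to SPDK_NVMF_FABRIC_SC_INVALID_PARAM in SPDK's nvmf_spec.h (an assumption from memory, worth verifying against the tree); that is consistent with the target no longer recognizing controller ID 1 after the forced disconnect. A trivial shell decode of the status fields:

    # Convert the decimal "sct X, sc Y" fields printed by nvme_fabric.c into hex.
    sct=1 sc=130   # values copied from the records above
    printf 'sct=0x%x (command specific), sc=0x%x\n' "$sct" "$sc"
    # sc=0x82: assumed to match SPDK_NVMF_FABRIC_SC_INVALID_PARAM (verify in nvmf_spec.h)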
00:26:59.513 [2024-07-15 22:43:23.297169] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.513 [2024-07-15 22:43:23.297246] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.513 [2024-07-15 22:43:23.297263] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.513 [2024-07-15 22:43:23.297271] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.513 [2024-07-15 22:43:23.297277] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4288000b90 00:26:59.513 [2024-07-15 22:43:23.297293] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:59.513 qpair failed and we were unable to recover it. 00:26:59.513 [2024-07-15 22:43:23.307108] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.513 [2024-07-15 22:43:23.307184] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.513 [2024-07-15 22:43:23.307204] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.513 [2024-07-15 22:43:23.307210] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.513 [2024-07-15 22:43:23.307216] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4288000b90 00:26:59.514 [2024-07-15 22:43:23.307238] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:59.514 qpair failed and we were unable to recover it. 00:26:59.514 [2024-07-15 22:43:23.317126] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.514 [2024-07-15 22:43:23.317198] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.514 [2024-07-15 22:43:23.317214] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.514 [2024-07-15 22:43:23.317221] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.514 [2024-07-15 22:43:23.317231] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4288000b90 00:26:59.514 [2024-07-15 22:43:23.317246] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:59.514 qpair failed and we were unable to recover it. 
00:26:59.514 [2024-07-15 22:43:23.327189] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.514 [2024-07-15 22:43:23.327263] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.514 [2024-07-15 22:43:23.327280] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.514 [2024-07-15 22:43:23.327286] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.514 [2024-07-15 22:43:23.327292] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4288000b90 00:26:59.514 [2024-07-15 22:43:23.327307] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:59.514 qpair failed and we were unable to recover it. 00:26:59.514 [2024-07-15 22:43:23.337191] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.514 [2024-07-15 22:43:23.337318] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.514 [2024-07-15 22:43:23.337334] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.514 [2024-07-15 22:43:23.337340] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.514 [2024-07-15 22:43:23.337346] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4288000b90 00:26:59.514 [2024-07-15 22:43:23.337361] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:59.514 qpair failed and we were unable to recover it. 00:26:59.514 [2024-07-15 22:43:23.347219] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.514 [2024-07-15 22:43:23.347294] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.514 [2024-07-15 22:43:23.347311] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.514 [2024-07-15 22:43:23.347318] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.514 [2024-07-15 22:43:23.347327] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4288000b90 00:26:59.514 [2024-07-15 22:43:23.347343] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:59.514 qpair failed and we were unable to recover it. 
00:26:59.514 [2024-07-15 22:43:23.357234] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.514 [2024-07-15 22:43:23.357306] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.514 [2024-07-15 22:43:23.357322] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.514 [2024-07-15 22:43:23.357329] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.514 [2024-07-15 22:43:23.357335] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4288000b90 00:26:59.514 [2024-07-15 22:43:23.357350] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:59.514 qpair failed and we were unable to recover it. 00:26:59.514 [2024-07-15 22:43:23.367280] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.514 [2024-07-15 22:43:23.367345] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.514 [2024-07-15 22:43:23.367360] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.514 [2024-07-15 22:43:23.367367] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.514 [2024-07-15 22:43:23.367377] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4288000b90 00:26:59.514 [2024-07-15 22:43:23.367391] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:59.514 qpair failed and we were unable to recover it. 00:26:59.514 [2024-07-15 22:43:23.377317] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.514 [2024-07-15 22:43:23.377384] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.514 [2024-07-15 22:43:23.377400] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.514 [2024-07-15 22:43:23.377407] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.514 [2024-07-15 22:43:23.377413] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4288000b90 00:26:59.514 [2024-07-15 22:43:23.377428] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:59.514 qpair failed and we were unable to recover it. 
00:26:59.514 [2024-07-15 22:43:23.387328] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.514 [2024-07-15 22:43:23.387397] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.514 [2024-07-15 22:43:23.387416] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.514 [2024-07-15 22:43:23.387422] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.514 [2024-07-15 22:43:23.387428] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4288000b90 00:26:59.514 [2024-07-15 22:43:23.387443] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:59.514 qpair failed and we were unable to recover it. 00:26:59.514 [2024-07-15 22:43:23.397396] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.514 [2024-07-15 22:43:23.397472] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.514 [2024-07-15 22:43:23.397489] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.514 [2024-07-15 22:43:23.397496] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.514 [2024-07-15 22:43:23.397502] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4288000b90 00:26:59.514 [2024-07-15 22:43:23.397517] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:59.514 qpair failed and we were unable to recover it. 00:26:59.514 [2024-07-15 22:43:23.407425] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.514 [2024-07-15 22:43:23.407491] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.514 [2024-07-15 22:43:23.407508] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.514 [2024-07-15 22:43:23.407515] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.514 [2024-07-15 22:43:23.407521] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4288000b90 00:26:59.514 [2024-07-15 22:43:23.407536] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:59.514 qpair failed and we were unable to recover it. 
00:26:59.514 [2024-07-15 22:43:23.417438] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.514 [2024-07-15 22:43:23.417503] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.514 [2024-07-15 22:43:23.417519] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.514 [2024-07-15 22:43:23.417526] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.514 [2024-07-15 22:43:23.417547] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4288000b90 00:26:59.514 [2024-07-15 22:43:23.417562] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:59.514 qpair failed and we were unable to recover it. 00:26:59.514 [2024-07-15 22:43:23.427435] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.514 [2024-07-15 22:43:23.427509] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.514 [2024-07-15 22:43:23.427526] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.514 [2024-07-15 22:43:23.427532] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.514 [2024-07-15 22:43:23.427538] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4288000b90 00:26:59.514 [2024-07-15 22:43:23.427553] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:59.514 qpair failed and we were unable to recover it. 00:26:59.514 [2024-07-15 22:43:23.437489] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.514 [2024-07-15 22:43:23.437557] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.514 [2024-07-15 22:43:23.437576] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.514 [2024-07-15 22:43:23.437583] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.514 [2024-07-15 22:43:23.437592] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4288000b90 00:26:59.514 [2024-07-15 22:43:23.437607] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:59.514 qpair failed and we were unable to recover it. 
00:26:59.514 [2024-07-15 22:43:23.447559] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.514 [2024-07-15 22:43:23.447622] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.514 [2024-07-15 22:43:23.447638] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.514 [2024-07-15 22:43:23.447645] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.514 [2024-07-15 22:43:23.447652] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4288000b90 00:26:59.514 [2024-07-15 22:43:23.447667] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:59.514 qpair failed and we were unable to recover it. 00:26:59.514 [2024-07-15 22:43:23.457532] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.514 [2024-07-15 22:43:23.457606] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.514 [2024-07-15 22:43:23.457623] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.514 [2024-07-15 22:43:23.457630] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.514 [2024-07-15 22:43:23.457636] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4288000b90 00:26:59.514 [2024-07-15 22:43:23.457651] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:59.514 qpair failed and we were unable to recover it. 00:26:59.774 [2024-07-15 22:43:23.467560] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.774 [2024-07-15 22:43:23.467634] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.774 [2024-07-15 22:43:23.467649] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.774 [2024-07-15 22:43:23.467660] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.774 [2024-07-15 22:43:23.467666] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4288000b90 00:26:59.774 [2024-07-15 22:43:23.467681] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:59.774 qpair failed and we were unable to recover it. 
00:26:59.774 [2024-07-15 22:43:23.477535] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.774 [2024-07-15 22:43:23.477601] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.774 [2024-07-15 22:43:23.477616] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.774 [2024-07-15 22:43:23.477623] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.774 [2024-07-15 22:43:23.477629] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4288000b90 00:26:59.774 [2024-07-15 22:43:23.477643] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:59.774 qpair failed and we were unable to recover it. 00:26:59.774 [2024-07-15 22:43:23.487771] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.774 [2024-07-15 22:43:23.487852] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.774 [2024-07-15 22:43:23.487866] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.774 [2024-07-15 22:43:23.487872] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.774 [2024-07-15 22:43:23.487878] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4288000b90 00:26:59.774 [2024-07-15 22:43:23.487893] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:59.774 qpair failed and we were unable to recover it. 00:26:59.774 [2024-07-15 22:43:23.497700] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.774 [2024-07-15 22:43:23.497771] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.774 [2024-07-15 22:43:23.497788] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.774 [2024-07-15 22:43:23.497795] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.774 [2024-07-15 22:43:23.497801] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4288000b90 00:26:59.774 [2024-07-15 22:43:23.497815] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:59.774 qpair failed and we were unable to recover it. 
00:26:59.774 [2024-07-15 22:43:23.507762] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.774 [2024-07-15 22:43:23.507832] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.774 [2024-07-15 22:43:23.507847] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.774 [2024-07-15 22:43:23.507854] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.774 [2024-07-15 22:43:23.507860] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4288000b90 00:26:59.774 [2024-07-15 22:43:23.507875] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:59.774 qpair failed and we were unable to recover it. 00:26:59.774 [2024-07-15 22:43:23.517759] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.774 [2024-07-15 22:43:23.517844] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.774 [2024-07-15 22:43:23.517860] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.774 [2024-07-15 22:43:23.517867] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.774 [2024-07-15 22:43:23.517873] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4288000b90 00:26:59.774 [2024-07-15 22:43:23.517888] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:59.774 qpair failed and we were unable to recover it. 00:26:59.774 [2024-07-15 22:43:23.527739] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.774 [2024-07-15 22:43:23.527806] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.774 [2024-07-15 22:43:23.527821] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.774 [2024-07-15 22:43:23.527832] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.774 [2024-07-15 22:43:23.527837] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4288000b90 00:26:59.774 [2024-07-15 22:43:23.527853] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:59.774 qpair failed and we were unable to recover it. 
00:26:59.774 [2024-07-15 22:43:23.537768] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.774 [2024-07-15 22:43:23.537839] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.774 [2024-07-15 22:43:23.537854] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.774 [2024-07-15 22:43:23.537861] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.774 [2024-07-15 22:43:23.537870] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4288000b90 00:26:59.774 [2024-07-15 22:43:23.537884] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:59.774 qpair failed and we were unable to recover it. 00:26:59.774 [2024-07-15 22:43:23.547792] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.774 [2024-07-15 22:43:23.547863] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.774 [2024-07-15 22:43:23.547878] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.774 [2024-07-15 22:43:23.547885] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.774 [2024-07-15 22:43:23.547891] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4288000b90 00:26:59.774 [2024-07-15 22:43:23.547906] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:59.775 qpair failed and we were unable to recover it. 00:26:59.775 [2024-07-15 22:43:23.557846] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.775 [2024-07-15 22:43:23.557916] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.775 [2024-07-15 22:43:23.557934] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.775 [2024-07-15 22:43:23.557941] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.775 [2024-07-15 22:43:23.557947] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4288000b90 00:26:59.775 [2024-07-15 22:43:23.557961] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:59.775 qpair failed and we were unable to recover it. 
00:26:59.775 [2024-07-15 22:43:23.567905] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:59.775 [2024-07-15 22:43:23.567978] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:59.775 [2024-07-15 22:43:23.567993] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:59.775 [2024-07-15 22:43:23.568000] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:59.775 [2024-07-15 22:43:23.568006] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4288000b90
00:26:59.775 [2024-07-15 22:43:23.568021] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:59.775 qpair failed and we were unable to recover it.
00:26:59.775 [2024-07-15 22:43:23.577893] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:59.775 [2024-07-15 22:43:23.577957] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:59.775 [2024-07-15 22:43:23.577973] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:59.775 [2024-07-15 22:43:23.577980] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:59.775 [2024-07-15 22:43:23.577986] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4288000b90
00:26:59.775 [2024-07-15 22:43:23.578005] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:59.775 qpair failed and we were unable to recover it.
00:26:59.775 [2024-07-15 22:43:23.587894] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:59.775 [2024-07-15 22:43:23.587960] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:59.775 [2024-07-15 22:43:23.587975] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:59.775 [2024-07-15 22:43:23.587981] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:59.775 [2024-07-15 22:43:23.587987] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4288000b90
00:26:59.775 [2024-07-15 22:43:23.588002] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:59.775 qpair failed and we were unable to recover it.
00:26:59.775 [2024-07-15 22:43:23.597911] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:59.775 [2024-07-15 22:43:23.597978] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:59.775 [2024-07-15 22:43:23.597993] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:59.775 [2024-07-15 22:43:23.598000] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:59.775 [2024-07-15 22:43:23.598006] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4288000b90
00:26:59.775 [2024-07-15 22:43:23.598024] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:59.775 qpair failed and we were unable to recover it.
00:26:59.775 [2024-07-15 22:43:23.607953] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:59.775 [2024-07-15 22:43:23.608016] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:59.775 [2024-07-15 22:43:23.608031] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:59.775 [2024-07-15 22:43:23.608038] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:59.775 [2024-07-15 22:43:23.608044] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4288000b90
00:26:59.775 [2024-07-15 22:43:23.608060] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:59.775 qpair failed and we were unable to recover it.
00:26:59.775 [2024-07-15 22:43:23.617983] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:59.775 [2024-07-15 22:43:23.618047] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:59.775 [2024-07-15 22:43:23.618068] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:59.775 [2024-07-15 22:43:23.618075] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:59.775 [2024-07-15 22:43:23.618081] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4288000b90
00:26:59.775 [2024-07-15 22:43:23.618096] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:59.775 qpair failed and we were unable to recover it.
00:26:59.775 [2024-07-15 22:43:23.628002] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:59.775 [2024-07-15 22:43:23.628088] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:59.775 [2024-07-15 22:43:23.628103] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:59.775 [2024-07-15 22:43:23.628110] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:59.775 [2024-07-15 22:43:23.628116] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4288000b90
00:26:59.775 [2024-07-15 22:43:23.628131] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:59.775 qpair failed and we were unable to recover it.
00:26:59.775 [2024-07-15 22:43:23.638049] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:59.775 [2024-07-15 22:43:23.638118] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:59.775 [2024-07-15 22:43:23.638133] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:59.775 [2024-07-15 22:43:23.638139] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:59.775 [2024-07-15 22:43:23.638149] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4288000b90
00:26:59.775 [2024-07-15 22:43:23.638163] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:59.775 qpair failed and we were unable to recover it.
00:26:59.775 [2024-07-15 22:43:23.648050] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:59.775 [2024-07-15 22:43:23.648117] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:59.775 [2024-07-15 22:43:23.648135] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:59.775 [2024-07-15 22:43:23.648142] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:59.775 [2024-07-15 22:43:23.648148] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4288000b90
00:26:59.775 [2024-07-15 22:43:23.648163] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:59.775 qpair failed and we were unable to recover it.
00:26:59.775 [2024-07-15 22:43:23.658102] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:59.775 [2024-07-15 22:43:23.658164] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:59.775 [2024-07-15 22:43:23.658179] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:59.775 [2024-07-15 22:43:23.658186] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:59.775 [2024-07-15 22:43:23.658192] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4288000b90
00:26:59.775 [2024-07-15 22:43:23.658209] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:59.775 qpair failed and we were unable to recover it.
00:26:59.775 [2024-07-15 22:43:23.668113] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:59.775 [2024-07-15 22:43:23.668182] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:59.775 [2024-07-15 22:43:23.668200] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:59.775 [2024-07-15 22:43:23.668207] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:59.775 [2024-07-15 22:43:23.668213] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4288000b90
00:26:59.775 [2024-07-15 22:43:23.668234] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:59.775 qpair failed and we were unable to recover it.
00:26:59.775 [2024-07-15 22:43:23.678160] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:59.775 [2024-07-15 22:43:23.678240] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:59.775 [2024-07-15 22:43:23.678256] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:59.775 [2024-07-15 22:43:23.678263] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:59.775 [2024-07-15 22:43:23.678269] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4288000b90
00:26:59.775 [2024-07-15 22:43:23.678284] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:59.775 qpair failed and we were unable to recover it.
00:26:59.775 [2024-07-15 22:43:23.688189] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:59.775 [2024-07-15 22:43:23.688270] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:59.775 [2024-07-15 22:43:23.688285] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:59.775 [2024-07-15 22:43:23.688292] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:59.775 [2024-07-15 22:43:23.688298] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4288000b90
00:26:59.775 [2024-07-15 22:43:23.688313] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:59.775 qpair failed and we were unable to recover it.
00:26:59.775 [2024-07-15 22:43:23.698221] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:59.776 [2024-07-15 22:43:23.698297] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:59.776 [2024-07-15 22:43:23.698314] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:59.776 [2024-07-15 22:43:23.698321] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:59.776 [2024-07-15 22:43:23.698326] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4288000b90
00:26:59.776 [2024-07-15 22:43:23.698342] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:59.776 qpair failed and we were unable to recover it.
00:26:59.776 [2024-07-15 22:43:23.708230] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:59.776 [2024-07-15 22:43:23.708298] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:59.776 [2024-07-15 22:43:23.708317] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:59.776 [2024-07-15 22:43:23.708324] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:59.776 [2024-07-15 22:43:23.708330] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4288000b90
00:26:59.776 [2024-07-15 22:43:23.708345] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:59.776 qpair failed and we were unable to recover it.
00:26:59.776 [2024-07-15 22:43:23.718266] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:59.776 [2024-07-15 22:43:23.718334] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:59.776 [2024-07-15 22:43:23.718349] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:59.776 [2024-07-15 22:43:23.718356] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:59.776 [2024-07-15 22:43:23.718365] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4288000b90
00:26:59.776 [2024-07-15 22:43:23.718380] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:59.776 qpair failed and we were unable to recover it.
00:26:59.776 [2024-07-15 22:43:23.728349] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:59.776 [2024-07-15 22:43:23.728450] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:59.776 [2024-07-15 22:43:23.728466] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:59.776 [2024-07-15 22:43:23.728473] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:59.776 [2024-07-15 22:43:23.728479] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4288000b90
00:26:59.776 [2024-07-15 22:43:23.728494] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:59.776 qpair failed and we were unable to recover it.
00:26:59.776 [2024-07-15 22:43:23.738339] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:59.776 [2024-07-15 22:43:23.738421] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:59.776 [2024-07-15 22:43:23.738437] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:59.776 [2024-07-15 22:43:23.738444] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:59.776 [2024-07-15 22:43:23.738450] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4288000b90
00:26:59.776 [2024-07-15 22:43:23.738464] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:26:59.776 qpair failed and we were unable to recover it.
00:27:00.036 [2024-07-15 22:43:23.748351] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:00.036 [2024-07-15 22:43:23.748425] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:00.036 [2024-07-15 22:43:23.748444] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:00.036 [2024-07-15 22:43:23.748451] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:00.036 [2024-07-15 22:43:23.748465] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4288000b90
00:27:00.036 [2024-07-15 22:43:23.748479] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:00.036 qpair failed and we were unable to recover it.
00:27:00.036 [2024-07-15 22:43:23.758383] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:00.036 [2024-07-15 22:43:23.758459] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:00.036 [2024-07-15 22:43:23.758475] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:00.036 [2024-07-15 22:43:23.758481] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:00.036 [2024-07-15 22:43:23.758489] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4288000b90
00:27:00.036 [2024-07-15 22:43:23.758504] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:00.036 qpair failed and we were unable to recover it.
00:27:00.036 [2024-07-15 22:43:23.768433] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:00.036 [2024-07-15 22:43:23.768512] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:00.036 [2024-07-15 22:43:23.768528] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:00.036 [2024-07-15 22:43:23.768535] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:00.036 [2024-07-15 22:43:23.768541] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4288000b90
00:27:00.036 [2024-07-15 22:43:23.768556] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:00.036 qpair failed and we were unable to recover it.
00:27:00.036 [2024-07-15 22:43:23.778438] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:00.036 [2024-07-15 22:43:23.778507] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:00.036 [2024-07-15 22:43:23.778523] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:00.036 [2024-07-15 22:43:23.778530] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:00.036 [2024-07-15 22:43:23.778536] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4288000b90
00:27:00.036 [2024-07-15 22:43:23.778551] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:00.036 qpair failed and we were unable to recover it.
00:27:00.036 [2024-07-15 22:43:23.788463] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:00.036 [2024-07-15 22:43:23.788534] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:00.036 [2024-07-15 22:43:23.788549] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:00.036 [2024-07-15 22:43:23.788556] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:00.036 [2024-07-15 22:43:23.788562] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4288000b90
00:27:00.036 [2024-07-15 22:43:23.788577] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:00.036 qpair failed and we were unable to recover it.
00:27:00.036 [2024-07-15 22:43:23.798538] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:00.037 [2024-07-15 22:43:23.798648] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:00.037 [2024-07-15 22:43:23.798664] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:00.037 [2024-07-15 22:43:23.798671] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:00.037 [2024-07-15 22:43:23.798677] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4288000b90
00:27:00.037 [2024-07-15 22:43:23.798692] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:00.037 qpair failed and we were unable to recover it.
00:27:00.037 [2024-07-15 22:43:23.808535] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:00.037 [2024-07-15 22:43:23.808614] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:00.037 [2024-07-15 22:43:23.808630] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:00.037 [2024-07-15 22:43:23.808637] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:00.037 [2024-07-15 22:43:23.808643] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4288000b90
00:27:00.037 [2024-07-15 22:43:23.808657] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:00.037 qpair failed and we were unable to recover it.
00:27:00.037 [2024-07-15 22:43:23.818556] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:00.037 [2024-07-15 22:43:23.818633] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:00.037 [2024-07-15 22:43:23.818650] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:00.037 [2024-07-15 22:43:23.818657] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:00.037 [2024-07-15 22:43:23.818662] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4288000b90
00:27:00.037 [2024-07-15 22:43:23.818677] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:00.037 qpair failed and we were unable to recover it.
00:27:00.037 [2024-07-15 22:43:23.828553] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:00.037 [2024-07-15 22:43:23.828626] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:00.037 [2024-07-15 22:43:23.828642] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:00.037 [2024-07-15 22:43:23.828649] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:00.037 [2024-07-15 22:43:23.828656] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4288000b90
00:27:00.037 [2024-07-15 22:43:23.828671] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:00.037 qpair failed and we were unable to recover it.
00:27:00.037 [2024-07-15 22:43:23.838631] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:00.037 [2024-07-15 22:43:23.838708] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:00.037 [2024-07-15 22:43:23.838724] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:00.037 [2024-07-15 22:43:23.838731] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:00.037 [2024-07-15 22:43:23.838740] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4288000b90
00:27:00.037 [2024-07-15 22:43:23.838756] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:00.037 qpair failed and we were unable to recover it.
00:27:00.037 [2024-07-15 22:43:23.848635] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:00.037 [2024-07-15 22:43:23.848706] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:00.037 [2024-07-15 22:43:23.848722] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:00.037 [2024-07-15 22:43:23.848729] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:00.037 [2024-07-15 22:43:23.848735] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4288000b90
00:27:00.037 [2024-07-15 22:43:23.848749] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:00.037 qpair failed and we were unable to recover it.
00:27:00.037 [2024-07-15 22:43:23.858605] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:00.037 [2024-07-15 22:43:23.858673] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:00.037 [2024-07-15 22:43:23.858689] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:00.037 [2024-07-15 22:43:23.858696] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:00.037 [2024-07-15 22:43:23.858702] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4288000b90
00:27:00.037 [2024-07-15 22:43:23.858716] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:00.037 qpair failed and we were unable to recover it.
00:27:00.037 [2024-07-15 22:43:23.868690] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:00.037 [2024-07-15 22:43:23.868757] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:00.037 [2024-07-15 22:43:23.868772] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:00.037 [2024-07-15 22:43:23.868779] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:00.037 [2024-07-15 22:43:23.868786] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4288000b90
00:27:00.037 [2024-07-15 22:43:23.868804] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:00.037 qpair failed and we were unable to recover it.
00:27:00.037 [2024-07-15 22:43:23.878739] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:00.037 [2024-07-15 22:43:23.878801] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:00.037 [2024-07-15 22:43:23.878816] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:00.037 [2024-07-15 22:43:23.878823] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:00.037 [2024-07-15 22:43:23.878829] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4288000b90
00:27:00.037 [2024-07-15 22:43:23.878844] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:00.037 qpair failed and we were unable to recover it.
00:27:00.037 [2024-07-15 22:43:23.888686] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:00.037 [2024-07-15 22:43:23.888751] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:00.037 [2024-07-15 22:43:23.888770] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:00.037 [2024-07-15 22:43:23.888776] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:00.037 [2024-07-15 22:43:23.888782] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4288000b90
00:27:00.037 [2024-07-15 22:43:23.888797] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:00.037 qpair failed and we were unable to recover it.
00:27:00.037 [2024-07-15 22:43:23.898791] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:00.037 [2024-07-15 22:43:23.898858] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:00.037 [2024-07-15 22:43:23.898874] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:00.037 [2024-07-15 22:43:23.898881] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:00.037 [2024-07-15 22:43:23.898887] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4288000b90
00:27:00.037 [2024-07-15 22:43:23.898902] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:00.037 qpair failed and we were unable to recover it.
00:27:00.037 [2024-07-15 22:43:23.908803] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:00.037 [2024-07-15 22:43:23.908870] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:00.037 [2024-07-15 22:43:23.908885] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:00.037 [2024-07-15 22:43:23.908892] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:00.037 [2024-07-15 22:43:23.908898] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4288000b90
00:27:00.037 [2024-07-15 22:43:23.908912] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:00.037 qpair failed and we were unable to recover it.
00:27:00.037 [2024-07-15 22:43:23.918844] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:00.037 [2024-07-15 22:43:23.918921] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:00.037 [2024-07-15 22:43:23.918937] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:00.037 [2024-07-15 22:43:23.918943] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:00.037 [2024-07-15 22:43:23.918949] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4288000b90
00:27:00.037 [2024-07-15 22:43:23.918965] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:00.037 qpair failed and we were unable to recover it.
00:27:00.037 [2024-07-15 22:43:23.928875] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:00.037 [2024-07-15 22:43:23.928938] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:00.037 [2024-07-15 22:43:23.928954] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:00.037 [2024-07-15 22:43:23.928964] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:00.037 [2024-07-15 22:43:23.928970] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4288000b90
00:27:00.037 [2024-07-15 22:43:23.928988] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:00.037 qpair failed and we were unable to recover it.
00:27:00.037 [2024-07-15 22:43:23.938911] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:00.037 [2024-07-15 22:43:23.938986] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:00.037 [2024-07-15 22:43:23.939002] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:00.038 [2024-07-15 22:43:23.939009] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:00.038 [2024-07-15 22:43:23.939015] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4288000b90
00:27:00.038 [2024-07-15 22:43:23.939031] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:00.038 qpair failed and we were unable to recover it.
00:27:00.038 [2024-07-15 22:43:23.948951] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:00.038 [2024-07-15 22:43:23.949023] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:00.038 [2024-07-15 22:43:23.949039] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:00.038 [2024-07-15 22:43:23.949045] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:00.038 [2024-07-15 22:43:23.949051] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4288000b90
00:27:00.038 [2024-07-15 22:43:23.949065] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:00.038 qpair failed and we were unable to recover it.
00:27:00.038 [2024-07-15 22:43:23.959006] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:00.038 [2024-07-15 22:43:23.959087] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:00.038 [2024-07-15 22:43:23.959102] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:00.038 [2024-07-15 22:43:23.959109] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:00.038 [2024-07-15 22:43:23.959115] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4288000b90
00:27:00.038 [2024-07-15 22:43:23.959130] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:00.038 qpair failed and we were unable to recover it.
00:27:00.038 [2024-07-15 22:43:23.968993] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:00.038 [2024-07-15 22:43:23.969066] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:00.038 [2024-07-15 22:43:23.969082] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:00.038 [2024-07-15 22:43:23.969090] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:00.038 [2024-07-15 22:43:23.969096] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4288000b90
00:27:00.038 [2024-07-15 22:43:23.969110] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:00.038 qpair failed and we were unable to recover it.
00:27:00.038 [2024-07-15 22:43:23.979051] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:00.038 [2024-07-15 22:43:23.979115] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:00.038 [2024-07-15 22:43:23.979129] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:00.038 [2024-07-15 22:43:23.979136] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:00.038 [2024-07-15 22:43:23.979142] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4288000b90
00:27:00.038 [2024-07-15 22:43:23.979157] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:00.038 qpair failed and we were unable to recover it.
00:27:00.038 [2024-07-15 22:43:23.989032] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:00.038 [2024-07-15 22:43:23.989100] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:00.038 [2024-07-15 22:43:23.989116] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:00.038 [2024-07-15 22:43:23.989122] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:00.038 [2024-07-15 22:43:23.989129] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4288000b90
00:27:00.038 [2024-07-15 22:43:23.989144] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:00.038 qpair failed and we were unable to recover it.
00:27:00.038 [2024-07-15 22:43:23.999075] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:00.038 [2024-07-15 22:43:23.999144] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:00.038 [2024-07-15 22:43:23.999163] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:00.038 [2024-07-15 22:43:23.999170] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:00.038 [2024-07-15 22:43:23.999176] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4288000b90
00:27:00.038 [2024-07-15 22:43:23.999191] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:00.298 qpair failed and we were unable to recover it.
00:27:00.298 [2024-07-15 22:43:24.009057] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:00.298 [2024-07-15 22:43:24.009134] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:00.298 [2024-07-15 22:43:24.009150] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:00.298 [2024-07-15 22:43:24.009157] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:00.298 [2024-07-15 22:43:24.009164] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4288000b90
00:27:00.298 [2024-07-15 22:43:24.009178] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:00.298 qpair failed and we were unable to recover it.
00:27:00.298 [2024-07-15 22:43:24.019075] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:00.298 [2024-07-15 22:43:24.019140] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:00.298 [2024-07-15 22:43:24.019159] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:00.298 [2024-07-15 22:43:24.019166] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:00.298 [2024-07-15 22:43:24.019172] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4288000b90
00:27:00.298 [2024-07-15 22:43:24.019187] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:00.298 qpair failed and we were unable to recover it.
00:27:00.298 [2024-07-15 22:43:24.029149] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:00.298 [2024-07-15 22:43:24.029216] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:00.298 [2024-07-15 22:43:24.029236] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:00.298 [2024-07-15 22:43:24.029242] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:00.298 [2024-07-15 22:43:24.029248] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4288000b90
00:27:00.298 [2024-07-15 22:43:24.029262] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:00.298 qpair failed and we were unable to recover it.
00:27:00.298 [2024-07-15 22:43:24.039107] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:00.298 [2024-07-15 22:43:24.039173] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:00.298 [2024-07-15 22:43:24.039188] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:00.298 [2024-07-15 22:43:24.039194] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:00.298 [2024-07-15 22:43:24.039200] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4288000b90
00:27:00.299 [2024-07-15 22:43:24.039215] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:00.299 qpair failed and we were unable to recover it.
00:27:00.299 [2024-07-15 22:43:24.049201] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:00.299 [2024-07-15 22:43:24.049288] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:00.299 [2024-07-15 22:43:24.049304] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:00.299 [2024-07-15 22:43:24.049310] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:00.299 [2024-07-15 22:43:24.049316] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4288000b90
00:27:00.299 [2024-07-15 22:43:24.049330] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:00.299 qpair failed and we were unable to recover it.
00:27:00.299 [2024-07-15 22:43:24.059276] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:00.299 [2024-07-15 22:43:24.059343] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:00.299 [2024-07-15 22:43:24.059358] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:00.299 [2024-07-15 22:43:24.059365] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:00.299 [2024-07-15 22:43:24.059371] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4288000b90
00:27:00.299 [2024-07-15 22:43:24.059391] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:00.299 qpair failed and we were unable to recover it.
00:27:00.299 [2024-07-15 22:43:24.069290] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:00.299 [2024-07-15 22:43:24.069356] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:00.299 [2024-07-15 22:43:24.069372] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:00.299 [2024-07-15 22:43:24.069379] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:00.299 [2024-07-15 22:43:24.069389] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4288000b90
00:27:00.299 [2024-07-15 22:43:24.069405] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:00.299 qpair failed and we were unable to recover it.
00:27:00.299 [2024-07-15 22:43:24.079288] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:00.299 [2024-07-15 22:43:24.079360] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:00.299 [2024-07-15 22:43:24.079376] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:00.299 [2024-07-15 22:43:24.079383] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:00.299 [2024-07-15 22:43:24.079389] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4288000b90
00:27:00.299 [2024-07-15 22:43:24.079404] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:00.299 qpair failed and we were unable to recover it.
00:27:00.299 [2024-07-15 22:43:24.089341] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:00.299 [2024-07-15 22:43:24.089422] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:00.299 [2024-07-15 22:43:24.089438] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:00.299 [2024-07-15 22:43:24.089445] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:00.299 [2024-07-15 22:43:24.089452] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4288000b90
00:27:00.299 [2024-07-15 22:43:24.089466] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:00.299 qpair failed and we were unable to recover it.
00:27:00.299 [2024-07-15 22:43:24.099419] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:00.299 [2024-07-15 22:43:24.099529] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:00.299 [2024-07-15 22:43:24.099544] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:00.299 [2024-07-15 22:43:24.099551] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:00.299 [2024-07-15 22:43:24.099557] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4288000b90
00:27:00.299 [2024-07-15 22:43:24.099572] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:00.299 qpair failed and we were unable to recover it.
00:27:00.299 [2024-07-15 22:43:24.109391] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:00.299 [2024-07-15 22:43:24.109457] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:00.299 [2024-07-15 22:43:24.109479] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:00.299 [2024-07-15 22:43:24.109486] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:00.299 [2024-07-15 22:43:24.109492] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4288000b90
00:27:00.299 [2024-07-15 22:43:24.109506] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:00.299 qpair failed and we were unable to recover it.
00:27:00.299 [2024-07-15 22:43:24.119454] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:00.299 [2024-07-15 22:43:24.119523] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:00.299 [2024-07-15 22:43:24.119538] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:00.299 [2024-07-15 22:43:24.119544] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:00.299 [2024-07-15 22:43:24.119550] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4288000b90
00:27:00.299 [2024-07-15 22:43:24.119568] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:00.299 qpair failed and we were unable to recover it.
00:27:00.299 [2024-07-15 22:43:24.129464] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:00.299 [2024-07-15 22:43:24.129534] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:00.299 [2024-07-15 22:43:24.129549] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:00.299 [2024-07-15 22:43:24.129556] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:00.299 [2024-07-15 22:43:24.129562] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4288000b90
00:27:00.299 [2024-07-15 22:43:24.129581] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:00.299 qpair failed and we were unable to recover it.
00:27:00.299 [2024-07-15 22:43:24.139398] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.299 [2024-07-15 22:43:24.139467] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.299 [2024-07-15 22:43:24.139483] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.299 [2024-07-15 22:43:24.139490] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.299 [2024-07-15 22:43:24.139496] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4288000b90 00:27:00.299 [2024-07-15 22:43:24.139510] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:00.299 qpair failed and we were unable to recover it. 00:27:00.299 [2024-07-15 22:43:24.149510] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.299 [2024-07-15 22:43:24.149637] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.299 [2024-07-15 22:43:24.149653] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.299 [2024-07-15 22:43:24.149660] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.299 [2024-07-15 22:43:24.149666] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4288000b90 00:27:00.299 [2024-07-15 22:43:24.149685] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:00.299 qpair failed and we were unable to recover it. 00:27:00.299 [2024-07-15 22:43:24.159540] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.299 [2024-07-15 22:43:24.159620] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.299 [2024-07-15 22:43:24.159636] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.299 [2024-07-15 22:43:24.159643] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.299 [2024-07-15 22:43:24.159649] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4288000b90 00:27:00.299 [2024-07-15 22:43:24.159664] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:00.299 qpair failed and we were unable to recover it. 
00:27:00.299 [2024-07-15 22:43:24.169569] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.299 [2024-07-15 22:43:24.169637] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.299 [2024-07-15 22:43:24.169652] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.299 [2024-07-15 22:43:24.169659] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.299 [2024-07-15 22:43:24.169665] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4288000b90 00:27:00.299 [2024-07-15 22:43:24.169679] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:00.299 qpair failed and we were unable to recover it. 00:27:00.299 [2024-07-15 22:43:24.179595] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.299 [2024-07-15 22:43:24.179665] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.299 [2024-07-15 22:43:24.179681] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.299 [2024-07-15 22:43:24.179687] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.299 [2024-07-15 22:43:24.179694] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4288000b90 00:27:00.299 [2024-07-15 22:43:24.179708] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:00.299 qpair failed and we were unable to recover it. 00:27:00.299 [2024-07-15 22:43:24.189661] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.300 [2024-07-15 22:43:24.189743] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.300 [2024-07-15 22:43:24.189760] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.300 [2024-07-15 22:43:24.189766] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.300 [2024-07-15 22:43:24.189772] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4288000b90 00:27:00.300 [2024-07-15 22:43:24.189787] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:00.300 qpair failed and we were unable to recover it. 
00:27:00.300 [2024-07-15 22:43:24.199674] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.300 [2024-07-15 22:43:24.199796] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.300 [2024-07-15 22:43:24.199813] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.300 [2024-07-15 22:43:24.199820] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.300 [2024-07-15 22:43:24.199826] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4288000b90 00:27:00.300 [2024-07-15 22:43:24.199843] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:00.300 qpair failed and we were unable to recover it. 00:27:00.300 [2024-07-15 22:43:24.209723] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.300 [2024-07-15 22:43:24.209791] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.300 [2024-07-15 22:43:24.209807] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.300 [2024-07-15 22:43:24.209814] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.300 [2024-07-15 22:43:24.209819] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4288000b90 00:27:00.300 [2024-07-15 22:43:24.209834] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:00.300 qpair failed and we were unable to recover it. 00:27:00.300 [2024-07-15 22:43:24.219711] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.300 [2024-07-15 22:43:24.219777] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.300 [2024-07-15 22:43:24.219792] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.300 [2024-07-15 22:43:24.219798] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.300 [2024-07-15 22:43:24.219809] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4288000b90 00:27:00.300 [2024-07-15 22:43:24.219822] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:00.300 qpair failed and we were unable to recover it. 
00:27:00.300 [2024-07-15 22:43:24.229741] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.300 [2024-07-15 22:43:24.229857] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.300 [2024-07-15 22:43:24.229874] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.300 [2024-07-15 22:43:24.229882] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.300 [2024-07-15 22:43:24.229888] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4288000b90 00:27:00.300 [2024-07-15 22:43:24.229903] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:00.300 qpair failed and we were unable to recover it. 00:27:00.300 [2024-07-15 22:43:24.239761] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.300 [2024-07-15 22:43:24.239837] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.300 [2024-07-15 22:43:24.239853] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.300 [2024-07-15 22:43:24.239860] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.300 [2024-07-15 22:43:24.239869] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4288000b90 00:27:00.300 [2024-07-15 22:43:24.239884] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:00.300 qpair failed and we were unable to recover it. 00:27:00.300 [2024-07-15 22:43:24.249751] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.300 [2024-07-15 22:43:24.249824] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.300 [2024-07-15 22:43:24.249839] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.300 [2024-07-15 22:43:24.249846] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.300 [2024-07-15 22:43:24.249852] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4288000b90 00:27:00.300 [2024-07-15 22:43:24.249867] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:00.300 qpair failed and we were unable to recover it. 
00:27:00.300 [2024-07-15 22:43:24.259823] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.300 [2024-07-15 22:43:24.259891] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.300 [2024-07-15 22:43:24.259907] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.300 [2024-07-15 22:43:24.259913] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.300 [2024-07-15 22:43:24.259919] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4288000b90 00:27:00.300 [2024-07-15 22:43:24.259934] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:00.300 qpair failed and we were unable to recover it. 00:27:00.558 [2024-07-15 22:43:24.269855] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.558 [2024-07-15 22:43:24.269922] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.558 [2024-07-15 22:43:24.269937] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.558 [2024-07-15 22:43:24.269944] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.558 [2024-07-15 22:43:24.269954] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4288000b90 00:27:00.558 [2024-07-15 22:43:24.269969] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:00.558 qpair failed and we were unable to recover it. 00:27:00.558 [2024-07-15 22:43:24.279891] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.558 [2024-07-15 22:43:24.279961] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.558 [2024-07-15 22:43:24.279976] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.558 [2024-07-15 22:43:24.279983] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.558 [2024-07-15 22:43:24.279989] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4288000b90 00:27:00.558 [2024-07-15 22:43:24.280003] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:00.558 qpair failed and we were unable to recover it. 00:27:00.558 Controller properly reset. 
00:27:04.741 Initializing NVMe Controllers 00:27:04.741 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:04.741 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:04.741 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:27:04.741 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:27:04.741 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:27:04.741 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:27:04.741 Initialization complete. Launching workers. 00:27:04.741 Starting thread on core 1 00:27:04.741 Starting thread on core 2 00:27:04.741 Starting thread on core 3 00:27:04.741 Starting thread on core 0 00:27:04.741 22:43:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:27:04.741 00:27:04.741 real 0m11.169s 00:27:04.742 user 0m32.758s 00:27:04.742 sys 0m4.694s 00:27:04.742 22:43:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:04.742 22:43:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:04.742 ************************************ 00:27:04.742 END TEST nvmf_target_disconnect_tc2 00:27:04.742 ************************************ 00:27:04.742 22:43:28 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1142 -- # return 0 00:27:04.742 22:43:28 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:27:04.742 22:43:28 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:27:04.742 22:43:28 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:27:04.742 22:43:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:04.742 22:43:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@117 -- # sync 00:27:04.742 22:43:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:04.742 22:43:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@120 -- # set +e 00:27:04.742 22:43:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:04.742 22:43:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:04.742 rmmod nvme_tcp 00:27:04.742 rmmod nvme_fabrics 00:27:04.742 rmmod nvme_keyring 00:27:04.742 22:43:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:04.742 22:43:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set -e 00:27:04.742 22:43:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@125 -- # return 0 00:27:04.742 22:43:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@489 -- # '[' -n 161198 ']' 00:27:04.742 22:43:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@490 -- # killprocess 161198 00:27:04.742 22:43:28 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@948 -- # '[' -z 161198 ']' 00:27:04.742 22:43:28 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@952 -- # kill -0 161198 00:27:04.742 22:43:28 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@953 -- # uname 00:27:04.742 22:43:28 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:04.742 22:43:28 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # ps 
--no-headers -o comm= 161198 00:27:04.742 22:43:28 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # process_name=reactor_4 00:27:04.742 22:43:28 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # '[' reactor_4 = sudo ']' 00:27:04.742 22:43:28 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@966 -- # echo 'killing process with pid 161198' 00:27:04.742 killing process with pid 161198 00:27:04.742 22:43:28 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@967 -- # kill 161198 00:27:04.742 22:43:28 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@972 -- # wait 161198 00:27:05.000 22:43:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:05.000 22:43:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:05.000 22:43:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:05.000 22:43:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:05.000 22:43:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:05.000 22:43:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:05.000 22:43:28 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:05.000 22:43:28 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:07.534 22:43:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:07.534 00:27:07.534 real 0m19.576s 00:27:07.534 user 0m59.364s 00:27:07.534 sys 0m9.514s 00:27:07.534 22:43:30 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:07.534 22:43:30 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:27:07.534 ************************************ 00:27:07.534 END TEST nvmf_target_disconnect 00:27:07.534 ************************************ 00:27:07.534 22:43:30 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:27:07.534 22:43:30 nvmf_tcp -- nvmf/nvmf.sh@126 -- # timing_exit host 00:27:07.534 22:43:30 nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:07.534 22:43:30 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:07.534 22:43:30 nvmf_tcp -- nvmf/nvmf.sh@128 -- # trap - SIGINT SIGTERM EXIT 00:27:07.534 00:27:07.534 real 20m50.810s 00:27:07.534 user 45m10.716s 00:27:07.534 sys 6m18.860s 00:27:07.534 22:43:30 nvmf_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:07.534 22:43:30 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:07.534 ************************************ 00:27:07.534 END TEST nvmf_tcp 00:27:07.534 ************************************ 00:27:07.534 22:43:31 -- common/autotest_common.sh@1142 -- # return 0 00:27:07.534 22:43:31 -- spdk/autotest.sh@288 -- # [[ 0 -eq 0 ]] 00:27:07.534 22:43:31 -- spdk/autotest.sh@289 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:27:07.534 22:43:31 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:27:07.534 22:43:31 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:07.534 22:43:31 -- common/autotest_common.sh@10 -- # set +x 00:27:07.534 ************************************ 00:27:07.534 START TEST spdkcli_nvmf_tcp 00:27:07.534 ************************************ 00:27:07.534 22:43:31 spdkcli_nvmf_tcp 
-- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:27:07.534 * Looking for test storage... 00:27:07.534 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:27:07.534 22:43:31 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:27:07.534 22:43:31 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:27:07.534 22:43:31 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:27:07.534 22:43:31 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:07.534 22:43:31 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:27:07.534 22:43:31 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:07.534 22:43:31 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:07.534 22:43:31 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:07.534 22:43:31 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:07.534 22:43:31 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:07.534 22:43:31 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:07.534 22:43:31 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:07.534 22:43:31 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:07.534 22:43:31 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:07.534 22:43:31 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:07.534 22:43:31 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:27:07.534 22:43:31 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:27:07.534 22:43:31 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:07.534 22:43:31 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:07.534 22:43:31 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:07.534 22:43:31 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:07.534 22:43:31 spdkcli_nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:07.534 22:43:31 spdkcli_nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:07.534 22:43:31 spdkcli_nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:07.534 22:43:31 spdkcli_nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:07.534 22:43:31 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:07.534 22:43:31 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:07.534 22:43:31 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:07.534 22:43:31 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:27:07.534 22:43:31 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:07.534 22:43:31 spdkcli_nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:27:07.534 22:43:31 spdkcli_nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:07.534 22:43:31 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:07.534 22:43:31 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:07.534 22:43:31 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:07.534 22:43:31 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:07.534 22:43:31 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:07.535 22:43:31 spdkcli_nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:07.535 22:43:31 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:07.535 22:43:31 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:27:07.535 22:43:31 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:27:07.535 22:43:31 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:27:07.535 22:43:31 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:27:07.535 22:43:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:07.535 22:43:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:07.535 22:43:31 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:27:07.535 22:43:31 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=162730 00:27:07.535 22:43:31 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 162730 00:27:07.535 22:43:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@829 -- # '[' -z 162730 ']' 00:27:07.535 22:43:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:07.535 22:43:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:07.535 22:43:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:27:07.535 22:43:31 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:27:07.535 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:07.535 22:43:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:07.535 22:43:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:07.535 [2024-07-15 22:43:31.170253] Starting SPDK v24.09-pre git sha1 f8598a71f / DPDK 24.03.0 initialization... 00:27:07.535 [2024-07-15 22:43:31.170304] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid162730 ] 00:27:07.535 EAL: No free 2048 kB hugepages reported on node 1 00:27:07.535 [2024-07-15 22:43:31.223821] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:27:07.535 [2024-07-15 22:43:31.304639] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:07.535 [2024-07-15 22:43:31.304643] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:08.102 22:43:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:08.102 22:43:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@862 -- # return 0 00:27:08.102 22:43:31 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:27:08.102 22:43:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:08.102 22:43:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:08.102 22:43:32 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:27:08.102 22:43:32 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:27:08.102 22:43:32 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:27:08.102 22:43:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:08.102 22:43:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:08.102 22:43:32 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:27:08.102 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:27:08.102 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:27:08.102 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:27:08.102 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:27:08.102 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:27:08.102 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:27:08.102 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:27:08.102 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:27:08.102 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:27:08.102 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:27:08.102 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:27:08.102 
'\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:27:08.102 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:27:08.102 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:27:08.102 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:27:08.102 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:27:08.102 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:27:08.102 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:27:08.102 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:27:08.102 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:27:08.102 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:27:08.102 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:27:08.102 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:27:08.102 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:27:08.102 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:27:08.102 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:27:08.102 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:27:08.102 ' 00:27:10.636 [2024-07-15 22:43:34.383639] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:12.013 [2024-07-15 22:43:35.559579] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:27:13.918 [2024-07-15 22:43:37.722258] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:27:15.823 [2024-07-15 22:43:39.580037] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:27:17.194 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:27:17.194 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:27:17.194 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:27:17.194 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:27:17.194 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:27:17.194 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:27:17.194 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:27:17.195 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:27:17.195 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:27:17.195 Executing command: 
['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:27:17.195 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:27:17.195 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:27:17.195 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:27:17.195 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:27:17.195 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:27:17.195 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:27:17.195 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:27:17.195 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:27:17.195 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:27:17.195 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:27:17.195 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:27:17.195 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:27:17.195 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:27:17.195 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:27:17.195 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:27:17.195 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:27:17.195 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:27:17.195 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:27:17.195 22:43:41 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:27:17.195 22:43:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:17.195 22:43:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:17.195 22:43:41 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:27:17.195 22:43:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:17.195 22:43:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:17.195 22:43:41 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:27:17.195 22:43:41 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:27:17.760 22:43:41 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:27:17.760 22:43:41 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:27:17.760 22:43:41 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:27:17.760 22:43:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:17.760 22:43:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:17.760 22:43:41 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:27:17.760 22:43:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:17.760 22:43:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:17.760 22:43:41 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:27:17.760 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:27:17.760 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:27:17.760 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:27:17.760 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:27:17.760 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:27:17.760 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:27:17.760 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:27:17.760 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:27:17.760 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:27:17.760 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:27:17.760 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:27:17.760 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:27:17.760 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:27:17.760 ' 00:27:23.059 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:27:23.059 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:27:23.059 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:27:23.059 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:27:23.059 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:27:23.059 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:27:23.059 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:27:23.059 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:27:23.059 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:27:23.059 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 
00:27:23.059 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:27:23.059 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:27:23.059 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:27:23.059 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:27:23.059 22:43:46 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:27:23.059 22:43:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:23.059 22:43:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:23.059 22:43:46 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 162730 00:27:23.059 22:43:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@948 -- # '[' -z 162730 ']' 00:27:23.059 22:43:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # kill -0 162730 00:27:23.059 22:43:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@953 -- # uname 00:27:23.059 22:43:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:23.059 22:43:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 162730 00:27:23.059 22:43:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:27:23.059 22:43:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:27:23.059 22:43:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@966 -- # echo 'killing process with pid 162730' 00:27:23.059 killing process with pid 162730 00:27:23.059 22:43:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@967 -- # kill 162730 00:27:23.059 22:43:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@972 -- # wait 162730 00:27:23.059 22:43:46 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:27:23.059 22:43:46 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:27:23.059 22:43:46 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 162730 ']' 00:27:23.059 22:43:46 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 162730 00:27:23.059 22:43:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@948 -- # '[' -z 162730 ']' 00:27:23.059 22:43:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # kill -0 162730 00:27:23.059 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (162730) - No such process 00:27:23.059 22:43:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@975 -- # echo 'Process with pid 162730 is not found' 00:27:23.059 Process with pid 162730 is not found 00:27:23.059 22:43:46 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:27:23.059 22:43:46 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:27:23.059 22:43:46 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:27:23.059 00:27:23.059 real 0m15.781s 00:27:23.059 user 0m32.762s 00:27:23.059 sys 0m0.689s 00:27:23.059 22:43:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:23.059 22:43:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:23.059 ************************************ 00:27:23.059 END TEST spdkcli_nvmf_tcp 00:27:23.060 ************************************ 00:27:23.060 22:43:46 -- common/autotest_common.sh@1142 -- # return 0 00:27:23.060 22:43:46 -- spdk/autotest.sh@290 -- # run_test nvmf_identify_passthru 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:27:23.060 22:43:46 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:27:23.060 22:43:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:23.060 22:43:46 -- common/autotest_common.sh@10 -- # set +x 00:27:23.060 ************************************ 00:27:23.060 START TEST nvmf_identify_passthru 00:27:23.060 ************************************ 00:27:23.060 22:43:46 nvmf_identify_passthru -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:27:23.060 * Looking for test storage... 00:27:23.060 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:27:23.060 22:43:46 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:23.060 22:43:46 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:27:23.060 22:43:46 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:23.060 22:43:46 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:23.060 22:43:46 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:23.060 22:43:46 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:23.060 22:43:46 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:23.060 22:43:46 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:23.060 22:43:46 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:23.060 22:43:46 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:23.060 22:43:46 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:23.060 22:43:46 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:23.060 22:43:46 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:27:23.060 22:43:46 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:27:23.060 22:43:46 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:23.060 22:43:46 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:23.060 22:43:46 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:23.060 22:43:46 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:23.060 22:43:46 nvmf_identify_passthru -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:23.060 22:43:46 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:23.060 22:43:46 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:23.060 22:43:46 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:23.060 22:43:46 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:23.060 22:43:46 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:23.060 22:43:46 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:23.060 22:43:46 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:27:23.060 22:43:46 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:23.060 22:43:46 nvmf_identify_passthru -- nvmf/common.sh@47 -- # : 0 00:27:23.060 22:43:46 nvmf_identify_passthru -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:23.060 22:43:46 nvmf_identify_passthru -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:23.060 22:43:46 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:23.060 22:43:46 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:23.060 22:43:46 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:23.060 22:43:46 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:23.060 22:43:46 nvmf_identify_passthru -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:23.060 22:43:46 nvmf_identify_passthru -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:23.060 22:43:46 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:23.060 22:43:46 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:23.060 22:43:46 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:23.060 22:43:46 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:23.060 22:43:46 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:23.060 22:43:46 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:23.060 22:43:46 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:23.060 22:43:46 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:27:23.060 22:43:46 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:23.060 22:43:46 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:27:23.060 22:43:46 nvmf_identify_passthru -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:23.060 22:43:46 nvmf_identify_passthru -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:23.060 22:43:46 nvmf_identify_passthru -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:23.060 22:43:46 nvmf_identify_passthru -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:23.060 22:43:46 nvmf_identify_passthru -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:23.060 22:43:46 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:23.060 22:43:46 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:27:23.060 22:43:46 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:23.060 22:43:46 nvmf_identify_passthru -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:23.060 22:43:46 nvmf_identify_passthru -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:23.060 22:43:46 nvmf_identify_passthru -- nvmf/common.sh@285 -- # xtrace_disable 00:27:23.060 22:43:46 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:28.331 22:43:51 nvmf_identify_passthru -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:28.331 22:43:51 
nvmf_identify_passthru -- nvmf/common.sh@291 -- # pci_devs=() 00:27:28.331 22:43:51 nvmf_identify_passthru -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:28.331 22:43:51 nvmf_identify_passthru -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:28.331 22:43:51 nvmf_identify_passthru -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:28.331 22:43:51 nvmf_identify_passthru -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:28.331 22:43:51 nvmf_identify_passthru -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:28.331 22:43:51 nvmf_identify_passthru -- nvmf/common.sh@295 -- # net_devs=() 00:27:28.331 22:43:51 nvmf_identify_passthru -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:28.331 22:43:51 nvmf_identify_passthru -- nvmf/common.sh@296 -- # e810=() 00:27:28.331 22:43:51 nvmf_identify_passthru -- nvmf/common.sh@296 -- # local -ga e810 00:27:28.332 22:43:51 nvmf_identify_passthru -- nvmf/common.sh@297 -- # x722=() 00:27:28.332 22:43:51 nvmf_identify_passthru -- nvmf/common.sh@297 -- # local -ga x722 00:27:28.332 22:43:51 nvmf_identify_passthru -- nvmf/common.sh@298 -- # mlx=() 00:27:28.332 22:43:51 nvmf_identify_passthru -- nvmf/common.sh@298 -- # local -ga mlx 00:27:28.332 22:43:51 nvmf_identify_passthru -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:28.332 22:43:51 nvmf_identify_passthru -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:28.332 22:43:51 nvmf_identify_passthru -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:28.332 22:43:51 nvmf_identify_passthru -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:28.332 22:43:51 nvmf_identify_passthru -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:28.332 22:43:51 nvmf_identify_passthru -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:28.332 22:43:51 nvmf_identify_passthru -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:28.332 22:43:51 nvmf_identify_passthru -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:28.332 22:43:51 nvmf_identify_passthru -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:28.332 22:43:51 nvmf_identify_passthru -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:28.332 22:43:51 nvmf_identify_passthru -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:28.332 22:43:51 nvmf_identify_passthru -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:28.332 22:43:51 nvmf_identify_passthru -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:28.332 22:43:51 nvmf_identify_passthru -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:28.332 22:43:51 nvmf_identify_passthru -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:28.332 22:43:51 nvmf_identify_passthru -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:28.332 22:43:51 nvmf_identify_passthru -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:28.332 22:43:51 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:28.332 22:43:51 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:27:28.332 Found 0000:86:00.0 (0x8086 - 0x159b) 00:27:28.332 22:43:51 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:28.332 22:43:51 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:28.332 22:43:51 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == 
\0\x\1\0\1\7 ]] 00:27:28.332 22:43:51 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:28.332 22:43:51 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:28.332 22:43:51 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:28.332 22:43:51 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:27:28.332 Found 0000:86:00.1 (0x8086 - 0x159b) 00:27:28.332 22:43:51 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:28.332 22:43:51 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:28.332 22:43:51 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:28.332 22:43:51 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:28.332 22:43:51 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:28.332 22:43:51 nvmf_identify_passthru -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:28.332 22:43:51 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:28.332 22:43:51 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:28.332 22:43:51 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:28.332 22:43:51 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:28.332 22:43:51 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:28.332 22:43:51 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:28.332 22:43:51 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:28.332 22:43:51 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:28.332 22:43:51 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:28.332 22:43:51 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:27:28.332 Found net devices under 0000:86:00.0: cvl_0_0 00:27:28.332 22:43:51 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:28.332 22:43:51 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:28.332 22:43:51 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:28.332 22:43:51 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:28.332 22:43:51 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:28.332 22:43:51 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:28.332 22:43:51 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:28.332 22:43:51 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:28.332 22:43:51 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:27:28.332 Found net devices under 0000:86:00.1: cvl_0_1 00:27:28.332 22:43:51 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:28.332 22:43:51 nvmf_identify_passthru -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:28.332 22:43:51 nvmf_identify_passthru -- nvmf/common.sh@414 -- # is_hw=yes 00:27:28.332 22:43:51 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:28.332 22:43:51 nvmf_identify_passthru -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 
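The nvmf_tcp_init sequence traced next builds the test topology: one port of the dual-port E810 (cvl_0_0) is moved into a private network namespace to host the target, while its peer (cvl_0_1) stays in the root namespace as the initiator; the successful cross-namespace pings suggest the two ports are cabled back-to-back on this rig. Condensed into plain commands (interface names, addresses, and the port are exactly as traced):

    ip netns add cvl_0_0_ns_spdk                     # target gets its own netns
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1              # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # NVMe/TCP port
    ping -c 1 10.0.0.2                               # root ns -> target ns
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 # target ns -> root ns

From here on every target-side command is wrapped in 'ip netns exec cvl_0_0_ns_spdk', so the target only ever sees the namespaced port.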
00:27:28.332 22:43:51 nvmf_identify_passthru -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:28.332 22:43:51 nvmf_identify_passthru -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:28.332 22:43:51 nvmf_identify_passthru -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:28.332 22:43:51 nvmf_identify_passthru -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:28.332 22:43:51 nvmf_identify_passthru -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:28.332 22:43:51 nvmf_identify_passthru -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:28.332 22:43:51 nvmf_identify_passthru -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:28.332 22:43:51 nvmf_identify_passthru -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:28.332 22:43:51 nvmf_identify_passthru -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:28.332 22:43:51 nvmf_identify_passthru -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:28.332 22:43:51 nvmf_identify_passthru -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:28.332 22:43:51 nvmf_identify_passthru -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:28.332 22:43:51 nvmf_identify_passthru -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:28.332 22:43:51 nvmf_identify_passthru -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:28.332 22:43:51 nvmf_identify_passthru -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:28.332 22:43:51 nvmf_identify_passthru -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:28.332 22:43:51 nvmf_identify_passthru -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:28.332 22:43:51 nvmf_identify_passthru -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:28.332 22:43:51 nvmf_identify_passthru -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:28.332 22:43:51 nvmf_identify_passthru -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:28.332 22:43:51 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:28.332 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:28.332 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.187 ms 00:27:28.332 00:27:28.332 --- 10.0.0.2 ping statistics --- 00:27:28.332 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:28.332 rtt min/avg/max/mdev = 0.187/0.187/0.187/0.000 ms 00:27:28.332 22:43:51 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:28.332 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:28.332 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.171 ms 00:27:28.332 00:27:28.332 --- 10.0.0.1 ping statistics --- 00:27:28.332 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:28.332 rtt min/avg/max/mdev = 0.171/0.171/0.171/0.000 ms 00:27:28.332 22:43:51 nvmf_identify_passthru -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:28.332 22:43:51 nvmf_identify_passthru -- nvmf/common.sh@422 -- # return 0 00:27:28.332 22:43:51 nvmf_identify_passthru -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:28.332 22:43:51 nvmf_identify_passthru -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:28.332 22:43:51 nvmf_identify_passthru -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:28.332 22:43:51 nvmf_identify_passthru -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:28.332 22:43:51 nvmf_identify_passthru -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:28.332 22:43:51 nvmf_identify_passthru -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:28.332 22:43:51 nvmf_identify_passthru -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:28.332 22:43:51 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:27:28.332 22:43:51 nvmf_identify_passthru -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:28.332 22:43:51 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:28.332 22:43:51 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:27:28.332 22:43:51 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # bdfs=() 00:27:28.332 22:43:51 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # local bdfs 00:27:28.332 22:43:51 nvmf_identify_passthru -- common/autotest_common.sh@1525 -- # bdfs=($(get_nvme_bdfs)) 00:27:28.332 22:43:51 nvmf_identify_passthru -- common/autotest_common.sh@1525 -- # get_nvme_bdfs 00:27:28.332 22:43:51 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # bdfs=() 00:27:28.332 22:43:51 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # local bdfs 00:27:28.332 22:43:51 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:27:28.332 22:43:51 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:27:28.332 22:43:51 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:27:28.332 22:43:52 nvmf_identify_passthru -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:27:28.332 22:43:52 nvmf_identify_passthru -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:5e:00.0 00:27:28.332 22:43:52 nvmf_identify_passthru -- common/autotest_common.sh@1527 -- # echo 0000:5e:00.0 00:27:28.332 22:43:52 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:5e:00.0 00:27:28.332 22:43:52 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:5e:00.0 ']' 00:27:28.332 22:43:52 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:5e:00.0' -i 0 00:27:28.332 22:43:52 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:27:28.332 22:43:52 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:27:28.332 EAL: No free 2048 kB hugepages reported on node 1 00:27:32.521 
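The identify phase above first resolves the PCI address of the first local NVMe controller, then reads its serial and model numbers directly over PCIe so they can later be compared against what the fabric-attached controller reports. A condensed sketch of the same pipeline (paths relative to the SPDK checkout; 'head -n1' stands in for the harness's first-element selection and is not in the trace):

    bdf=$(./scripts/gen_nvme.sh | jq -r '.config[].params.traddr' | head -n1)   # 0000:5e:00.0 on this rig
    nvme_serial=$(./build/bin/spdk_nvme_identify -r "trtype:PCIe traddr:$bdf" -i 0 \
                    | grep 'Serial Number:' | awk '{print $3}')
    nvme_model=$(./build/bin/spdk_nvme_identify -r "trtype:PCIe traddr:$bdf" -i 0 \
                    | grep 'Model Number:' | awk '{print $3}')

The repeated 'EAL: No free 2048 kB hugepages reported on node 1' notices appear harmless here: the serial and model are read successfully right after each one.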
22:43:56 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # nvme_serial_number=BTLJ72430F0E1P0FGN 00:27:32.521 22:43:56 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:5e:00.0' -i 0 00:27:32.521 22:43:56 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:27:32.521 22:43:56 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:27:32.521 EAL: No free 2048 kB hugepages reported on node 1 00:27:36.707 22:44:00 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:27:36.707 22:44:00 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:27:36.707 22:44:00 nvmf_identify_passthru -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:36.707 22:44:00 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:36.707 22:44:00 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:27:36.707 22:44:00 nvmf_identify_passthru -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:36.707 22:44:00 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:36.707 22:44:00 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:27:36.707 22:44:00 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=169546 00:27:36.707 22:44:00 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:27:36.707 22:44:00 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 169546 00:27:36.707 22:44:00 nvmf_identify_passthru -- common/autotest_common.sh@829 -- # '[' -z 169546 ']' 00:27:36.707 22:44:00 nvmf_identify_passthru -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:36.707 22:44:00 nvmf_identify_passthru -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:36.707 22:44:00 nvmf_identify_passthru -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:36.707 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:36.707 22:44:00 nvmf_identify_passthru -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:36.708 22:44:00 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:36.708 [2024-07-15 22:44:00.368681] Starting SPDK v24.09-pre git sha1 f8598a71f / DPDK 24.03.0 initialization... 00:27:36.708 [2024-07-15 22:44:00.368730] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:36.708 EAL: No free 2048 kB hugepages reported on node 1 00:27:36.708 [2024-07-15 22:44:00.425318] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:36.708 [2024-07-15 22:44:00.505954] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:36.708 [2024-07-15 22:44:00.505989] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
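Because nvmf_tgt was launched with --wait-for-rpc, the trace below must enable the custom identify handler before framework initialization; only then are the transport, the PCIe-attached bdev, and the subsystem stitched together. The rpc_cmd shorthand in the log drives SPDK's scripts/rpc.py, so the same bring-up can be written out as (all flags verbatim from the trace):

    ./scripts/rpc.py nvmf_set_config --passthru-identify-ctrlr   # must precede framework_start_init
    ./scripts/rpc.py framework_start_init
    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    ./scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:5e:00.0
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

-m 1 caps the subsystem at a single namespace, which matches the max_namespaces value in the nvmf_get_subsystems dump further down.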
00:27:36.708 [2024-07-15 22:44:00.505997] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:36.708 [2024-07-15 22:44:00.506003] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:36.708 [2024-07-15 22:44:00.506008] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:36.708 [2024-07-15 22:44:00.506050] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:36.708 [2024-07-15 22:44:00.506145] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:27:36.708 [2024-07-15 22:44:00.506359] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:27:36.708 [2024-07-15 22:44:00.506363] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:37.274 22:44:01 nvmf_identify_passthru -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:37.274 22:44:01 nvmf_identify_passthru -- common/autotest_common.sh@862 -- # return 0 00:27:37.274 22:44:01 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:27:37.274 22:44:01 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:37.274 22:44:01 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:37.274 INFO: Log level set to 20 00:27:37.274 INFO: Requests: 00:27:37.274 { 00:27:37.274 "jsonrpc": "2.0", 00:27:37.274 "method": "nvmf_set_config", 00:27:37.274 "id": 1, 00:27:37.274 "params": { 00:27:37.274 "admin_cmd_passthru": { 00:27:37.274 "identify_ctrlr": true 00:27:37.274 } 00:27:37.274 } 00:27:37.274 } 00:27:37.274 00:27:37.274 INFO: response: 00:27:37.274 { 00:27:37.274 "jsonrpc": "2.0", 00:27:37.274 "id": 1, 00:27:37.274 "result": true 00:27:37.274 } 00:27:37.274 00:27:37.274 22:44:01 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:37.274 22:44:01 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:27:37.274 22:44:01 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:37.274 22:44:01 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:37.274 INFO: Setting log level to 20 00:27:37.274 INFO: Setting log level to 20 00:27:37.274 INFO: Log level set to 20 00:27:37.274 INFO: Log level set to 20 00:27:37.274 INFO: Requests: 00:27:37.274 { 00:27:37.274 "jsonrpc": "2.0", 00:27:37.274 "method": "framework_start_init", 00:27:37.274 "id": 1 00:27:37.274 } 00:27:37.274 00:27:37.274 INFO: Requests: 00:27:37.274 { 00:27:37.274 "jsonrpc": "2.0", 00:27:37.274 "method": "framework_start_init", 00:27:37.274 "id": 1 00:27:37.274 } 00:27:37.274 00:27:37.532 [2024-07-15 22:44:01.269127] nvmf_tgt.c: 451:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:27:37.532 INFO: response: 00:27:37.532 { 00:27:37.532 "jsonrpc": "2.0", 00:27:37.532 "id": 1, 00:27:37.532 "result": true 00:27:37.532 } 00:27:37.533 00:27:37.533 INFO: response: 00:27:37.533 { 00:27:37.533 "jsonrpc": "2.0", 00:27:37.533 "id": 1, 00:27:37.533 "result": true 00:27:37.533 } 00:27:37.533 00:27:37.533 22:44:01 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:37.533 22:44:01 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:37.533 22:44:01 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:37.533 22:44:01 nvmf_identify_passthru -- 
common/autotest_common.sh@10 -- # set +x 00:27:37.533 INFO: Setting log level to 40 00:27:37.533 INFO: Setting log level to 40 00:27:37.533 INFO: Setting log level to 40 00:27:37.533 [2024-07-15 22:44:01.282638] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:37.533 22:44:01 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:37.533 22:44:01 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:27:37.533 22:44:01 nvmf_identify_passthru -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:37.533 22:44:01 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:37.533 22:44:01 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:5e:00.0 00:27:37.533 22:44:01 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:37.533 22:44:01 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:40.821 Nvme0n1 00:27:40.821 22:44:04 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:40.821 22:44:04 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:27:40.821 22:44:04 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:40.821 22:44:04 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:40.821 22:44:04 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:40.821 22:44:04 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:27:40.821 22:44:04 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:40.821 22:44:04 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:40.821 22:44:04 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:40.821 22:44:04 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:40.821 22:44:04 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:40.821 22:44:04 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:40.821 [2024-07-15 22:44:04.176512] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:40.821 22:44:04 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:40.821 22:44:04 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:27:40.821 22:44:04 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:40.821 22:44:04 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:40.821 [ 00:27:40.821 { 00:27:40.821 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:27:40.821 "subtype": "Discovery", 00:27:40.821 "listen_addresses": [], 00:27:40.821 "allow_any_host": true, 00:27:40.821 "hosts": [] 00:27:40.821 }, 00:27:40.821 { 00:27:40.821 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:27:40.821 "subtype": "NVMe", 00:27:40.821 "listen_addresses": [ 00:27:40.821 { 00:27:40.821 "trtype": "TCP", 00:27:40.821 "adrfam": "IPv4", 00:27:40.821 "traddr": "10.0.0.2", 00:27:40.821 "trsvcid": "4420" 00:27:40.821 } 00:27:40.821 ], 00:27:40.821 "allow_any_host": true, 00:27:40.821 "hosts": [], 00:27:40.821 "serial_number": 
"SPDK00000000000001", 00:27:40.821 "model_number": "SPDK bdev Controller", 00:27:40.821 "max_namespaces": 1, 00:27:40.821 "min_cntlid": 1, 00:27:40.821 "max_cntlid": 65519, 00:27:40.821 "namespaces": [ 00:27:40.821 { 00:27:40.821 "nsid": 1, 00:27:40.821 "bdev_name": "Nvme0n1", 00:27:40.821 "name": "Nvme0n1", 00:27:40.821 "nguid": "5E7264645CAD41C5952B4299023D57F6", 00:27:40.821 "uuid": "5e726464-5cad-41c5-952b-4299023d57f6" 00:27:40.821 } 00:27:40.821 ] 00:27:40.821 } 00:27:40.821 ] 00:27:40.821 22:44:04 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:40.821 22:44:04 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:27:40.821 22:44:04 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:27:40.821 22:44:04 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:27:40.821 EAL: No free 2048 kB hugepages reported on node 1 00:27:40.821 22:44:04 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=BTLJ72430F0E1P0FGN 00:27:40.821 22:44:04 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:27:40.821 22:44:04 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:27:40.821 22:44:04 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:27:40.821 EAL: No free 2048 kB hugepages reported on node 1 00:27:40.821 22:44:04 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:27:40.821 22:44:04 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' BTLJ72430F0E1P0FGN '!=' BTLJ72430F0E1P0FGN ']' 00:27:40.821 22:44:04 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:27:40.822 22:44:04 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:40.822 22:44:04 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:40.822 22:44:04 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:40.822 22:44:04 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:40.822 22:44:04 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:27:40.822 22:44:04 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:27:40.822 22:44:04 nvmf_identify_passthru -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:40.822 22:44:04 nvmf_identify_passthru -- nvmf/common.sh@117 -- # sync 00:27:40.822 22:44:04 nvmf_identify_passthru -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:40.822 22:44:04 nvmf_identify_passthru -- nvmf/common.sh@120 -- # set +e 00:27:40.822 22:44:04 nvmf_identify_passthru -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:40.822 22:44:04 nvmf_identify_passthru -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:40.822 rmmod nvme_tcp 00:27:40.822 rmmod nvme_fabrics 00:27:40.822 rmmod nvme_keyring 00:27:40.822 22:44:04 nvmf_identify_passthru -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:40.822 22:44:04 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set -e 00:27:40.822 22:44:04 
nvmf_identify_passthru -- nvmf/common.sh@125 -- # return 0 00:27:40.822 22:44:04 nvmf_identify_passthru -- nvmf/common.sh@489 -- # '[' -n 169546 ']' 00:27:40.822 22:44:04 nvmf_identify_passthru -- nvmf/common.sh@490 -- # killprocess 169546 00:27:40.822 22:44:04 nvmf_identify_passthru -- common/autotest_common.sh@948 -- # '[' -z 169546 ']' 00:27:40.822 22:44:04 nvmf_identify_passthru -- common/autotest_common.sh@952 -- # kill -0 169546 00:27:40.822 22:44:04 nvmf_identify_passthru -- common/autotest_common.sh@953 -- # uname 00:27:40.822 22:44:04 nvmf_identify_passthru -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:40.822 22:44:04 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 169546 00:27:40.822 22:44:04 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:27:40.822 22:44:04 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:27:40.822 22:44:04 nvmf_identify_passthru -- common/autotest_common.sh@966 -- # echo 'killing process with pid 169546' 00:27:40.822 killing process with pid 169546 00:27:40.822 22:44:04 nvmf_identify_passthru -- common/autotest_common.sh@967 -- # kill 169546 00:27:40.822 22:44:04 nvmf_identify_passthru -- common/autotest_common.sh@972 -- # wait 169546 00:27:42.195 22:44:06 nvmf_identify_passthru -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:42.195 22:44:06 nvmf_identify_passthru -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:42.195 22:44:06 nvmf_identify_passthru -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:42.195 22:44:06 nvmf_identify_passthru -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:42.195 22:44:06 nvmf_identify_passthru -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:42.195 22:44:06 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:42.196 22:44:06 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:27:42.196 22:44:06 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:44.729 22:44:08 nvmf_identify_passthru -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:44.729 00:27:44.729 real 0m21.325s 00:27:44.729 user 0m29.718s 00:27:44.729 sys 0m4.568s 00:27:44.729 22:44:08 nvmf_identify_passthru -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:44.729 22:44:08 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:44.729 ************************************ 00:27:44.729 END TEST nvmf_identify_passthru 00:27:44.729 ************************************ 00:27:44.729 22:44:08 -- common/autotest_common.sh@1142 -- # return 0 00:27:44.729 22:44:08 -- spdk/autotest.sh@292 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:27:44.729 22:44:08 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:27:44.729 22:44:08 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:44.729 22:44:08 -- common/autotest_common.sh@10 -- # set +x 00:27:44.729 ************************************ 00:27:44.729 START TEST nvmf_dif 00:27:44.729 ************************************ 00:27:44.729 22:44:08 nvmf_dif -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:27:44.729 * Looking for test storage... 
00:27:44.729 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:27:44.729 22:44:08 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:44.729 22:44:08 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:27:44.729 22:44:08 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:44.729 22:44:08 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:44.729 22:44:08 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:44.729 22:44:08 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:44.729 22:44:08 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:44.729 22:44:08 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:44.729 22:44:08 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:44.729 22:44:08 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:44.729 22:44:08 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:44.729 22:44:08 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:44.729 22:44:08 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:27:44.729 22:44:08 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:27:44.729 22:44:08 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:44.729 22:44:08 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:44.729 22:44:08 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:44.729 22:44:08 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:44.729 22:44:08 nvmf_dif -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:44.729 22:44:08 nvmf_dif -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:44.729 22:44:08 nvmf_dif -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:44.729 22:44:08 nvmf_dif -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:44.729 22:44:08 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:44.729 22:44:08 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:44.729 22:44:08 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:44.729 22:44:08 nvmf_dif -- paths/export.sh@5 -- # 
export PATH 00:27:44.729 22:44:08 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:44.729 22:44:08 nvmf_dif -- nvmf/common.sh@47 -- # : 0 00:27:44.729 22:44:08 nvmf_dif -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:44.729 22:44:08 nvmf_dif -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:44.729 22:44:08 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:44.729 22:44:08 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:44.729 22:44:08 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:44.729 22:44:08 nvmf_dif -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:44.729 22:44:08 nvmf_dif -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:44.729 22:44:08 nvmf_dif -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:44.729 22:44:08 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:27:44.729 22:44:08 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:27:44.729 22:44:08 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:27:44.729 22:44:08 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:27:44.729 22:44:08 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:27:44.729 22:44:08 nvmf_dif -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:44.729 22:44:08 nvmf_dif -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:44.729 22:44:08 nvmf_dif -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:44.729 22:44:08 nvmf_dif -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:44.729 22:44:08 nvmf_dif -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:44.729 22:44:08 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:44.729 22:44:08 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:27:44.729 22:44:08 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:44.729 22:44:08 nvmf_dif -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:44.729 22:44:08 nvmf_dif -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:44.729 22:44:08 nvmf_dif -- nvmf/common.sh@285 -- # xtrace_disable 00:27:44.729 22:44:08 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:27:50.050 22:44:13 nvmf_dif -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:50.050 22:44:13 nvmf_dif -- nvmf/common.sh@291 -- # pci_devs=() 00:27:50.050 22:44:13 nvmf_dif -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:50.050 22:44:13 nvmf_dif -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:50.050 22:44:13 nvmf_dif -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:50.050 22:44:13 nvmf_dif -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:50.050 22:44:13 nvmf_dif -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:50.050 22:44:13 nvmf_dif -- nvmf/common.sh@295 -- # net_devs=() 00:27:50.050 22:44:13 nvmf_dif -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:50.050 22:44:13 nvmf_dif -- nvmf/common.sh@296 -- # e810=() 00:27:50.050 22:44:13 nvmf_dif -- nvmf/common.sh@296 -- # local -ga e810 00:27:50.050 22:44:13 nvmf_dif -- nvmf/common.sh@297 -- # x722=() 00:27:50.050 22:44:13 nvmf_dif -- nvmf/common.sh@297 -- # local -ga x722 00:27:50.050 22:44:13 nvmf_dif -- nvmf/common.sh@298 
-- # mlx=() 00:27:50.050 22:44:13 nvmf_dif -- nvmf/common.sh@298 -- # local -ga mlx 00:27:50.050 22:44:13 nvmf_dif -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:50.050 22:44:13 nvmf_dif -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:50.050 22:44:13 nvmf_dif -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:50.050 22:44:13 nvmf_dif -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:50.050 22:44:13 nvmf_dif -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:50.050 22:44:13 nvmf_dif -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:50.050 22:44:13 nvmf_dif -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:50.050 22:44:13 nvmf_dif -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:50.050 22:44:13 nvmf_dif -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:50.050 22:44:13 nvmf_dif -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:50.050 22:44:13 nvmf_dif -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:50.050 22:44:13 nvmf_dif -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:50.050 22:44:13 nvmf_dif -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:50.050 22:44:13 nvmf_dif -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:50.050 22:44:13 nvmf_dif -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:50.050 22:44:13 nvmf_dif -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:50.050 22:44:13 nvmf_dif -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:50.050 22:44:13 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:50.050 22:44:13 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:27:50.050 Found 0000:86:00.0 (0x8086 - 0x159b) 00:27:50.050 22:44:13 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:50.050 22:44:13 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:50.050 22:44:13 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:50.050 22:44:13 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:50.050 22:44:13 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:50.050 22:44:13 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:50.050 22:44:13 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:27:50.050 Found 0000:86:00.1 (0x8086 - 0x159b) 00:27:50.050 22:44:13 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:50.050 22:44:13 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:50.050 22:44:13 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:50.050 22:44:13 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:50.050 22:44:13 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:50.050 22:44:13 nvmf_dif -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:50.050 22:44:13 nvmf_dif -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:50.050 22:44:13 nvmf_dif -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:50.050 22:44:13 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:50.050 22:44:13 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:50.050 22:44:13 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:50.050 22:44:13 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 
00:27:50.050 22:44:13 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:50.050 22:44:13 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:50.050 22:44:13 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:50.050 22:44:13 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:27:50.050 Found net devices under 0000:86:00.0: cvl_0_0 00:27:50.050 22:44:13 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:50.050 22:44:13 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:50.050 22:44:13 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:50.050 22:44:13 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:50.050 22:44:13 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:50.050 22:44:13 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:50.050 22:44:13 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:50.050 22:44:13 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:50.050 22:44:13 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:27:50.050 Found net devices under 0000:86:00.1: cvl_0_1 00:27:50.050 22:44:13 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:50.050 22:44:13 nvmf_dif -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:50.050 22:44:13 nvmf_dif -- nvmf/common.sh@414 -- # is_hw=yes 00:27:50.050 22:44:13 nvmf_dif -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:50.050 22:44:13 nvmf_dif -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:50.050 22:44:13 nvmf_dif -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:50.050 22:44:13 nvmf_dif -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:50.050 22:44:13 nvmf_dif -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:50.050 22:44:13 nvmf_dif -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:50.050 22:44:13 nvmf_dif -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:50.051 22:44:13 nvmf_dif -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:50.051 22:44:13 nvmf_dif -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:50.051 22:44:13 nvmf_dif -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:50.051 22:44:13 nvmf_dif -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:50.051 22:44:13 nvmf_dif -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:50.051 22:44:13 nvmf_dif -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:50.051 22:44:13 nvmf_dif -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:50.051 22:44:13 nvmf_dif -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:50.051 22:44:13 nvmf_dif -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:50.051 22:44:13 nvmf_dif -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:50.051 22:44:13 nvmf_dif -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:50.051 22:44:13 nvmf_dif -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:50.051 22:44:13 nvmf_dif -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:50.051 22:44:13 nvmf_dif -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:50.051 22:44:13 nvmf_dif -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:50.051 22:44:13 
nvmf_dif -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:50.051 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:50.051 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.158 ms 00:27:50.051 00:27:50.051 --- 10.0.0.2 ping statistics --- 00:27:50.051 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:50.051 rtt min/avg/max/mdev = 0.158/0.158/0.158/0.000 ms 00:27:50.051 22:44:13 nvmf_dif -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:50.051 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:50.051 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.094 ms 00:27:50.051 00:27:50.051 --- 10.0.0.1 ping statistics --- 00:27:50.051 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:50.051 rtt min/avg/max/mdev = 0.094/0.094/0.094/0.000 ms 00:27:50.051 22:44:13 nvmf_dif -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:50.051 22:44:13 nvmf_dif -- nvmf/common.sh@422 -- # return 0 00:27:50.051 22:44:13 nvmf_dif -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:27:50.051 22:44:13 nvmf_dif -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:27:53.340 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:27:53.340 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:27:53.340 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:27:53.340 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:27:53.340 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:27:53.340 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:27:53.340 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:27:53.340 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:27:53.340 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:27:53.340 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:27:53.340 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:27:53.340 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:27:53.340 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:27:53.340 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:27:53.340 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:27:53.340 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:27:53.340 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:27:53.340 22:44:16 nvmf_dif -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:53.340 22:44:16 nvmf_dif -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:53.340 22:44:16 nvmf_dif -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:53.340 22:44:16 nvmf_dif -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:53.340 22:44:16 nvmf_dif -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:53.340 22:44:16 nvmf_dif -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:53.340 22:44:16 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:27:53.340 22:44:16 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:27:53.340 22:44:16 nvmf_dif -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:53.340 22:44:16 nvmf_dif -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:53.340 22:44:16 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:27:53.340 22:44:16 nvmf_dif -- nvmf/common.sh@481 -- # nvmfpid=175204 00:27:53.340 22:44:16 nvmf_dif -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:27:53.340 22:44:16 nvmf_dif -- nvmf/common.sh@482 -- # waitforlisten 175204 00:27:53.340 22:44:16 nvmf_dif -- common/autotest_common.sh@829 -- # '[' -z 175204 ']' 00:27:53.340 22:44:16 nvmf_dif -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:53.340 22:44:16 nvmf_dif -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:53.340 22:44:16 nvmf_dif -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:53.341 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:53.341 22:44:16 nvmf_dif -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:53.341 22:44:16 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:27:53.341 [2024-07-15 22:44:16.878810] Starting SPDK v24.09-pre git sha1 f8598a71f / DPDK 24.03.0 initialization... 00:27:53.341 [2024-07-15 22:44:16.878850] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:53.341 EAL: No free 2048 kB hugepages reported on node 1 00:27:53.341 [2024-07-15 22:44:16.936514] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:53.341 [2024-07-15 22:44:17.008315] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:53.341 [2024-07-15 22:44:17.008356] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:53.341 [2024-07-15 22:44:17.008362] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:53.341 [2024-07-15 22:44:17.008368] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:53.341 [2024-07-15 22:44:17.008373] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
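For the dif tests the target is configured differently from the passthru run: the trace that follows creates the TCP transport with --dif-insert-or-strip and backs the subsystem with a metadata-capable null bdev built from the NULL_* knobs sourced at the top of dif.sh (64 MB, 512-byte blocks, 16 bytes of metadata, DIF type 1). As plain RPCs (flags verbatim from the trace below):

    ./scripts/rpc.py nvmf_create_transport -t tcp -o --dif-insert-or-strip
    ./scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

With --dif-insert-or-strip the target inserts and strips the protection information itself, so the fio job driving it can issue plain 4 KiB reads.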
00:27:53.341 [2024-07-15 22:44:17.008392] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:53.909 22:44:17 nvmf_dif -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:53.909 22:44:17 nvmf_dif -- common/autotest_common.sh@862 -- # return 0 00:27:53.909 22:44:17 nvmf_dif -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:53.909 22:44:17 nvmf_dif -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:53.909 22:44:17 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:27:53.909 22:44:17 nvmf_dif -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:53.909 22:44:17 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:27:53.909 22:44:17 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:27:53.909 22:44:17 nvmf_dif -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:53.909 22:44:17 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:27:53.909 [2024-07-15 22:44:17.715773] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:53.909 22:44:17 nvmf_dif -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:53.909 22:44:17 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:27:53.909 22:44:17 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:27:53.909 22:44:17 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:53.909 22:44:17 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:27:53.909 ************************************ 00:27:53.909 START TEST fio_dif_1_default 00:27:53.909 ************************************ 00:27:53.909 22:44:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1123 -- # fio_dif_1 00:27:53.909 22:44:17 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:27:53.909 22:44:17 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:27:53.909 22:44:17 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:27:53.909 22:44:17 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:27:53.909 22:44:17 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:27:53.909 22:44:17 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:27:53.909 22:44:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:53.909 22:44:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:27:53.909 bdev_null0 00:27:53.909 22:44:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:53.909 22:44:17 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:27:53.909 22:44:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:53.909 22:44:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:27:53.909 22:44:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:53.909 22:44:17 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:27:53.909 22:44:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:53.909 22:44:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:27:53.909 22:44:17 nvmf_dif.fio_dif_1_default -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:53.909 22:44:17 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:27:53.909 22:44:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:53.909 22:44:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:27:53.909 [2024-07-15 22:44:17.780051] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:53.909 22:44:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:53.909 22:44:17 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:27:53.909 22:44:17 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:53.909 22:44:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:53.909 22:44:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:27:53.909 22:44:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:27:53.909 22:44:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local sanitizers 00:27:53.909 22:44:17 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:27:53.909 22:44:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:27:53.909 22:44:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # shift 00:27:53.909 22:44:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local asan_lib= 00:27:53.909 22:44:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:27:53.909 22:44:17 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:27:53.909 22:44:17 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:27:53.909 22:44:17 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # config=() 00:27:53.909 22:44:17 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:27:53.909 22:44:17 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # local subsystem config 00:27:53.909 22:44:17 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:27:53.909 22:44:17 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:53.909 22:44:17 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:53.909 { 00:27:53.909 "params": { 00:27:53.909 "name": "Nvme$subsystem", 00:27:53.909 "trtype": "$TEST_TRANSPORT", 00:27:53.909 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:53.909 "adrfam": "ipv4", 00:27:53.909 "trsvcid": "$NVMF_PORT", 00:27:53.909 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:53.909 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:53.909 "hdgst": ${hdgst:-false}, 00:27:53.909 "ddgst": ${ddgst:-false} 00:27:53.909 }, 00:27:53.909 "method": "bdev_nvme_attach_controller" 00:27:53.909 } 00:27:53.909 EOF 00:27:53.909 )") 00:27:53.909 22:44:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:27:53.910 22:44:17 nvmf_dif.fio_dif_1_default -- 
common/autotest_common.sh@1345 -- # grep libasan 00:27:53.910 22:44:17 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # cat 00:27:53.910 22:44:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:27:53.910 22:44:17 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:27:53.910 22:44:17 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:27:53.910 22:44:17 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@556 -- # jq . 00:27:53.910 22:44:17 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@557 -- # IFS=, 00:27:53.910 22:44:17 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:27:53.910 "params": { 00:27:53.910 "name": "Nvme0", 00:27:53.910 "trtype": "tcp", 00:27:53.910 "traddr": "10.0.0.2", 00:27:53.910 "adrfam": "ipv4", 00:27:53.910 "trsvcid": "4420", 00:27:53.910 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:53.910 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:53.910 "hdgst": false, 00:27:53.910 "ddgst": false 00:27:53.910 }, 00:27:53.910 "method": "bdev_nvme_attach_controller" 00:27:53.910 }' 00:27:53.910 22:44:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:27:53.910 22:44:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:27:53.910 22:44:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:27:53.910 22:44:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:27:53.910 22:44:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:27:53.910 22:44:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:27:53.910 22:44:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:27:53.910 22:44:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:27:53.910 22:44:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:27:53.910 22:44:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:54.200 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:27:54.200 fio-3.35 00:27:54.200 Starting 1 thread 00:27:54.200 EAL: No free 2048 kB hugepages reported on node 1 00:28:06.416 00:28:06.416 filename0: (groupid=0, jobs=1): err= 0: pid=175586: Mon Jul 15 22:44:28 2024 00:28:06.416 read: IOPS=97, BW=390KiB/s (399kB/s)(3904KiB/10012msec) 00:28:06.416 slat (nsec): min=5888, max=25543, avg=6266.39, stdev=1306.74 00:28:06.416 clat (usec): min=40838, max=43849, avg=41014.94, stdev=233.47 00:28:06.416 lat (usec): min=40844, max=43875, avg=41021.20, stdev=233.85 00:28:06.416 clat percentiles (usec): 00:28:06.416 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:28:06.416 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:28:06.416 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:28:06.416 | 99.00th=[42206], 99.50th=[42206], 99.90th=[43779], 99.95th=[43779], 00:28:06.416 | 99.99th=[43779] 00:28:06.416 bw ( KiB/s): min= 384, max= 416, per=99.50%, avg=388.80, stdev=11.72, samples=20 00:28:06.416 iops : min= 96, max= 104, avg=97.20, stdev= 2.93, samples=20 
00:28:06.416 lat (msec) : 50=100.00% 00:28:06.416 cpu : usr=94.74%, sys=5.01%, ctx=13, majf=0, minf=227 00:28:06.416 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:06.416 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:06.416 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:06.416 issued rwts: total=976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:06.416 latency : target=0, window=0, percentile=100.00%, depth=4 00:28:06.416 00:28:06.416 Run status group 0 (all jobs): 00:28:06.416 READ: bw=390KiB/s (399kB/s), 390KiB/s-390KiB/s (399kB/s-399kB/s), io=3904KiB (3998kB), run=10012-10012msec 00:28:06.416 22:44:28 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:28:06.416 22:44:28 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:28:06.416 22:44:28 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:28:06.416 22:44:28 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:28:06.416 22:44:28 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:28:06.416 22:44:28 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:28:06.416 22:44:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:06.416 22:44:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:28:06.416 22:44:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:06.416 22:44:28 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:28:06.416 22:44:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:06.416 22:44:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:28:06.416 22:44:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:06.416 00:28:06.416 real 0m11.007s 00:28:06.416 user 0m15.925s 00:28:06.416 sys 0m0.761s 00:28:06.416 22:44:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:06.416 22:44:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:28:06.416 ************************************ 00:28:06.416 END TEST fio_dif_1_default 00:28:06.416 ************************************ 00:28:06.416 22:44:28 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:28:06.416 22:44:28 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:28:06.416 22:44:28 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:28:06.416 22:44:28 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:06.416 22:44:28 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:28:06.416 ************************************ 00:28:06.416 START TEST fio_dif_1_multi_subsystems 00:28:06.416 ************************************ 00:28:06.416 22:44:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1123 -- # fio_dif_1_multi_subsystems 00:28:06.416 22:44:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:28:06.416 22:44:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:28:06.416 22:44:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:28:06.416 22:44:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:28:06.416 22:44:28 
nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:28:06.416 22:44:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:28:06.416 22:44:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:28:06.416 22:44:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:06.416 22:44:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:28:06.416 bdev_null0 00:28:06.416 22:44:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:06.416 22:44:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:28:06.416 22:44:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:06.416 22:44:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:28:06.416 22:44:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:06.416 22:44:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:28:06.416 22:44:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:06.416 22:44:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:28:06.416 22:44:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:06.416 22:44:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:28:06.416 22:44:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:06.416 22:44:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:28:06.416 [2024-07-15 22:44:28.855683] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:06.416 22:44:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:06.416 22:44:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:28:06.416 22:44:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:28:06.416 22:44:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:28:06.416 22:44:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:28:06.416 22:44:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:06.416 22:44:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:28:06.416 bdev_null1 00:28:06.416 22:44:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:06.416 22:44:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:28:06.416 22:44:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:06.416 22:44:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:28:06.416 22:44:28 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:06.416 22:44:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:28:06.417 22:44:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:06.417 22:44:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:28:06.417 22:44:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:06.417 22:44:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:06.417 22:44:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:06.417 22:44:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:28:06.417 22:44:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:06.417 22:44:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:28:06.417 22:44:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:06.417 22:44:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:28:06.417 22:44:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:06.417 22:44:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:28:06.417 22:44:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:28:06.417 22:44:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:28:06.417 22:44:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local sanitizers 00:28:06.417 22:44:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:06.417 22:44:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:28:06.417 22:44:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # shift 00:28:06.417 22:44:28 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # config=() 00:28:06.417 22:44:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local asan_lib= 00:28:06.417 22:44:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:28:06.417 22:44:28 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # local subsystem config 00:28:06.417 22:44:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:28:06.417 22:44:28 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:06.417 22:44:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:28:06.417 22:44:28 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:06.417 { 00:28:06.417 "params": { 00:28:06.417 "name": "Nvme$subsystem", 00:28:06.417 "trtype": "$TEST_TRANSPORT", 00:28:06.417 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:06.417 "adrfam": "ipv4", 00:28:06.417 "trsvcid": 
"$NVMF_PORT", 00:28:06.417 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:06.417 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:06.417 "hdgst": ${hdgst:-false}, 00:28:06.417 "ddgst": ${ddgst:-false} 00:28:06.417 }, 00:28:06.417 "method": "bdev_nvme_attach_controller" 00:28:06.417 } 00:28:06.417 EOF 00:28:06.417 )") 00:28:06.417 22:44:28 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:28:06.417 22:44:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:06.417 22:44:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libasan 00:28:06.417 22:44:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:28:06.417 22:44:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:28:06.417 22:44:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:28:06.417 22:44:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:28:06.417 22:44:28 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:06.417 22:44:28 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:06.417 { 00:28:06.417 "params": { 00:28:06.417 "name": "Nvme$subsystem", 00:28:06.417 "trtype": "$TEST_TRANSPORT", 00:28:06.417 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:06.417 "adrfam": "ipv4", 00:28:06.417 "trsvcid": "$NVMF_PORT", 00:28:06.417 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:06.417 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:06.417 "hdgst": ${hdgst:-false}, 00:28:06.417 "ddgst": ${ddgst:-false} 00:28:06.417 }, 00:28:06.417 "method": "bdev_nvme_attach_controller" 00:28:06.417 } 00:28:06.417 EOF 00:28:06.417 )") 00:28:06.417 22:44:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:28:06.417 22:44:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:28:06.417 22:44:28 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:28:06.417 22:44:28 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@556 -- # jq . 
00:28:06.417 22:44:28 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@557 -- # IFS=, 00:28:06.417 22:44:28 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:28:06.417 "params": { 00:28:06.417 "name": "Nvme0", 00:28:06.417 "trtype": "tcp", 00:28:06.417 "traddr": "10.0.0.2", 00:28:06.417 "adrfam": "ipv4", 00:28:06.417 "trsvcid": "4420", 00:28:06.417 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:06.417 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:06.417 "hdgst": false, 00:28:06.417 "ddgst": false 00:28:06.417 }, 00:28:06.417 "method": "bdev_nvme_attach_controller" 00:28:06.417 },{ 00:28:06.417 "params": { 00:28:06.417 "name": "Nvme1", 00:28:06.417 "trtype": "tcp", 00:28:06.417 "traddr": "10.0.0.2", 00:28:06.417 "adrfam": "ipv4", 00:28:06.417 "trsvcid": "4420", 00:28:06.417 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:06.417 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:06.417 "hdgst": false, 00:28:06.417 "ddgst": false 00:28:06.417 }, 00:28:06.417 "method": "bdev_nvme_attach_controller" 00:28:06.417 }' 00:28:06.417 22:44:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:28:06.417 22:44:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:28:06.417 22:44:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:28:06.417 22:44:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:06.417 22:44:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:28:06.417 22:44:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:28:06.417 22:44:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:28:06.417 22:44:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:28:06.417 22:44:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:28:06.417 22:44:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:06.417 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:28:06.417 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:28:06.417 fio-3.35 00:28:06.417 Starting 2 threads 00:28:06.417 EAL: No free 2048 kB hugepages reported on node 1 00:28:16.404 00:28:16.404 filename0: (groupid=0, jobs=1): err= 0: pid=177552: Mon Jul 15 22:44:40 2024 00:28:16.404 read: IOPS=190, BW=762KiB/s (780kB/s)(7616KiB/10001msec) 00:28:16.404 slat (nsec): min=6001, max=34722, avg=7235.01, stdev=2205.96 00:28:16.404 clat (usec): min=644, max=42634, avg=20988.74, stdev=20258.46 00:28:16.404 lat (usec): min=651, max=42669, avg=20995.98, stdev=20257.79 00:28:16.404 clat percentiles (usec): 00:28:16.404 | 1.00th=[ 660], 5.00th=[ 660], 10.00th=[ 668], 20.00th=[ 676], 00:28:16.404 | 30.00th=[ 734], 40.00th=[ 807], 50.00th=[ 1057], 60.00th=[41157], 00:28:16.404 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:28:16.404 | 99.00th=[41681], 99.50th=[42206], 99.90th=[42730], 99.95th=[42730], 00:28:16.404 | 99.99th=[42730] 00:28:16.404 
bw ( KiB/s): min= 704, max= 768, per=50.08%, avg=761.26, stdev=20.18, samples=19 00:28:16.404 iops : min= 176, max= 192, avg=190.32, stdev= 5.04, samples=19 00:28:16.404 lat (usec) : 750=32.04%, 1000=17.33% 00:28:16.404 lat (msec) : 2=0.63%, 50=50.00% 00:28:16.404 cpu : usr=97.47%, sys=2.28%, ctx=10, majf=0, minf=174 00:28:16.404 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:16.404 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:16.404 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:16.404 issued rwts: total=1904,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:16.404 latency : target=0, window=0, percentile=100.00%, depth=4 00:28:16.404 filename1: (groupid=0, jobs=1): err= 0: pid=177553: Mon Jul 15 22:44:40 2024 00:28:16.404 read: IOPS=189, BW=758KiB/s (776kB/s)(7584KiB/10003msec) 00:28:16.404 slat (nsec): min=6004, max=49580, avg=7229.03, stdev=2327.29 00:28:16.404 clat (usec): min=655, max=42131, avg=21082.19, stdev=20227.68 00:28:16.404 lat (usec): min=661, max=42138, avg=21089.42, stdev=20227.06 00:28:16.404 clat percentiles (usec): 00:28:16.404 | 1.00th=[ 660], 5.00th=[ 668], 10.00th=[ 676], 20.00th=[ 799], 00:28:16.404 | 30.00th=[ 807], 40.00th=[ 816], 50.00th=[41157], 60.00th=[41157], 00:28:16.404 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:28:16.404 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:28:16.404 | 99.99th=[42206] 00:28:16.404 bw ( KiB/s): min= 672, max= 768, per=49.95%, avg=759.58, stdev=25.78, samples=19 00:28:16.404 iops : min= 168, max= 192, avg=189.89, stdev= 6.45, samples=19 00:28:16.404 lat (usec) : 750=15.66%, 1000=33.70% 00:28:16.404 lat (msec) : 2=0.42%, 50=50.21% 00:28:16.404 cpu : usr=97.45%, sys=2.30%, ctx=13, majf=0, minf=89 00:28:16.404 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:16.404 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:16.404 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:16.404 issued rwts: total=1896,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:16.404 latency : target=0, window=0, percentile=100.00%, depth=4 00:28:16.404 00:28:16.404 Run status group 0 (all jobs): 00:28:16.404 READ: bw=1520KiB/s (1556kB/s), 758KiB/s-762KiB/s (776kB/s-780kB/s), io=14.8MiB (15.6MB), run=10001-10003msec 00:28:16.404 22:44:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:28:16.404 22:44:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:28:16.404 22:44:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:28:16.404 22:44:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:28:16.404 22:44:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:28:16.404 22:44:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:28:16.404 22:44:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:16.404 22:44:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:28:16.404 22:44:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:16.404 22:44:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:28:16.404 22:44:40 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:28:16.404 22:44:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:28:16.404 22:44:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:16.404 22:44:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:28:16.404 22:44:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:28:16.404 22:44:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:28:16.404 22:44:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:16.404 22:44:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:16.404 22:44:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:28:16.404 22:44:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:16.404 22:44:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:28:16.404 22:44:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:16.404 22:44:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:28:16.404 22:44:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:16.404 00:28:16.404 real 0m11.387s 00:28:16.404 user 0m25.985s 00:28:16.404 sys 0m0.809s 00:28:16.404 22:44:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:16.404 22:44:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:28:16.404 ************************************ 00:28:16.404 END TEST fio_dif_1_multi_subsystems 00:28:16.404 ************************************ 00:28:16.404 22:44:40 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:28:16.404 22:44:40 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:28:16.404 22:44:40 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:28:16.404 22:44:40 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:16.404 22:44:40 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:28:16.404 ************************************ 00:28:16.404 START TEST fio_dif_rand_params 00:28:16.404 ************************************ 00:28:16.404 22:44:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1123 -- # fio_dif_rand_params 00:28:16.404 22:44:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:28:16.404 22:44:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:28:16.404 22:44:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:28:16.404 22:44:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:28:16.404 22:44:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:28:16.405 22:44:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:28:16.405 22:44:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:28:16.405 22:44:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:28:16.405 22:44:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:28:16.405 22:44:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:28:16.405 22:44:40 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:28:16.405 22:44:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:28:16.405 22:44:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:28:16.405 22:44:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:16.405 22:44:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:16.405 bdev_null0 00:28:16.405 22:44:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:16.405 22:44:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:28:16.405 22:44:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:16.405 22:44:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:16.405 22:44:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:16.405 22:44:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:28:16.405 22:44:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:16.405 22:44:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:16.405 22:44:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:16.405 22:44:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:28:16.405 22:44:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:16.405 22:44:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:16.405 [2024-07-15 22:44:40.305990] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:16.405 22:44:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:16.405 22:44:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:28:16.405 22:44:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:28:16.405 22:44:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:28:16.405 22:44:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:16.405 22:44:40 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:28:16.405 22:44:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:16.405 22:44:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:28:16.405 22:44:40 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:28:16.405 22:44:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:28:16.405 22:44:40 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:16.405 22:44:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:28:16.405 22:44:40 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # 
config+=("$(cat <<-EOF 00:28:16.405 { 00:28:16.405 "params": { 00:28:16.405 "name": "Nvme$subsystem", 00:28:16.405 "trtype": "$TEST_TRANSPORT", 00:28:16.405 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:16.405 "adrfam": "ipv4", 00:28:16.405 "trsvcid": "$NVMF_PORT", 00:28:16.405 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:16.405 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:16.405 "hdgst": ${hdgst:-false}, 00:28:16.405 "ddgst": ${ddgst:-false} 00:28:16.405 }, 00:28:16.405 "method": "bdev_nvme_attach_controller" 00:28:16.405 } 00:28:16.405 EOF 00:28:16.405 )") 00:28:16.405 22:44:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:16.405 22:44:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:28:16.405 22:44:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:28:16.405 22:44:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:28:16.405 22:44:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:28:16.405 22:44:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:28:16.405 22:44:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:28:16.405 22:44:40 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:28:16.405 22:44:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:16.405 22:44:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:28:16.405 22:44:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:28:16.405 22:44:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:28:16.405 22:44:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:28:16.405 22:44:40 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
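The gen_nvmf_target_json trace above shows how the fio-side bdev config is assembled: one heredoc fragment per subsystem is appended to a bash array, then, in the IFS=, and printf steps that follow, the fragments are comma-joined and jq pretty-prints the result. Reduced to its essentials (the real helper also embeds these entries in a larger config envelope that is not visible in this trace):

config=()
for subsystem in "${@:-1}"; do
    config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
    )")
done
# Comma-join the fragments; the subshell keeps the IFS change local.
(IFS=,; printf '%s\n' "${config[*]}")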
00:28:16.405 22:44:40 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:28:16.405 22:44:40 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:28:16.405 "params": { 00:28:16.405 "name": "Nvme0", 00:28:16.405 "trtype": "tcp", 00:28:16.405 "traddr": "10.0.0.2", 00:28:16.405 "adrfam": "ipv4", 00:28:16.405 "trsvcid": "4420", 00:28:16.405 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:16.405 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:16.405 "hdgst": false, 00:28:16.405 "ddgst": false 00:28:16.405 }, 00:28:16.405 "method": "bdev_nvme_attach_controller" 00:28:16.405 }' 00:28:16.405 22:44:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:28:16.405 22:44:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:28:16.405 22:44:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:28:16.405 22:44:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:16.405 22:44:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:28:16.405 22:44:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:28:16.405 22:44:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:28:16.405 22:44:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:28:16.405 22:44:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:28:16.405 22:44:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:16.971 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:28:16.971 ... 
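The Nvme0 controller in that config attaches to the target-side subsystem created just before the fio run. rpc_cmd in the trace wraps scripts/rpc.py, so the setup for this test case (DIF type 3 on a null bdev) is equivalent to four standalone RPC calls:

# 64 MB null bdev, 512-byte blocks + 16-byte metadata, protection type 3
scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 \
    --serial-number 53313233-0 --allow-any-host
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
    -t tcp -a 10.0.0.2 -s 4420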
00:28:16.971 fio-3.35 00:28:16.971 Starting 3 threads 00:28:16.971 EAL: No free 2048 kB hugepages reported on node 1 00:28:23.530 00:28:23.530 filename0: (groupid=0, jobs=1): err= 0: pid=179514: Mon Jul 15 22:44:46 2024 00:28:23.530 read: IOPS=251, BW=31.4MiB/s (32.9MB/s)(158MiB/5039msec) 00:28:23.530 slat (nsec): min=6246, max=95051, avg=11629.77, stdev=3610.88 00:28:23.530 clat (usec): min=3869, max=91301, avg=11934.47, stdev=14041.59 00:28:23.530 lat (usec): min=3878, max=91315, avg=11946.10, stdev=14041.56 00:28:23.530 clat percentiles (usec): 00:28:23.530 | 1.00th=[ 4293], 5.00th=[ 4555], 10.00th=[ 4948], 20.00th=[ 5669], 00:28:23.530 | 30.00th=[ 6128], 40.00th=[ 6390], 50.00th=[ 6718], 60.00th=[ 7373], 00:28:23.530 | 70.00th=[ 8029], 80.00th=[ 8717], 90.00th=[46924], 95.00th=[48497], 00:28:23.530 | 99.00th=[50070], 99.50th=[50594], 99.90th=[90702], 99.95th=[91751], 00:28:23.530 | 99.99th=[91751] 00:28:23.530 bw ( KiB/s): min=23296, max=47616, per=31.82%, avg=32307.20, stdev=6912.32, samples=10 00:28:23.530 iops : min= 182, max= 372, avg=252.40, stdev=54.00, samples=10 00:28:23.530 lat (msec) : 4=0.32%, 10=86.96%, 20=0.32%, 50=11.46%, 100=0.95% 00:28:23.530 cpu : usr=95.97%, sys=3.39%, ctx=14, majf=0, minf=117 00:28:23.530 IO depths : 1=1.2%, 2=98.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:23.530 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:23.530 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:23.530 issued rwts: total=1265,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:23.530 latency : target=0, window=0, percentile=100.00%, depth=3 00:28:23.530 filename0: (groupid=0, jobs=1): err= 0: pid=179515: Mon Jul 15 22:44:46 2024 00:28:23.530 read: IOPS=264, BW=33.1MiB/s (34.7MB/s)(167MiB/5045msec) 00:28:23.530 slat (nsec): min=6224, max=32599, avg=9233.80, stdev=2632.26 00:28:23.530 clat (usec): min=3745, max=92694, avg=11301.32, stdev=13440.50 00:28:23.530 lat (usec): min=3751, max=92707, avg=11310.56, stdev=13440.80 00:28:23.530 clat percentiles (usec): 00:28:23.530 | 1.00th=[ 4178], 5.00th=[ 4490], 10.00th=[ 4752], 20.00th=[ 5342], 00:28:23.530 | 30.00th=[ 6128], 40.00th=[ 6521], 50.00th=[ 6915], 60.00th=[ 7504], 00:28:23.530 | 70.00th=[ 8586], 80.00th=[ 9503], 90.00th=[12387], 95.00th=[48497], 00:28:23.530 | 99.00th=[51643], 99.50th=[53216], 99.90th=[91751], 99.95th=[92799], 00:28:23.530 | 99.99th=[92799] 00:28:23.530 bw ( KiB/s): min=21760, max=50944, per=33.56%, avg=34073.60, stdev=9865.10, samples=10 00:28:23.530 iops : min= 170, max= 398, avg=266.20, stdev=77.07, samples=10 00:28:23.530 lat (msec) : 4=0.07%, 10=83.21%, 20=6.90%, 50=6.52%, 100=3.30% 00:28:23.530 cpu : usr=96.43%, sys=3.23%, ctx=11, majf=0, minf=75 00:28:23.530 IO depths : 1=1.4%, 2=98.6%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:23.530 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:23.530 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:23.530 issued rwts: total=1334,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:23.530 latency : target=0, window=0, percentile=100.00%, depth=3 00:28:23.530 filename0: (groupid=0, jobs=1): err= 0: pid=179516: Mon Jul 15 22:44:46 2024 00:28:23.530 read: IOPS=278, BW=34.8MiB/s (36.5MB/s)(176MiB/5046msec) 00:28:23.530 slat (nsec): min=6271, max=32965, avg=9463.86, stdev=2549.07 00:28:23.530 clat (usec): min=3852, max=53619, avg=10737.77, stdev=11863.60 00:28:23.530 lat (usec): min=3859, max=53627, avg=10747.24, stdev=11863.76 00:28:23.530 clat 
percentiles (usec): 00:28:23.530 | 1.00th=[ 4080], 5.00th=[ 4555], 10.00th=[ 4817], 20.00th=[ 5669], 00:28:23.530 | 30.00th=[ 6325], 40.00th=[ 6718], 50.00th=[ 7111], 60.00th=[ 7635], 00:28:23.530 | 70.00th=[ 8455], 80.00th=[ 9503], 90.00th=[11731], 95.00th=[48497], 00:28:23.530 | 99.00th=[51119], 99.50th=[52167], 99.90th=[52167], 99.95th=[53740], 00:28:23.530 | 99.99th=[53740] 00:28:23.530 bw ( KiB/s): min=21504, max=49920, per=35.33%, avg=35871.90, stdev=8227.72, samples=10 00:28:23.530 iops : min= 168, max= 390, avg=280.20, stdev=64.31, samples=10 00:28:23.530 lat (msec) : 4=0.43%, 10=83.97%, 20=7.12%, 50=5.84%, 100=2.64% 00:28:23.530 cpu : usr=96.10%, sys=3.53%, ctx=13, majf=0, minf=90 00:28:23.530 IO depths : 1=1.5%, 2=98.5%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:23.530 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:23.530 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:23.530 issued rwts: total=1404,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:23.530 latency : target=0, window=0, percentile=100.00%, depth=3 00:28:23.530 00:28:23.530 Run status group 0 (all jobs): 00:28:23.530 READ: bw=99.2MiB/s (104MB/s), 31.4MiB/s-34.8MiB/s (32.9MB/s-36.5MB/s), io=500MiB (525MB), run=5039-5046msec 00:28:23.530 22:44:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:28:23.530 22:44:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:28:23.531 22:44:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:28:23.531 22:44:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:28:23.531 22:44:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:28:23.531 22:44:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:28:23.531 22:44:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:23.531 22:44:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:23.531 22:44:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:23.531 22:44:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:28:23.531 22:44:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:23.531 22:44:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:23.531 22:44:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:23.531 22:44:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:28:23.531 22:44:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:28:23.531 22:44:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:28:23.531 22:44:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:28:23.531 22:44:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:28:23.531 22:44:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:28:23.531 22:44:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:28:23.531 22:44:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:28:23.531 22:44:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:28:23.531 22:44:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:28:23.531 22:44:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- 
# local sub_id=0 00:28:23.531 22:44:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:28:23.531 22:44:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:23.531 22:44:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:23.531 bdev_null0 00:28:23.531 22:44:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:23.531 22:44:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:28:23.531 22:44:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:23.531 22:44:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:23.531 22:44:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:23.531 22:44:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:28:23.531 22:44:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:23.531 22:44:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:23.531 22:44:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:23.531 22:44:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:28:23.531 22:44:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:23.531 22:44:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:23.531 [2024-07-15 22:44:46.482494] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:23.531 22:44:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:23.531 22:44:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:28:23.531 22:44:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:28:23.531 22:44:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:28:23.531 22:44:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:28:23.531 22:44:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:23.531 22:44:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:23.531 bdev_null1 00:28:23.531 22:44:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:23.531 22:44:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:28:23.531 22:44:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:23.531 22:44:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:23.531 22:44:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:23.531 22:44:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:28:23.531 22:44:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:23.531 22:44:46 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@10 -- # set +x 00:28:23.531 22:44:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:23.531 22:44:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:23.531 22:44:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:23.531 22:44:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:23.531 22:44:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:23.531 22:44:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:28:23.531 22:44:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:28:23.531 22:44:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:28:23.531 22:44:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:28:23.531 22:44:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:23.531 22:44:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:23.531 bdev_null2 00:28:23.531 22:44:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:23.531 22:44:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:28:23.531 22:44:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:23.531 22:44:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:23.531 22:44:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:23.531 22:44:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:28:23.531 22:44:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:23.531 22:44:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:23.531 22:44:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:23.531 22:44:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:28:23.531 22:44:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:23.531 22:44:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:23.531 22:44:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:23.531 22:44:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:28:23.531 22:44:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:23.532 22:44:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:23.532 22:44:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:28:23.532 22:44:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:28:23.532 22:44:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # 
create_json_sub_conf 0 1 2 00:28:23.532 22:44:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:28:23.532 22:44:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:23.532 22:44:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:28:23.532 22:44:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:28:23.532 22:44:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:28:23.532 22:44:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:28:23.532 22:44:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:28:23.532 22:44:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:28:23.532 22:44:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:28:23.532 22:44:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:28:23.532 22:44:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:23.532 22:44:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:28:23.532 22:44:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:23.532 { 00:28:23.532 "params": { 00:28:23.532 "name": "Nvme$subsystem", 00:28:23.532 "trtype": "$TEST_TRANSPORT", 00:28:23.532 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:23.532 "adrfam": "ipv4", 00:28:23.532 "trsvcid": "$NVMF_PORT", 00:28:23.532 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:23.532 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:23.532 "hdgst": ${hdgst:-false}, 00:28:23.532 "ddgst": ${ddgst:-false} 00:28:23.532 }, 00:28:23.532 "method": "bdev_nvme_attach_controller" 00:28:23.532 } 00:28:23.532 EOF 00:28:23.532 )") 00:28:23.532 22:44:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:23.532 22:44:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:28:23.532 22:44:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:28:23.532 22:44:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:28:23.532 22:44:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:28:23.532 22:44:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:28:23.532 22:44:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:28:23.532 22:44:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:23.532 22:44:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:23.532 { 00:28:23.532 "params": { 00:28:23.532 "name": "Nvme$subsystem", 00:28:23.532 "trtype": "$TEST_TRANSPORT", 00:28:23.532 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:23.532 "adrfam": "ipv4", 00:28:23.532 "trsvcid": "$NVMF_PORT", 00:28:23.532 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:23.532 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:23.532 "hdgst": ${hdgst:-false}, 00:28:23.532 "ddgst": ${ddgst:-false} 00:28:23.532 }, 00:28:23.532 "method": "bdev_nvme_attach_controller" 00:28:23.532 } 00:28:23.532 EOF 00:28:23.532 )") 00:28:23.532 22:44:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:28:23.532 22:44:46 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:28:23.532 22:44:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:28:23.532 22:44:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:28:23.532 22:44:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:28:23.532 22:44:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:28:23.532 22:44:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:23.532 22:44:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:23.532 { 00:28:23.532 "params": { 00:28:23.532 "name": "Nvme$subsystem", 00:28:23.532 "trtype": "$TEST_TRANSPORT", 00:28:23.532 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:23.532 "adrfam": "ipv4", 00:28:23.532 "trsvcid": "$NVMF_PORT", 00:28:23.532 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:23.532 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:23.532 "hdgst": ${hdgst:-false}, 00:28:23.532 "ddgst": ${ddgst:-false} 00:28:23.532 }, 00:28:23.532 "method": "bdev_nvme_attach_controller" 00:28:23.532 } 00:28:23.532 EOF 00:28:23.532 )") 00:28:23.532 22:44:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:28:23.532 22:44:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:28:23.532 22:44:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:28:23.532 22:44:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:28:23.532 "params": { 00:28:23.532 "name": "Nvme0", 00:28:23.532 "trtype": "tcp", 00:28:23.532 "traddr": "10.0.0.2", 00:28:23.532 "adrfam": "ipv4", 00:28:23.532 "trsvcid": "4420", 00:28:23.532 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:23.532 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:23.532 "hdgst": false, 00:28:23.532 "ddgst": false 00:28:23.532 }, 00:28:23.532 "method": "bdev_nvme_attach_controller" 00:28:23.532 },{ 00:28:23.532 "params": { 00:28:23.532 "name": "Nvme1", 00:28:23.532 "trtype": "tcp", 00:28:23.532 "traddr": "10.0.0.2", 00:28:23.532 "adrfam": "ipv4", 00:28:23.532 "trsvcid": "4420", 00:28:23.532 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:23.532 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:23.532 "hdgst": false, 00:28:23.532 "ddgst": false 00:28:23.532 }, 00:28:23.532 "method": "bdev_nvme_attach_controller" 00:28:23.532 },{ 00:28:23.532 "params": { 00:28:23.532 "name": "Nvme2", 00:28:23.532 "trtype": "tcp", 00:28:23.532 "traddr": "10.0.0.2", 00:28:23.532 "adrfam": "ipv4", 00:28:23.532 "trsvcid": "4420", 00:28:23.532 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:28:23.532 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:28:23.532 "hdgst": false, 00:28:23.532 "ddgst": false 00:28:23.532 }, 00:28:23.532 "method": "bdev_nvme_attach_controller" 00:28:23.532 }' 00:28:23.533 22:44:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:28:23.533 22:44:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:28:23.533 22:44:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:28:23.533 22:44:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:23.533 22:44:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:28:23.533 22:44:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:28:23.533 
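With the three-controller config printed in full, the only steps left before the fio banner are the sanitizer probe and the launch itself: the wrapper runs ldd on the plugin looking for libasan or libclang_rt.asan (the third ldd column is the resolved library path), preloads whatever it finds ahead of the plugin so the uninstrumented fio binary can host sanitized SPDK code, and hands fio the JSON plus the generated job file on anonymous descriptors. The helper bodies are not part of this trace, so the sketch below is a plausible reconstruction rather than the exact code (the fd redirections in particular are assumed):

plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev
asan_lib=
for sanitizer in libasan libclang_rt.asan; do
    # Pick the resolved runtime path out of the plugin's ldd output, if any.
    asan_lib=$(ldd "$plugin" | grep "$sanitizer" | awk '{print $3}')
    [[ -n $asan_lib ]] && break
done
# Both probes come back empty in this run, which is why the traced LD_PRELOAD
# value carries a leading space: only the plugin itself ends up preloaded.
LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio \
    --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 \
    62< <(create_json_sub_conf 0 1 2) \
    61< <(gen_fio_conf)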
22:44:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:28:23.533 22:44:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:28:23.533 22:44:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:28:23.533 22:44:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:23.533 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:28:23.533 ... 00:28:23.533 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:28:23.533 ... 00:28:23.533 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:28:23.533 ... 00:28:23.533 fio-3.35 00:28:23.533 Starting 24 threads 00:28:23.533 EAL: No free 2048 kB hugepages reported on node 1 00:28:35.715 00:28:35.715 filename0: (groupid=0, jobs=1): err= 0: pid=180568: Mon Jul 15 22:44:57 2024 00:28:35.715 read: IOPS=567, BW=2269KiB/s (2323kB/s)(22.2MiB/10015msec) 00:28:35.715 slat (nsec): min=7262, max=95614, avg=47884.37, stdev=17220.12 00:28:35.715 clat (usec): min=18877, max=63344, avg=27817.44, stdev=1991.01 00:28:35.715 lat (usec): min=18919, max=63381, avg=27865.33, stdev=1989.40 00:28:35.715 clat percentiles (usec): 00:28:35.715 | 1.00th=[26608], 5.00th=[27132], 10.00th=[27132], 20.00th=[27395], 00:28:35.715 | 30.00th=[27395], 40.00th=[27657], 50.00th=[27657], 60.00th=[27919], 00:28:35.715 | 70.00th=[27919], 80.00th=[28181], 90.00th=[28181], 95.00th=[28443], 00:28:35.715 | 99.00th=[28967], 99.50th=[29230], 99.90th=[63177], 99.95th=[63177], 00:28:35.715 | 99.99th=[63177] 00:28:35.716 bw ( KiB/s): min= 2048, max= 2304, per=4.17%, avg=2265.60, stdev=73.12, samples=20 00:28:35.716 iops : min= 512, max= 576, avg=566.40, stdev=18.28, samples=20 00:28:35.716 lat (msec) : 20=0.25%, 50=99.47%, 100=0.28% 00:28:35.716 cpu : usr=99.17%, sys=0.46%, ctx=31, majf=0, minf=68 00:28:35.716 IO depths : 1=6.2%, 2=12.4%, 4=24.9%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:28:35.716 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:35.716 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:35.716 issued rwts: total=5680,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:35.716 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:35.716 filename0: (groupid=0, jobs=1): err= 0: pid=180569: Mon Jul 15 22:44:57 2024 00:28:35.716 read: IOPS=572, BW=2290KiB/s (2345kB/s)(22.4MiB/10007msec) 00:28:35.716 slat (nsec): min=7093, max=47734, avg=15805.11, stdev=5862.06 00:28:35.716 clat (usec): min=4940, max=37085, avg=27821.02, stdev=2142.62 00:28:35.716 lat (usec): min=4967, max=37128, avg=27836.82, stdev=2142.64 00:28:35.716 clat percentiles (usec): 00:28:35.716 | 1.00th=[14877], 5.00th=[27395], 10.00th=[27395], 20.00th=[27657], 00:28:35.716 | 30.00th=[27919], 40.00th=[27919], 50.00th=[27919], 60.00th=[28181], 00:28:35.716 | 70.00th=[28181], 80.00th=[28181], 90.00th=[28443], 95.00th=[28705], 00:28:35.716 | 99.00th=[29230], 99.50th=[34341], 99.90th=[36963], 99.95th=[36963], 00:28:35.716 | 99.99th=[36963] 00:28:35.716 bw ( KiB/s): min= 2176, max= 2560, per=4.20%, avg=2284.80, stdev=85.87, samples=20 00:28:35.716 iops : min= 544, max= 640, avg=571.20, stdev=21.47, samples=20 
00:28:35.716 lat (msec) : 10=0.56%, 20=0.56%, 50=98.88% 00:28:35.716 cpu : usr=98.82%, sys=0.77%, ctx=15, majf=0, minf=37 00:28:35.716 IO depths : 1=6.1%, 2=12.3%, 4=24.8%, 8=50.4%, 16=6.4%, 32=0.0%, >=64=0.0% 00:28:35.716 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:35.716 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:35.716 issued rwts: total=5728,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:35.716 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:35.716 filename0: (groupid=0, jobs=1): err= 0: pid=180570: Mon Jul 15 22:44:57 2024 00:28:35.716 read: IOPS=572, BW=2288KiB/s (2343kB/s)(22.4MiB/10013msec) 00:28:35.716 slat (nsec): min=3223, max=97523, avg=29501.50, stdev=22150.32 00:28:35.716 clat (usec): min=4821, max=41352, avg=27758.81, stdev=2205.53 00:28:35.716 lat (usec): min=4836, max=41371, avg=27788.31, stdev=2205.12 00:28:35.716 clat percentiles (usec): 00:28:35.716 | 1.00th=[19268], 5.00th=[27132], 10.00th=[27395], 20.00th=[27657], 00:28:35.716 | 30.00th=[27657], 40.00th=[27919], 50.00th=[27919], 60.00th=[28181], 00:28:35.716 | 70.00th=[28181], 80.00th=[28181], 90.00th=[28443], 95.00th=[28705], 00:28:35.716 | 99.00th=[33162], 99.50th=[34866], 99.90th=[37487], 99.95th=[37487], 00:28:35.716 | 99.99th=[41157] 00:28:35.716 bw ( KiB/s): min= 2176, max= 2432, per=4.20%, avg=2284.80, stdev=62.64, samples=20 00:28:35.716 iops : min= 544, max= 608, avg=571.20, stdev=15.66, samples=20 00:28:35.716 lat (msec) : 10=0.56%, 20=0.56%, 50=98.88% 00:28:35.716 cpu : usr=98.67%, sys=0.93%, ctx=17, majf=0, minf=45 00:28:35.716 IO depths : 1=5.8%, 2=11.7%, 4=24.3%, 8=51.5%, 16=6.7%, 32=0.0%, >=64=0.0% 00:28:35.716 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:35.716 complete : 0=0.0%, 4=94.0%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:35.716 issued rwts: total=5728,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:35.716 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:35.716 filename0: (groupid=0, jobs=1): err= 0: pid=180571: Mon Jul 15 22:44:57 2024 00:28:35.716 read: IOPS=567, BW=2271KiB/s (2326kB/s)(22.2MiB/10003msec) 00:28:35.716 slat (nsec): min=6956, max=42601, avg=17985.94, stdev=5458.21 00:28:35.716 clat (usec): min=20122, max=56124, avg=28024.79, stdev=1452.33 00:28:35.716 lat (usec): min=20131, max=56144, avg=28042.78, stdev=1452.29 00:28:35.716 clat percentiles (usec): 00:28:35.716 | 1.00th=[27132], 5.00th=[27132], 10.00th=[27395], 20.00th=[27657], 00:28:35.716 | 30.00th=[27919], 40.00th=[27919], 50.00th=[27919], 60.00th=[28181], 00:28:35.716 | 70.00th=[28181], 80.00th=[28181], 90.00th=[28443], 95.00th=[28705], 00:28:35.716 | 99.00th=[29230], 99.50th=[30802], 99.90th=[50070], 99.95th=[55837], 00:28:35.716 | 99.99th=[56361] 00:28:35.716 bw ( KiB/s): min= 2048, max= 2304, per=4.16%, avg=2263.58, stdev=74.55, samples=19 00:28:35.716 iops : min= 512, max= 576, avg=565.89, stdev=18.64, samples=19 00:28:35.716 lat (msec) : 50=99.72%, 100=0.28% 00:28:35.716 cpu : usr=98.76%, sys=0.84%, ctx=17, majf=0, minf=43 00:28:35.716 IO depths : 1=6.2%, 2=12.4%, 4=24.9%, 8=50.3%, 16=6.3%, 32=0.0%, >=64=0.0% 00:28:35.716 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:35.716 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:35.716 issued rwts: total=5680,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:35.716 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:35.716 filename0: (groupid=0, jobs=1): err= 0: pid=180572: Mon 
Jul 15 22:44:57 2024 00:28:35.716 read: IOPS=567, BW=2269KiB/s (2324kB/s)(22.2MiB/10012msec) 00:28:35.716 slat (nsec): min=8127, max=98027, avg=47804.50, stdev=21928.91 00:28:35.716 clat (usec): min=20917, max=56583, avg=27834.57, stdev=1639.59 00:28:35.716 lat (usec): min=20925, max=56604, avg=27882.37, stdev=1636.51 00:28:35.716 clat percentiles (usec): 00:28:35.716 | 1.00th=[26608], 5.00th=[26870], 10.00th=[27132], 20.00th=[27395], 00:28:35.716 | 30.00th=[27657], 40.00th=[27657], 50.00th=[27919], 60.00th=[27919], 00:28:35.716 | 70.00th=[27919], 80.00th=[28181], 90.00th=[28181], 95.00th=[28443], 00:28:35.716 | 99.00th=[28967], 99.50th=[29492], 99.90th=[56361], 99.95th=[56361], 00:28:35.716 | 99.99th=[56361] 00:28:35.716 bw ( KiB/s): min= 2048, max= 2304, per=4.17%, avg=2265.60, stdev=73.12, samples=20 00:28:35.716 iops : min= 512, max= 576, avg=566.40, stdev=18.28, samples=20 00:28:35.716 lat (msec) : 50=99.72%, 100=0.28% 00:28:35.716 cpu : usr=98.79%, sys=0.82%, ctx=14, majf=0, minf=55 00:28:35.716 IO depths : 1=6.2%, 2=12.5%, 4=24.9%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:28:35.716 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:35.716 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:35.716 issued rwts: total=5680,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:35.716 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:35.716 filename0: (groupid=0, jobs=1): err= 0: pid=180573: Mon Jul 15 22:44:57 2024 00:28:35.716 read: IOPS=567, BW=2269KiB/s (2323kB/s)(22.2MiB/10015msec) 00:28:35.716 slat (nsec): min=6858, max=70283, avg=19793.02, stdev=8739.91 00:28:35.716 clat (usec): min=13758, max=63322, avg=28026.18, stdev=2074.52 00:28:35.716 lat (usec): min=13766, max=63348, avg=28045.97, stdev=2074.29 00:28:35.716 clat percentiles (usec): 00:28:35.716 | 1.00th=[26870], 5.00th=[27132], 10.00th=[27395], 20.00th=[27657], 00:28:35.716 | 30.00th=[27919], 40.00th=[27919], 50.00th=[27919], 60.00th=[27919], 00:28:35.716 | 70.00th=[28181], 80.00th=[28181], 90.00th=[28443], 95.00th=[28705], 00:28:35.716 | 99.00th=[28967], 99.50th=[28967], 99.90th=[63177], 99.95th=[63177], 00:28:35.716 | 99.99th=[63177] 00:28:35.716 bw ( KiB/s): min= 2048, max= 2304, per=4.17%, avg=2265.60, stdev=73.12, samples=20 00:28:35.716 iops : min= 512, max= 576, avg=566.40, stdev=18.28, samples=20 00:28:35.716 lat (msec) : 20=0.14%, 50=99.58%, 100=0.28% 00:28:35.716 cpu : usr=98.92%, sys=0.69%, ctx=13, majf=0, minf=46 00:28:35.716 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:28:35.716 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:35.716 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:35.716 issued rwts: total=5680,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:35.716 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:35.716 filename0: (groupid=0, jobs=1): err= 0: pid=180574: Mon Jul 15 22:44:57 2024 00:28:35.716 read: IOPS=572, BW=2289KiB/s (2344kB/s)(22.4MiB/10011msec) 00:28:35.716 slat (nsec): min=3233, max=97255, avg=40406.08, stdev=23163.43 00:28:35.716 clat (usec): min=4820, max=41282, avg=27668.62, stdev=2068.62 00:28:35.716 lat (usec): min=4835, max=41302, avg=27709.03, stdev=2068.62 00:28:35.716 clat percentiles (usec): 00:28:35.716 | 1.00th=[19006], 5.00th=[26870], 10.00th=[27132], 20.00th=[27395], 00:28:35.716 | 30.00th=[27657], 40.00th=[27919], 50.00th=[27919], 60.00th=[27919], 00:28:35.716 | 70.00th=[28181], 80.00th=[28181], 90.00th=[28181], 
95.00th=[28443], 00:28:35.716 | 99.00th=[28967], 99.50th=[29230], 99.90th=[37487], 99.95th=[37487], 00:28:35.716 | 99.99th=[41157] 00:28:35.716 bw ( KiB/s): min= 2176, max= 2432, per=4.20%, avg=2284.80, stdev=62.64, samples=20 00:28:35.716 iops : min= 544, max= 608, avg=571.20, stdev=15.66, samples=20 00:28:35.716 lat (msec) : 10=0.56%, 20=0.56%, 50=98.88% 00:28:35.716 cpu : usr=98.95%, sys=0.66%, ctx=13, majf=0, minf=46 00:28:35.716 IO depths : 1=6.2%, 2=12.4%, 4=24.9%, 8=50.2%, 16=6.3%, 32=0.0%, >=64=0.0% 00:28:35.716 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:35.716 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:35.716 issued rwts: total=5728,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:35.716 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:35.716 filename0: (groupid=0, jobs=1): err= 0: pid=180575: Mon Jul 15 22:44:57 2024 00:28:35.716 read: IOPS=568, BW=2274KiB/s (2328kB/s)(22.2MiB/10006msec) 00:28:35.716 slat (nsec): min=5896, max=69570, avg=18832.47, stdev=9094.78 00:28:35.716 clat (usec): min=11679, max=56682, avg=27988.97, stdev=2571.92 00:28:35.716 lat (usec): min=11690, max=56698, avg=28007.81, stdev=2571.65 00:28:35.716 clat percentiles (usec): 00:28:35.716 | 1.00th=[19792], 5.00th=[27132], 10.00th=[27395], 20.00th=[27657], 00:28:35.716 | 30.00th=[27919], 40.00th=[27919], 50.00th=[27919], 60.00th=[28181], 00:28:35.717 | 70.00th=[28181], 80.00th=[28181], 90.00th=[28443], 95.00th=[28967], 00:28:35.717 | 99.00th=[35390], 99.50th=[46400], 99.90th=[56886], 99.95th=[56886], 00:28:35.717 | 99.99th=[56886] 00:28:35.717 bw ( KiB/s): min= 2048, max= 2336, per=4.17%, avg=2266.95, stdev=71.76, samples=19 00:28:35.717 iops : min= 512, max= 584, avg=566.74, stdev=17.94, samples=19 00:28:35.717 lat (msec) : 20=1.21%, 50=98.51%, 100=0.28% 00:28:35.717 cpu : usr=98.91%, sys=0.71%, ctx=13, majf=0, minf=61 00:28:35.717 IO depths : 1=3.2%, 2=8.6%, 4=22.1%, 8=56.2%, 16=9.9%, 32=0.0%, >=64=0.0% 00:28:35.717 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:35.717 complete : 0=0.0%, 4=93.6%, 8=1.1%, 16=5.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:35.717 issued rwts: total=5688,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:35.717 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:35.717 filename1: (groupid=0, jobs=1): err= 0: pid=180576: Mon Jul 15 22:44:57 2024 00:28:35.717 read: IOPS=567, BW=2269KiB/s (2323kB/s)(22.2MiB/10015msec) 00:28:35.717 slat (nsec): min=6920, max=54957, avg=19115.74, stdev=5951.64 00:28:35.717 clat (usec): min=14259, max=63270, avg=28035.17, stdev=1968.15 00:28:35.717 lat (usec): min=14268, max=63293, avg=28054.28, stdev=1968.19 00:28:35.717 clat percentiles (usec): 00:28:35.717 | 1.00th=[27132], 5.00th=[27132], 10.00th=[27395], 20.00th=[27657], 00:28:35.717 | 30.00th=[27919], 40.00th=[27919], 50.00th=[27919], 60.00th=[27919], 00:28:35.717 | 70.00th=[28181], 80.00th=[28181], 90.00th=[28443], 95.00th=[28705], 00:28:35.717 | 99.00th=[28967], 99.50th=[28967], 99.90th=[63177], 99.95th=[63177], 00:28:35.717 | 99.99th=[63177] 00:28:35.717 bw ( KiB/s): min= 2048, max= 2304, per=4.17%, avg=2265.60, stdev=73.12, samples=20 00:28:35.717 iops : min= 512, max= 576, avg=566.40, stdev=18.28, samples=20 00:28:35.717 lat (msec) : 20=0.04%, 50=99.68%, 100=0.28% 00:28:35.717 cpu : usr=98.53%, sys=0.94%, ctx=41, majf=0, minf=38 00:28:35.717 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:28:35.717 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
64=0.0%, >=64=0.0% 00:28:35.717 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:35.717 issued rwts: total=5680,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:35.717 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:35.717 filename1: (groupid=0, jobs=1): err= 0: pid=180577: Mon Jul 15 22:44:57 2024 00:28:35.717 read: IOPS=567, BW=2269KiB/s (2324kB/s)(22.2MiB/10012msec) 00:28:35.717 slat (nsec): min=8952, max=97430, avg=50697.40, stdev=19658.42 00:28:35.717 clat (usec): min=22205, max=56534, avg=27751.47, stdev=1608.86 00:28:35.717 lat (usec): min=22260, max=56555, avg=27802.17, stdev=1607.56 00:28:35.717 clat percentiles (usec): 00:28:35.717 | 1.00th=[26608], 5.00th=[26870], 10.00th=[27132], 20.00th=[27395], 00:28:35.717 | 30.00th=[27395], 40.00th=[27657], 50.00th=[27657], 60.00th=[27919], 00:28:35.717 | 70.00th=[27919], 80.00th=[28181], 90.00th=[28181], 95.00th=[28443], 00:28:35.717 | 99.00th=[28705], 99.50th=[28967], 99.90th=[56361], 99.95th=[56361], 00:28:35.717 | 99.99th=[56361] 00:28:35.717 bw ( KiB/s): min= 2048, max= 2304, per=4.17%, avg=2265.60, stdev=73.12, samples=20 00:28:35.717 iops : min= 512, max= 576, avg=566.40, stdev=18.28, samples=20 00:28:35.717 lat (msec) : 50=99.72%, 100=0.28% 00:28:35.717 cpu : usr=98.28%, sys=1.02%, ctx=101, majf=0, minf=48 00:28:35.717 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:28:35.717 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:35.717 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:35.717 issued rwts: total=5680,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:35.717 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:35.717 filename1: (groupid=0, jobs=1): err= 0: pid=180578: Mon Jul 15 22:44:57 2024 00:28:35.717 read: IOPS=572, BW=2289KiB/s (2344kB/s)(22.4MiB/10008msec) 00:28:35.717 slat (nsec): min=6513, max=97771, avg=24183.35, stdev=20134.46 00:28:35.717 clat (usec): min=4729, max=41272, avg=27783.77, stdev=2415.41 00:28:35.717 lat (usec): min=4746, max=41291, avg=27807.95, stdev=2415.50 00:28:35.717 clat percentiles (usec): 00:28:35.717 | 1.00th=[19006], 5.00th=[27132], 10.00th=[27395], 20.00th=[27657], 00:28:35.717 | 30.00th=[27919], 40.00th=[27919], 50.00th=[27919], 60.00th=[28181], 00:28:35.717 | 70.00th=[28181], 80.00th=[28181], 90.00th=[28443], 95.00th=[28705], 00:28:35.717 | 99.00th=[33162], 99.50th=[35390], 99.90th=[37487], 99.95th=[37487], 00:28:35.717 | 99.99th=[41157] 00:28:35.717 bw ( KiB/s): min= 2176, max= 2560, per=4.20%, avg=2284.80, stdev=87.27, samples=20 00:28:35.717 iops : min= 544, max= 640, avg=571.20, stdev=21.82, samples=20 00:28:35.717 lat (msec) : 10=0.84%, 20=0.24%, 50=98.92% 00:28:35.717 cpu : usr=98.68%, sys=0.94%, ctx=14, majf=0, minf=46 00:28:35.717 IO depths : 1=5.4%, 2=11.1%, 4=23.3%, 8=53.1%, 16=7.1%, 32=0.0%, >=64=0.0% 00:28:35.717 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:35.717 complete : 0=0.0%, 4=93.7%, 8=0.5%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:35.717 issued rwts: total=5728,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:35.717 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:35.717 filename1: (groupid=0, jobs=1): err= 0: pid=180579: Mon Jul 15 22:44:57 2024 00:28:35.717 read: IOPS=568, BW=2276KiB/s (2330kB/s)(22.2MiB/10008msec) 00:28:35.717 slat (nsec): min=5773, max=97839, avg=49640.33, stdev=21363.55 00:28:35.717 clat (usec): min=8657, max=51769, avg=27685.83, stdev=2010.23 00:28:35.717 lat (usec): 
min=8676, max=51785, avg=27735.47, stdev=2009.63 00:28:35.717 clat percentiles (usec): 00:28:35.717 | 1.00th=[22414], 5.00th=[26870], 10.00th=[27132], 20.00th=[27395], 00:28:35.717 | 30.00th=[27395], 40.00th=[27657], 50.00th=[27657], 60.00th=[27919], 00:28:35.717 | 70.00th=[27919], 80.00th=[28181], 90.00th=[28181], 95.00th=[28443], 00:28:35.717 | 99.00th=[29492], 99.50th=[35390], 99.90th=[51643], 99.95th=[51643], 00:28:35.717 | 99.99th=[51643] 00:28:35.717 bw ( KiB/s): min= 2048, max= 2304, per=4.16%, avg=2263.58, stdev=74.55, samples=19 00:28:35.717 iops : min= 512, max= 576, avg=565.89, stdev=18.64, samples=19 00:28:35.717 lat (msec) : 10=0.25%, 20=0.28%, 50=99.19%, 100=0.28% 00:28:35.717 cpu : usr=98.64%, sys=0.97%, ctx=19, majf=0, minf=46 00:28:35.717 IO depths : 1=5.8%, 2=11.7%, 4=24.4%, 8=51.4%, 16=6.7%, 32=0.0%, >=64=0.0% 00:28:35.717 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:35.717 complete : 0=0.0%, 4=94.0%, 8=0.2%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:35.717 issued rwts: total=5694,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:35.717 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:35.717 filename1: (groupid=0, jobs=1): err= 0: pid=180580: Mon Jul 15 22:44:57 2024 00:28:35.717 read: IOPS=569, BW=2278KiB/s (2332kB/s)(22.2MiB/10003msec) 00:28:35.717 slat (nsec): min=7242, max=97641, avg=52167.84, stdev=20911.72 00:28:35.717 clat (usec): min=8162, max=54737, avg=27608.48, stdev=2011.60 00:28:35.717 lat (usec): min=8185, max=54751, avg=27660.65, stdev=2012.51 00:28:35.717 clat percentiles (usec): 00:28:35.717 | 1.00th=[26346], 5.00th=[26870], 10.00th=[27132], 20.00th=[27395], 00:28:35.717 | 30.00th=[27395], 40.00th=[27657], 50.00th=[27657], 60.00th=[27657], 00:28:35.717 | 70.00th=[27919], 80.00th=[27919], 90.00th=[28181], 95.00th=[28443], 00:28:35.717 | 99.00th=[28705], 99.50th=[28967], 99.90th=[54789], 99.95th=[54789], 00:28:35.717 | 99.99th=[54789] 00:28:35.717 bw ( KiB/s): min= 2052, max= 2304, per=4.16%, avg=2263.79, stdev=73.91, samples=19 00:28:35.717 iops : min= 513, max= 576, avg=565.95, stdev=18.48, samples=19 00:28:35.717 lat (msec) : 10=0.28%, 20=0.28%, 50=99.16%, 100=0.28% 00:28:35.717 cpu : usr=98.86%, sys=0.75%, ctx=13, majf=0, minf=33 00:28:35.717 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:28:35.717 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:35.717 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:35.717 issued rwts: total=5696,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:35.717 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:35.717 filename1: (groupid=0, jobs=1): err= 0: pid=180581: Mon Jul 15 22:44:57 2024 00:28:35.717 read: IOPS=567, BW=2271KiB/s (2326kB/s)(22.2MiB/10003msec) 00:28:35.717 slat (nsec): min=6989, max=68696, avg=18315.20, stdev=5488.73 00:28:35.717 clat (usec): min=20230, max=56468, avg=28010.85, stdev=1624.85 00:28:35.717 lat (usec): min=20266, max=56488, avg=28029.16, stdev=1624.71 00:28:35.717 clat percentiles (usec): 00:28:35.717 | 1.00th=[26608], 5.00th=[27132], 10.00th=[27395], 20.00th=[27657], 00:28:35.717 | 30.00th=[27919], 40.00th=[27919], 50.00th=[27919], 60.00th=[28181], 00:28:35.717 | 70.00th=[28181], 80.00th=[28181], 90.00th=[28443], 95.00th=[28705], 00:28:35.717 | 99.00th=[29230], 99.50th=[34341], 99.90th=[56361], 99.95th=[56361], 00:28:35.717 | 99.99th=[56361] 00:28:35.717 bw ( KiB/s): min= 2048, max= 2320, per=4.16%, avg=2263.58, stdev=74.74, samples=19 00:28:35.717 iops : 
min= 512, max= 580, avg=565.89, stdev=18.68, samples=19 00:28:35.717 lat (msec) : 50=99.72%, 100=0.28% 00:28:35.717 cpu : usr=99.02%, sys=0.59%, ctx=9, majf=0, minf=46 00:28:35.717 IO depths : 1=6.0%, 2=12.1%, 4=24.8%, 8=50.6%, 16=6.5%, 32=0.0%, >=64=0.0% 00:28:35.717 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:35.717 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:35.717 issued rwts: total=5680,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:35.717 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:35.717 filename1: (groupid=0, jobs=1): err= 0: pid=180582: Mon Jul 15 22:44:57 2024 00:28:35.717 read: IOPS=568, BW=2274KiB/s (2328kB/s)(22.2MiB/10003msec) 00:28:35.717 slat (nsec): min=6825, max=41194, avg=15912.25, stdev=6327.42 00:28:35.717 clat (usec): min=6543, max=61824, avg=28082.50, stdev=2409.28 00:28:35.717 lat (usec): min=6552, max=61837, avg=28098.41, stdev=2409.31 00:28:35.717 clat percentiles (usec): 00:28:35.717 | 1.00th=[27132], 5.00th=[27395], 10.00th=[27657], 20.00th=[27919], 00:28:35.717 | 30.00th=[27919], 40.00th=[28181], 50.00th=[28181], 60.00th=[28181], 00:28:35.717 | 70.00th=[28181], 80.00th=[28181], 90.00th=[28443], 95.00th=[28705], 00:28:35.717 | 99.00th=[28967], 99.50th=[42206], 99.90th=[61604], 99.95th=[61604], 00:28:35.717 | 99.99th=[61604] 00:28:35.717 bw ( KiB/s): min= 2052, max= 2512, per=4.18%, avg=2272.20, stdev=80.51, samples=20 00:28:35.717 iops : min= 513, max= 628, avg=568.05, stdev=20.13, samples=20 00:28:35.717 lat (msec) : 10=0.18%, 20=0.49%, 50=99.05%, 100=0.28% 00:28:35.717 cpu : usr=98.88%, sys=0.73%, ctx=10, majf=0, minf=56 00:28:35.717 IO depths : 1=0.5%, 2=1.0%, 4=2.1%, 8=78.6%, 16=17.7%, 32=0.0%, >=64=0.0% 00:28:35.717 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:35.717 complete : 0=0.0%, 4=89.8%, 8=9.7%, 16=0.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:35.717 issued rwts: total=5686,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:35.717 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:35.717 filename1: (groupid=0, jobs=1): err= 0: pid=180583: Mon Jul 15 22:44:57 2024 00:28:35.717 read: IOPS=584, BW=2340KiB/s (2396kB/s)(22.9MiB/10011msec) 00:28:35.717 slat (nsec): min=6828, max=97798, avg=31847.91, stdev=24332.08 00:28:35.717 clat (usec): min=11134, max=66654, avg=27106.64, stdev=3151.91 00:28:35.717 lat (usec): min=11142, max=66677, avg=27138.48, stdev=3155.29 00:28:35.717 clat percentiles (usec): 00:28:35.717 | 1.00th=[16581], 5.00th=[20317], 10.00th=[24511], 20.00th=[27132], 00:28:35.717 | 30.00th=[27395], 40.00th=[27657], 50.00th=[27919], 60.00th=[27919], 00:28:35.717 | 70.00th=[28181], 80.00th=[28181], 90.00th=[28181], 95.00th=[28443], 00:28:35.717 | 99.00th=[33817], 99.50th=[35390], 99.90th=[47973], 99.95th=[47973], 00:28:35.718 | 99.99th=[66847] 00:28:35.718 bw ( KiB/s): min= 2176, max= 2640, per=4.30%, avg=2336.00, stdev=108.27, samples=20 00:28:35.718 iops : min= 544, max= 660, avg=584.00, stdev=27.07, samples=20 00:28:35.718 lat (msec) : 20=4.92%, 50=95.05%, 100=0.03% 00:28:35.718 cpu : usr=98.66%, sys=0.95%, ctx=14, majf=0, minf=38 00:28:35.718 IO depths : 1=3.6%, 2=7.2%, 4=15.6%, 8=63.0%, 16=10.6%, 32=0.0%, >=64=0.0% 00:28:35.718 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:35.718 complete : 0=0.0%, 4=92.0%, 8=3.9%, 16=4.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:35.718 issued rwts: total=5856,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:35.718 latency : target=0, window=0, percentile=100.00%, depth=16 
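A quick way to sanity-check the job reports above: the headline IOPS, bandwidth, and issued-I/O figures should agree with one another once the 4 KiB block size and the roughly 10 s runtime are factored in. A minimal cross-check, assuming nothing beyond the numbers copied from the filename0 (pid=180568) report:

# BW=2269KiB/s at bs=4KiB, with 5680 I/Os issued over the 10015 ms run
bw_kib=2269; bs_kib=4; ios=5680; runtime_s=10.015
awk -v bw="$bw_kib" -v bs="$bs_kib" -v n="$ios" -v t="$runtime_s" 'BEGIN {
    printf "IOPS from bandwidth: %.0f\n", bw / bs   # 2269 / 4      ~= 567
    printf "IOPS from I/O count: %.0f\n", n / t     # 5680 / 10.015 ~= 567
}'
# Both agree with the reported "read: IOPS=567, BW=2269KiB/s".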
00:28:35.718 filename2: (groupid=0, jobs=1): err= 0: pid=180584: Mon Jul 15 22:44:57 2024 00:28:35.718 read: IOPS=569, BW=2277KiB/s (2331kB/s)(22.2MiB/10007msec) 00:28:35.718 slat (nsec): min=6085, max=98424, avg=51974.99, stdev=20553.34 00:28:35.718 clat (usec): min=8274, max=58950, avg=27630.22, stdev=2202.09 00:28:35.718 lat (usec): min=8289, max=58968, avg=27682.20, stdev=2202.43 00:28:35.718 clat percentiles (usec): 00:28:35.718 | 1.00th=[26084], 5.00th=[26870], 10.00th=[27132], 20.00th=[27395], 00:28:35.718 | 30.00th=[27395], 40.00th=[27657], 50.00th=[27657], 60.00th=[27657], 00:28:35.718 | 70.00th=[27919], 80.00th=[27919], 90.00th=[28181], 95.00th=[28443], 00:28:35.718 | 99.00th=[28967], 99.50th=[29230], 99.90th=[58983], 99.95th=[58983], 00:28:35.718 | 99.99th=[58983] 00:28:35.718 bw ( KiB/s): min= 2052, max= 2304, per=4.16%, avg=2263.79, stdev=72.55, samples=19 00:28:35.718 iops : min= 513, max= 576, avg=565.95, stdev=18.14, samples=19 00:28:35.718 lat (msec) : 10=0.28%, 20=0.28%, 50=99.16%, 100=0.28% 00:28:35.718 cpu : usr=98.64%, sys=0.98%, ctx=7, majf=0, minf=47 00:28:35.718 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:28:35.718 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:35.718 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:35.718 issued rwts: total=5696,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:35.718 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:35.718 filename2: (groupid=0, jobs=1): err= 0: pid=180585: Mon Jul 15 22:44:57 2024 00:28:35.718 read: IOPS=567, BW=2269KiB/s (2323kB/s)(22.2MiB/10014msec) 00:28:35.718 slat (nsec): min=6760, max=69743, avg=18071.50, stdev=7593.58 00:28:35.718 clat (usec): min=14917, max=76535, avg=28055.35, stdev=2041.02 00:28:35.718 lat (usec): min=14929, max=76572, avg=28073.42, stdev=2041.12 00:28:35.718 clat percentiles (usec): 00:28:35.718 | 1.00th=[26870], 5.00th=[27132], 10.00th=[27395], 20.00th=[27657], 00:28:35.718 | 30.00th=[27919], 40.00th=[27919], 50.00th=[27919], 60.00th=[28181], 00:28:35.718 | 70.00th=[28181], 80.00th=[28181], 90.00th=[28443], 95.00th=[28705], 00:28:35.718 | 99.00th=[28967], 99.50th=[28967], 99.90th=[63177], 99.95th=[63177], 00:28:35.718 | 99.99th=[76022] 00:28:35.718 bw ( KiB/s): min= 2048, max= 2304, per=4.17%, avg=2265.60, stdev=73.12, samples=20 00:28:35.718 iops : min= 512, max= 576, avg=566.40, stdev=18.28, samples=20 00:28:35.718 lat (msec) : 20=0.04%, 50=99.68%, 100=0.28% 00:28:35.718 cpu : usr=98.82%, sys=0.80%, ctx=8, majf=0, minf=46 00:28:35.718 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:28:35.718 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:35.718 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:35.718 issued rwts: total=5680,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:35.718 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:35.718 filename2: (groupid=0, jobs=1): err= 0: pid=180586: Mon Jul 15 22:44:57 2024 00:28:35.718 read: IOPS=568, BW=2274KiB/s (2328kB/s)(22.2MiB/10010msec) 00:28:35.718 slat (nsec): min=5742, max=41635, avg=18263.03, stdev=5562.82 00:28:35.718 clat (usec): min=10334, max=56454, avg=27983.58, stdev=2162.09 00:28:35.718 lat (usec): min=10342, max=56470, avg=28001.84, stdev=2162.09 00:28:35.718 clat percentiles (usec): 00:28:35.718 | 1.00th=[21627], 5.00th=[27132], 10.00th=[27395], 20.00th=[27657], 00:28:35.718 | 30.00th=[27919], 40.00th=[27919], 50.00th=[27919], 
60.00th=[28181], 00:28:35.718 | 70.00th=[28181], 80.00th=[28181], 90.00th=[28443], 95.00th=[28705], 00:28:35.718 | 99.00th=[33817], 99.50th=[35390], 99.90th=[56361], 99.95th=[56361], 00:28:35.718 | 99.99th=[56361] 00:28:35.718 bw ( KiB/s): min= 2048, max= 2304, per=4.16%, avg=2261.05, stdev=73.91, samples=19 00:28:35.718 iops : min= 512, max= 576, avg=565.26, stdev=18.48, samples=19 00:28:35.718 lat (msec) : 20=0.56%, 50=99.16%, 100=0.28% 00:28:35.718 cpu : usr=98.90%, sys=0.71%, ctx=13, majf=0, minf=46 00:28:35.718 IO depths : 1=5.7%, 2=11.7%, 4=24.3%, 8=51.4%, 16=6.9%, 32=0.0%, >=64=0.0% 00:28:35.718 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:35.718 complete : 0=0.0%, 4=94.0%, 8=0.3%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:35.718 issued rwts: total=5690,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:35.718 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:35.718 filename2: (groupid=0, jobs=1): err= 0: pid=180587: Mon Jul 15 22:44:57 2024 00:28:35.718 read: IOPS=568, BW=2274KiB/s (2329kB/s)(22.2MiB/10003msec) 00:28:35.718 slat (nsec): min=7020, max=96774, avg=51114.55, stdev=21897.01 00:28:35.718 clat (usec): min=6586, max=54750, avg=27641.59, stdev=1824.99 00:28:35.718 lat (usec): min=6601, max=54763, avg=27692.70, stdev=1826.08 00:28:35.718 clat percentiles (usec): 00:28:35.718 | 1.00th=[26346], 5.00th=[26870], 10.00th=[27132], 20.00th=[27395], 00:28:35.718 | 30.00th=[27395], 40.00th=[27657], 50.00th=[27657], 60.00th=[27657], 00:28:35.718 | 70.00th=[27919], 80.00th=[27919], 90.00th=[28181], 95.00th=[28443], 00:28:35.718 | 99.00th=[28705], 99.50th=[28967], 99.90th=[54789], 99.95th=[54789], 00:28:35.718 | 99.99th=[54789] 00:28:35.718 bw ( KiB/s): min= 2052, max= 2432, per=4.18%, avg=2272.20, stdev=81.18, samples=20 00:28:35.718 iops : min= 513, max= 608, avg=568.05, stdev=20.29, samples=20 00:28:35.718 lat (msec) : 10=0.12%, 20=0.28%, 50=99.31%, 100=0.28% 00:28:35.718 cpu : usr=99.04%, sys=0.57%, ctx=7, majf=0, minf=40 00:28:35.718 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:28:35.718 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:35.718 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:35.718 issued rwts: total=5687,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:35.718 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:35.718 filename2: (groupid=0, jobs=1): err= 0: pid=180588: Mon Jul 15 22:44:57 2024 00:28:35.718 read: IOPS=567, BW=2269KiB/s (2324kB/s)(22.2MiB/10012msec) 00:28:35.718 slat (nsec): min=6989, max=98043, avg=45333.26, stdev=22539.00 00:28:35.718 clat (usec): min=21942, max=56581, avg=27848.65, stdev=1640.66 00:28:35.718 lat (usec): min=21950, max=56599, avg=27893.98, stdev=1638.01 00:28:35.718 clat percentiles (usec): 00:28:35.718 | 1.00th=[26608], 5.00th=[26870], 10.00th=[27132], 20.00th=[27395], 00:28:35.718 | 30.00th=[27657], 40.00th=[27657], 50.00th=[27919], 60.00th=[27919], 00:28:35.718 | 70.00th=[27919], 80.00th=[28181], 90.00th=[28181], 95.00th=[28443], 00:28:35.718 | 99.00th=[28967], 99.50th=[29492], 99.90th=[56361], 99.95th=[56361], 00:28:35.718 | 99.99th=[56361] 00:28:35.718 bw ( KiB/s): min= 2048, max= 2304, per=4.17%, avg=2265.60, stdev=73.12, samples=20 00:28:35.718 iops : min= 512, max= 576, avg=566.40, stdev=18.28, samples=20 00:28:35.718 lat (msec) : 50=99.72%, 100=0.28% 00:28:35.718 cpu : usr=98.75%, sys=0.85%, ctx=11, majf=0, minf=47 00:28:35.718 IO depths : 1=6.0%, 2=11.9%, 4=24.3%, 8=51.3%, 16=6.5%, 
32=0.0%, >=64=0.0% 00:28:35.718 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:35.718 complete : 0=0.0%, 4=93.9%, 8=0.2%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:35.718 issued rwts: total=5680,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:35.718 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:35.718 filename2: (groupid=0, jobs=1): err= 0: pid=180589: Mon Jul 15 22:44:57 2024 00:28:35.718 read: IOPS=500, BW=2000KiB/s (2048kB/s)(19.5MiB/10003msec) 00:28:35.718 slat (nsec): min=6845, max=94801, avg=21974.82, stdev=17926.49 00:28:35.718 clat (usec): min=7529, max=68572, avg=31870.97, stdev=5751.56 00:28:35.718 lat (usec): min=7538, max=68587, avg=31892.95, stdev=5753.21 00:28:35.718 clat percentiles (usec): 00:28:35.718 | 1.00th=[20317], 5.00th=[24773], 10.00th=[27657], 20.00th=[28181], 00:28:35.718 | 30.00th=[28181], 40.00th=[28181], 50.00th=[28705], 60.00th=[33162], 00:28:35.718 | 70.00th=[34866], 80.00th=[35390], 90.00th=[41681], 95.00th=[42206], 00:28:35.718 | 99.00th=[43254], 99.50th=[43254], 99.90th=[54789], 99.95th=[54789], 00:28:35.718 | 99.99th=[68682] 00:28:35.718 bw ( KiB/s): min= 1664, max= 2320, per=3.69%, avg=2005.21, stdev=254.79, samples=19 00:28:35.718 iops : min= 416, max= 580, avg=501.26, stdev=63.74, samples=19 00:28:35.718 lat (msec) : 10=0.08%, 20=0.68%, 50=98.92%, 100=0.32% 00:28:35.718 cpu : usr=98.44%, sys=1.16%, ctx=19, majf=0, minf=75 00:28:35.718 IO depths : 1=0.1%, 2=0.7%, 4=11.8%, 8=73.0%, 16=14.5%, 32=0.0%, >=64=0.0% 00:28:35.718 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:35.718 complete : 0=0.0%, 4=92.0%, 8=4.5%, 16=3.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:35.718 issued rwts: total=5002,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:35.718 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:35.718 filename2: (groupid=0, jobs=1): err= 0: pid=180590: Mon Jul 15 22:44:57 2024 00:28:35.718 read: IOPS=567, BW=2269KiB/s (2323kB/s)(22.2MiB/10014msec) 00:28:35.718 slat (nsec): min=6813, max=56388, avg=16419.35, stdev=5705.20 00:28:35.718 clat (usec): min=14317, max=76545, avg=28067.91, stdev=2259.88 00:28:35.718 lat (usec): min=14332, max=76586, avg=28084.33, stdev=2260.40 00:28:35.718 clat percentiles (usec): 00:28:35.718 | 1.00th=[27132], 5.00th=[27132], 10.00th=[27395], 20.00th=[27919], 00:28:35.718 | 30.00th=[27919], 40.00th=[27919], 50.00th=[27919], 60.00th=[28181], 00:28:35.718 | 70.00th=[28181], 80.00th=[28181], 90.00th=[28443], 95.00th=[28705], 00:28:35.719 | 99.00th=[28967], 99.50th=[28967], 99.90th=[63177], 99.95th=[76022], 00:28:35.719 | 99.99th=[76022] 00:28:35.719 bw ( KiB/s): min= 2048, max= 2304, per=4.17%, avg=2265.60, stdev=73.12, samples=20 00:28:35.719 iops : min= 512, max= 576, avg=566.40, stdev=18.28, samples=20 00:28:35.719 lat (msec) : 20=0.26%, 50=99.45%, 100=0.28% 00:28:35.719 cpu : usr=98.88%, sys=0.73%, ctx=8, majf=0, minf=40 00:28:35.719 IO depths : 1=6.2%, 2=12.4%, 4=24.9%, 8=50.2%, 16=6.3%, 32=0.0%, >=64=0.0% 00:28:35.719 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:35.719 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:35.719 issued rwts: total=5680,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:35.719 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:35.719 filename2: (groupid=0, jobs=1): err= 0: pid=180591: Mon Jul 15 22:44:57 2024 00:28:35.719 read: IOPS=569, BW=2277KiB/s (2332kB/s)(22.2MiB/10004msec) 00:28:35.719 slat (nsec): min=4073, max=97706, avg=52357.59, stdev=20338.64 
00:28:35.719 clat (usec): min=8375, max=62225, avg=27622.43, stdev=2101.38 00:28:35.719 lat (usec): min=8388, max=62239, avg=27674.78, stdev=2101.81 00:28:35.719 clat percentiles (usec): 00:28:35.719 | 1.00th=[22938], 5.00th=[26870], 10.00th=[27132], 20.00th=[27395], 00:28:35.719 | 30.00th=[27395], 40.00th=[27657], 50.00th=[27657], 60.00th=[27657], 00:28:35.719 | 70.00th=[27919], 80.00th=[27919], 90.00th=[28181], 95.00th=[28443], 00:28:35.719 | 99.00th=[28705], 99.50th=[29230], 99.90th=[55313], 99.95th=[55313], 00:28:35.719 | 99.99th=[62129] 00:28:35.719 bw ( KiB/s): min= 2048, max= 2304, per=4.16%, avg=2263.58, stdev=74.55, samples=19 00:28:35.719 iops : min= 512, max= 576, avg=565.89, stdev=18.64, samples=19 00:28:35.719 lat (msec) : 10=0.28%, 20=0.28%, 50=99.16%, 100=0.28% 00:28:35.719 cpu : usr=98.86%, sys=0.72%, ctx=12, majf=0, minf=41 00:28:35.719 IO depths : 1=6.1%, 2=12.3%, 4=24.9%, 8=50.2%, 16=6.4%, 32=0.0%, >=64=0.0% 00:28:35.719 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:35.719 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:35.719 issued rwts: total=5696,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:35.719 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:35.719 00:28:35.719 Run status group 0 (all jobs): 00:28:35.719 READ: bw=53.1MiB/s (55.7MB/s), 2000KiB/s-2340KiB/s (2048kB/s-2396kB/s), io=532MiB (557MB), run=10003-10015msec 00:28:35.719 22:44:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:28:35.719 22:44:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:28:35.719 22:44:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:28:35.719 22:44:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:28:35.719 22:44:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:28:35.719 22:44:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:28:35.719 22:44:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:35.719 22:44:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:35.719 22:44:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:35.719 22:44:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:28:35.719 22:44:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:35.719 22:44:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:35.719 22:44:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:35.719 22:44:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:28:35.719 22:44:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:28:35.719 22:44:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:28:35.719 22:44:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:35.719 22:44:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:35.719 22:44:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:35.719 22:44:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:35.719 22:44:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd 
bdev_null_delete bdev_null1 00:28:35.719 22:44:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:35.719 22:44:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:35.719 22:44:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:35.719 22:44:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:28:35.719 22:44:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:28:35.719 22:44:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:28:35.719 22:44:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:28:35.719 22:44:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:35.719 22:44:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:35.719 22:44:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:35.719 22:44:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:28:35.719 22:44:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:35.719 22:44:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:35.719 22:44:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:35.719 22:44:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:28:35.719 22:44:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:28:35.719 22:44:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:28:35.719 22:44:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:28:35.719 22:44:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:28:35.719 22:44:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:28:35.719 22:44:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:28:35.719 22:44:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:28:35.719 22:44:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:28:35.719 22:44:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:28:35.719 22:44:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:28:35.719 22:44:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:28:35.719 22:44:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:35.719 22:44:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:35.719 bdev_null0 00:28:35.719 22:44:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:35.719 22:44:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:28:35.719 22:44:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:35.719 22:44:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:35.719 22:44:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:35.719 22:44:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:28:35.719 
22:44:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:35.719 22:44:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:35.719 22:44:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:35.719 22:44:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:28:35.719 22:44:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:35.719 22:44:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:35.719 [2024-07-15 22:44:58.142510] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:35.719 22:44:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:35.719 22:44:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:28:35.719 22:44:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:28:35.719 22:44:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:28:35.719 22:44:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:28:35.719 22:44:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:35.719 22:44:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:35.719 bdev_null1 00:28:35.719 22:44:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:35.719 22:44:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:28:35.719 22:44:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:35.719 22:44:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:35.719 22:44:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:35.719 22:44:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:28:35.719 22:44:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:35.719 22:44:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:35.719 22:44:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:35.719 22:44:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:35.719 22:44:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:35.719 22:44:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:35.719 22:44:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:35.719 22:44:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:28:35.719 22:44:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:35.719 22:44:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:35.719 22:44:58 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:28:35.719 22:44:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:28:35.719 22:44:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:28:35.719 22:44:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:28:35.719 22:44:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:35.719 22:44:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:28:35.719 22:44:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:28:35.719 22:44:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:28:35.719 22:44:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:28:35.719 22:44:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:28:35.719 22:44:58 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:28:35.719 22:44:58 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:28:35.719 22:44:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:28:35.719 22:44:58 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:35.719 22:44:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:28:35.719 22:44:58 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:35.720 { 00:28:35.720 "params": { 00:28:35.720 "name": "Nvme$subsystem", 00:28:35.720 "trtype": "$TEST_TRANSPORT", 00:28:35.720 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:35.720 "adrfam": "ipv4", 00:28:35.720 "trsvcid": "$NVMF_PORT", 00:28:35.720 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:35.720 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:35.720 "hdgst": ${hdgst:-false}, 00:28:35.720 "ddgst": ${ddgst:-false} 00:28:35.720 }, 00:28:35.720 "method": "bdev_nvme_attach_controller" 00:28:35.720 } 00:28:35.720 EOF 00:28:35.720 )") 00:28:35.720 22:44:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:35.720 22:44:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:28:35.720 22:44:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:28:35.720 22:44:58 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:28:35.720 22:44:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:28:35.720 22:44:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:28:35.720 22:44:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:28:35.720 22:44:58 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:35.720 22:44:58 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:35.720 { 00:28:35.720 "params": { 00:28:35.720 "name": "Nvme$subsystem", 00:28:35.720 "trtype": "$TEST_TRANSPORT", 00:28:35.720 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:35.720 "adrfam": "ipv4", 00:28:35.720 "trsvcid": "$NVMF_PORT", 00:28:35.720 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:35.720 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:35.720 "hdgst": ${hdgst:-false}, 
00:28:35.720 "ddgst": ${ddgst:-false} 00:28:35.720 }, 00:28:35.720 "method": "bdev_nvme_attach_controller" 00:28:35.720 } 00:28:35.720 EOF 00:28:35.720 )") 00:28:35.720 22:44:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:28:35.720 22:44:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:28:35.720 22:44:58 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:28:35.720 22:44:58 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:28:35.720 22:44:58 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:28:35.720 22:44:58 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:28:35.720 "params": { 00:28:35.720 "name": "Nvme0", 00:28:35.720 "trtype": "tcp", 00:28:35.720 "traddr": "10.0.0.2", 00:28:35.720 "adrfam": "ipv4", 00:28:35.720 "trsvcid": "4420", 00:28:35.720 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:35.720 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:35.720 "hdgst": false, 00:28:35.720 "ddgst": false 00:28:35.720 }, 00:28:35.720 "method": "bdev_nvme_attach_controller" 00:28:35.720 },{ 00:28:35.720 "params": { 00:28:35.720 "name": "Nvme1", 00:28:35.720 "trtype": "tcp", 00:28:35.720 "traddr": "10.0.0.2", 00:28:35.720 "adrfam": "ipv4", 00:28:35.720 "trsvcid": "4420", 00:28:35.720 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:35.720 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:35.720 "hdgst": false, 00:28:35.720 "ddgst": false 00:28:35.720 }, 00:28:35.720 "method": "bdev_nvme_attach_controller" 00:28:35.720 }' 00:28:35.720 22:44:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:28:35.720 22:44:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:28:35.720 22:44:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:28:35.720 22:44:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:35.720 22:44:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:28:35.720 22:44:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:28:35.720 22:44:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:28:35.720 22:44:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:28:35.720 22:44:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:28:35.720 22:44:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:35.720 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:28:35.720 ... 00:28:35.720 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:28:35.720 ... 
00:28:35.720 fio-3.35 00:28:35.720 Starting 4 threads 00:28:35.720 EAL: No free 2048 kB hugepages reported on node 1 00:28:41.012 00:28:41.012 filename0: (groupid=0, jobs=1): err= 0: pid=182540: Mon Jul 15 22:45:04 2024 00:28:41.012 read: IOPS=2568, BW=20.1MiB/s (21.0MB/s)(100MiB/5002msec) 00:28:41.012 slat (nsec): min=5993, max=58593, avg=13302.40, stdev=9201.78 00:28:41.012 clat (usec): min=1158, max=44271, avg=3075.97, stdev=1165.45 00:28:41.012 lat (usec): min=1170, max=44303, avg=3089.27, stdev=1165.15 00:28:41.012 clat percentiles (usec): 00:28:41.012 | 1.00th=[ 2040], 5.00th=[ 2376], 10.00th=[ 2540], 20.00th=[ 2704], 00:28:41.012 | 30.00th=[ 2802], 40.00th=[ 2868], 50.00th=[ 2966], 60.00th=[ 3032], 00:28:41.012 | 70.00th=[ 3097], 80.00th=[ 3294], 90.00th=[ 3818], 95.00th=[ 4293], 00:28:41.012 | 99.00th=[ 4817], 99.50th=[ 5080], 99.90th=[ 5473], 99.95th=[44303], 00:28:41.012 | 99.99th=[44303] 00:28:41.012 bw ( KiB/s): min=18592, max=22448, per=24.62%, avg=20574.22, stdev=1022.64, samples=9 00:28:41.012 iops : min= 2324, max= 2806, avg=2571.78, stdev=127.83, samples=9 00:28:41.012 lat (msec) : 2=0.83%, 4=90.57%, 10=8.54%, 50=0.06% 00:28:41.012 cpu : usr=96.80%, sys=2.84%, ctx=9, majf=0, minf=0 00:28:41.012 IO depths : 1=0.3%, 2=2.9%, 4=68.5%, 8=28.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:41.012 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:41.012 complete : 0=0.0%, 4=93.3%, 8=6.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:41.012 issued rwts: total=12848,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:41.012 latency : target=0, window=0, percentile=100.00%, depth=8 00:28:41.012 filename0: (groupid=0, jobs=1): err= 0: pid=182541: Mon Jul 15 22:45:04 2024 00:28:41.012 read: IOPS=2629, BW=20.5MiB/s (21.5MB/s)(103MiB/5001msec) 00:28:41.012 slat (nsec): min=5996, max=56324, avg=10774.60, stdev=5917.99 00:28:41.012 clat (usec): min=966, max=5615, avg=3011.50, stdev=551.71 00:28:41.012 lat (usec): min=973, max=5628, avg=3022.28, stdev=551.40 00:28:41.012 clat percentiles (usec): 00:28:41.012 | 1.00th=[ 1450], 5.00th=[ 2278], 10.00th=[ 2474], 20.00th=[ 2671], 00:28:41.012 | 30.00th=[ 2769], 40.00th=[ 2868], 50.00th=[ 2933], 60.00th=[ 3032], 00:28:41.012 | 70.00th=[ 3130], 80.00th=[ 3326], 90.00th=[ 3720], 95.00th=[ 4146], 00:28:41.012 | 99.00th=[ 4621], 99.50th=[ 4817], 99.90th=[ 5342], 99.95th=[ 5407], 00:28:41.012 | 99.99th=[ 5538] 00:28:41.012 bw ( KiB/s): min=20224, max=22128, per=25.16%, avg=21019.78, stdev=558.51, samples=9 00:28:41.012 iops : min= 2528, max= 2766, avg=2627.44, stdev=69.81, samples=9 00:28:41.012 lat (usec) : 1000=0.02% 00:28:41.012 lat (msec) : 2=2.14%, 4=91.55%, 10=6.30% 00:28:41.012 cpu : usr=97.64%, sys=1.98%, ctx=9, majf=0, minf=2 00:28:41.012 IO depths : 1=0.1%, 2=2.9%, 4=68.6%, 8=28.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:41.012 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:41.012 complete : 0=0.0%, 4=93.3%, 8=6.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:41.012 issued rwts: total=13151,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:41.012 latency : target=0, window=0, percentile=100.00%, depth=8 00:28:41.012 filename1: (groupid=0, jobs=1): err= 0: pid=182542: Mon Jul 15 22:45:04 2024 00:28:41.012 read: IOPS=2549, BW=19.9MiB/s (20.9MB/s)(99.6MiB/5001msec) 00:28:41.012 slat (nsec): min=6101, max=58650, avg=10745.49, stdev=5804.27 00:28:41.012 clat (usec): min=962, max=45550, avg=3106.57, stdev=1184.56 00:28:41.012 lat (usec): min=974, max=45572, avg=3117.32, stdev=1184.34 00:28:41.012 clat percentiles (usec): 
00:28:41.012 | 1.00th=[ 2089], 5.00th=[ 2442], 10.00th=[ 2573], 20.00th=[ 2737], 00:28:41.012 | 30.00th=[ 2835], 40.00th=[ 2900], 50.00th=[ 2966], 60.00th=[ 3064], 00:28:41.012 | 70.00th=[ 3163], 80.00th=[ 3359], 90.00th=[ 3785], 95.00th=[ 4228], 00:28:41.012 | 99.00th=[ 4817], 99.50th=[ 4948], 99.90th=[ 5342], 99.95th=[45351], 00:28:41.012 | 99.99th=[45351] 00:28:41.012 bw ( KiB/s): min=18032, max=21392, per=24.41%, avg=20400.00, stdev=1036.08, samples=9 00:28:41.012 iops : min= 2254, max= 2674, avg=2550.00, stdev=129.51, samples=9 00:28:41.012 lat (usec) : 1000=0.01% 00:28:41.012 lat (msec) : 2=0.50%, 4=91.83%, 10=7.60%, 50=0.06% 00:28:41.012 cpu : usr=96.62%, sys=2.78%, ctx=146, majf=0, minf=9 00:28:41.012 IO depths : 1=0.3%, 2=2.9%, 4=69.3%, 8=27.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:41.012 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:41.012 complete : 0=0.0%, 4=92.6%, 8=7.4%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:41.012 issued rwts: total=12750,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:41.012 latency : target=0, window=0, percentile=100.00%, depth=8 00:28:41.012 filename1: (groupid=0, jobs=1): err= 0: pid=182543: Mon Jul 15 22:45:04 2024 00:28:41.012 read: IOPS=2697, BW=21.1MiB/s (22.1MB/s)(105MiB/5002msec) 00:28:41.012 slat (nsec): min=6186, max=58523, avg=11786.69, stdev=6348.46 00:28:41.012 clat (usec): min=1044, max=5507, avg=2930.04, stdev=467.66 00:28:41.012 lat (usec): min=1054, max=5515, avg=2941.83, stdev=467.44 00:28:41.012 clat percentiles (usec): 00:28:41.012 | 1.00th=[ 1500], 5.00th=[ 2245], 10.00th=[ 2474], 20.00th=[ 2638], 00:28:41.012 | 30.00th=[ 2769], 40.00th=[ 2835], 50.00th=[ 2900], 60.00th=[ 2966], 00:28:41.012 | 70.00th=[ 3064], 80.00th=[ 3163], 90.00th=[ 3458], 95.00th=[ 3851], 00:28:41.012 | 99.00th=[ 4424], 99.50th=[ 4621], 99.90th=[ 4883], 99.95th=[ 5014], 00:28:41.012 | 99.99th=[ 5473] 00:28:41.012 bw ( KiB/s): min=21008, max=22288, per=25.84%, avg=21591.11, stdev=452.70, samples=9 00:28:41.012 iops : min= 2626, max= 2786, avg=2698.89, stdev=56.59, samples=9 00:28:41.012 lat (msec) : 2=2.01%, 4=94.43%, 10=3.56% 00:28:41.012 cpu : usr=94.62%, sys=3.34%, ctx=207, majf=0, minf=9 00:28:41.012 IO depths : 1=0.1%, 2=1.7%, 4=70.7%, 8=27.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:41.012 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:41.012 complete : 0=0.0%, 4=92.3%, 8=7.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:41.012 issued rwts: total=13495,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:41.012 latency : target=0, window=0, percentile=100.00%, depth=8 00:28:41.012 00:28:41.012 Run status group 0 (all jobs): 00:28:41.012 READ: bw=81.6MiB/s (85.6MB/s), 19.9MiB/s-21.1MiB/s (20.9MB/s-22.1MB/s), io=408MiB (428MB), run=5001-5002msec 00:28:41.012 22:45:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:28:41.012 22:45:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:28:41.013 22:45:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:28:41.013 22:45:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:28:41.013 22:45:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:28:41.013 22:45:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:28:41.013 22:45:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:41.013 22:45:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # 
set +x 00:28:41.013 22:45:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:41.013 22:45:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:28:41.013 22:45:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:41.013 22:45:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:41.013 22:45:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:41.013 22:45:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:28:41.013 22:45:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:28:41.013 22:45:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:28:41.013 22:45:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:41.013 22:45:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:41.013 22:45:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:41.013 22:45:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:41.013 22:45:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:28:41.013 22:45:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:41.013 22:45:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:41.013 22:45:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:41.013 00:28:41.013 real 0m24.057s 00:28:41.013 user 4m51.692s 00:28:41.013 sys 0m3.970s 00:28:41.013 22:45:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:41.013 22:45:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:41.013 ************************************ 00:28:41.013 END TEST fio_dif_rand_params 00:28:41.013 ************************************ 00:28:41.013 22:45:04 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:28:41.013 22:45:04 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:28:41.013 22:45:04 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:28:41.013 22:45:04 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:41.013 22:45:04 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:28:41.013 ************************************ 00:28:41.013 START TEST fio_dif_digest 00:28:41.013 ************************************ 00:28:41.013 22:45:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1123 -- # fio_dif_digest 00:28:41.013 22:45:04 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:28:41.013 22:45:04 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:28:41.013 22:45:04 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:28:41.013 22:45:04 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:28:41.013 22:45:04 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:28:41.013 22:45:04 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:28:41.013 22:45:04 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:28:41.013 22:45:04 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:28:41.013 22:45:04 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:28:41.013 
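Before the digest run below finishes setting up its DIF-type-3 target, note that the create_subsystem/destroy_subsystem helpers traced throughout this log reduce to a short RPC sequence. A minimal standalone sketch: the rpc.py path is an assumption about what rpc_cmd wraps, while the bdev geometry, NQNs, address, and port are the values used in the traces above.

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py  # assumed rpc_cmd target
sub=0
# 64 MiB null bdev with 512-byte blocks, 16-byte metadata, DIF type 3
"$rpc" bdev_null_create "bdev_null$sub" 64 512 --md-size 16 --dif-type 3
"$rpc" nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$sub" \
    --serial-number "53313233-$sub" --allow-any-host
"$rpc" nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$sub" "bdev_null$sub"
"$rpc" nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$sub" -t tcp -a 10.0.0.2 -s 4420
# Teardown runs the mirror image: nvmf_delete_subsystem, then bdev_null_delete.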
22:45:04 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:28:41.013 22:45:04 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:28:41.013 22:45:04 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:28:41.013 22:45:04 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:28:41.013 22:45:04 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:28:41.013 22:45:04 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:28:41.013 22:45:04 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:28:41.013 22:45:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:41.013 22:45:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:28:41.013 bdev_null0 00:28:41.013 22:45:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:41.013 22:45:04 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:28:41.013 22:45:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:41.013 22:45:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:28:41.013 22:45:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:41.013 22:45:04 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:28:41.013 22:45:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:41.013 22:45:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:28:41.013 22:45:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:41.013 22:45:04 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:28:41.013 22:45:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:41.013 22:45:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:28:41.013 [2024-07-15 22:45:04.435956] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:41.013 22:45:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:41.013 22:45:04 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:28:41.013 22:45:04 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:41.013 22:45:04 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:28:41.013 22:45:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:41.013 22:45:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:28:41.013 22:45:04 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:28:41.013 22:45:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:28:41.013 22:45:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local sanitizers 00:28:41.013 22:45:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # local 
plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:41.013 22:45:04 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # config=() 00:28:41.013 22:45:04 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:28:41.013 22:45:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # shift 00:28:41.013 22:45:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local asan_lib= 00:28:41.013 22:45:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:28:41.013 22:45:04 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # local subsystem config 00:28:41.013 22:45:04 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:28:41.013 22:45:04 nvmf_dif.fio_dif_digest -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:41.013 22:45:04 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:28:41.013 22:45:04 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:41.013 { 00:28:41.013 "params": { 00:28:41.013 "name": "Nvme$subsystem", 00:28:41.013 "trtype": "$TEST_TRANSPORT", 00:28:41.013 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:41.013 "adrfam": "ipv4", 00:28:41.013 "trsvcid": "$NVMF_PORT", 00:28:41.013 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:41.013 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:41.013 "hdgst": ${hdgst:-false}, 00:28:41.013 "ddgst": ${ddgst:-false} 00:28:41.013 }, 00:28:41.013 "method": "bdev_nvme_attach_controller" 00:28:41.013 } 00:28:41.013 EOF 00:28:41.013 )") 00:28:41.013 22:45:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:41.013 22:45:04 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # cat 00:28:41.013 22:45:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libasan 00:28:41.013 22:45:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:28:41.013 22:45:04 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:28:41.013 22:45:04 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:28:41.013 22:45:04 nvmf_dif.fio_dif_digest -- nvmf/common.sh@556 -- # jq . 
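gen_nvmf_target_json expands the heredoc above once per subsystem ID, filling in $TEST_TRANSPORT, $NVMF_FIRST_TARGET_IP, $NVMF_PORT and the hdgst/ddgst flags, then pipes the fragments through jq into the config that fio's spdk_bdev engine reads. A hand-written equivalent of the resolved output would look like the following; only the inner entry appears verbatim in the trace below, so the outer subsystems/bdev wrapper that --spdk_json_conf expects is an assumption here:

    # Sketch of the resolved config for this run; wrapper object assumed.
    cat > /tmp/nvme0.json <<'JSON'
    {
      "subsystems": [{
        "subsystem": "bdev",
        "config": [{
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0", "trtype": "tcp", "traddr": "10.0.0.2",
            "adrfam": "ipv4", "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": true, "ddgst": true
          }
        }]
      }]
    }
    JSON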
00:28:41.013 22:45:04 nvmf_dif.fio_dif_digest -- nvmf/common.sh@557 -- # IFS=, 00:28:41.013 22:45:04 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:28:41.013 "params": { 00:28:41.013 "name": "Nvme0", 00:28:41.013 "trtype": "tcp", 00:28:41.013 "traddr": "10.0.0.2", 00:28:41.013 "adrfam": "ipv4", 00:28:41.014 "trsvcid": "4420", 00:28:41.014 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:41.014 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:41.014 "hdgst": true, 00:28:41.014 "ddgst": true 00:28:41.014 }, 00:28:41.014 "method": "bdev_nvme_attach_controller" 00:28:41.014 }' 00:28:41.014 22:45:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:28:41.014 22:45:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:28:41.014 22:45:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:28:41.014 22:45:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:41.014 22:45:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:28:41.014 22:45:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:28:41.014 22:45:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:28:41.014 22:45:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:28:41.014 22:45:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:28:41.014 22:45:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:41.014 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:28:41.014 ... 
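With the JSON resolved, the harness LD_PRELOADs build/fio/spdk_bdev and runs stock fio with --ioengine=spdk_bdev and --spdk_json_conf; the job file itself arrives over another fd and is never shown in the trace. A hypothetical job file reconstructed from the filename0 banner and the dif.sh knobs above — the bdev name Nvme0n1 is an assumption, following SPDK's ControllerName+nNSID convention:

    # Reconstructed sketch, not the harness's actual job file.
    [global]
    ioengine=spdk_bdev
    spdk_json_conf=/tmp/nvme0.json
    thread=1            # required by the SPDK fio plugin
    rw=randread
    bs=128k
    iodepth=3
    runtime=10
    time_based=1

    [filename0]
    filename=Nvme0n1    # assumed bdev name for Nvme0, namespace 1
    numjobs=3

invoked as: LD_PRELOAD=./build/fio/spdk_bdev fio /tmp/digest.fio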
00:28:41.014 fio-3.35 00:28:41.014 Starting 3 threads 00:28:41.014 EAL: No free 2048 kB hugepages reported on node 1 00:28:53.226 00:28:53.226 filename0: (groupid=0, jobs=1): err= 0: pid=183897: Mon Jul 15 22:45:15 2024 00:28:53.226 read: IOPS=276, BW=34.5MiB/s (36.2MB/s)(347MiB/10044msec) 00:28:53.226 slat (nsec): min=6494, max=48757, avg=11870.25, stdev=2061.60 00:28:53.226 clat (usec): min=6389, max=54591, avg=10837.32, stdev=2434.44 00:28:53.226 lat (usec): min=6397, max=54623, avg=10849.19, stdev=2434.55 00:28:53.226 clat percentiles (usec): 00:28:53.226 | 1.00th=[ 7701], 5.00th=[ 9372], 10.00th=[ 9765], 20.00th=[10028], 00:28:53.226 | 30.00th=[10290], 40.00th=[10552], 50.00th=[10814], 60.00th=[10945], 00:28:53.226 | 70.00th=[11207], 80.00th=[11338], 90.00th=[11731], 95.00th=[12125], 00:28:53.226 | 99.00th=[12780], 99.50th=[13304], 99.90th=[54264], 99.95th=[54264], 00:28:53.226 | 99.99th=[54789] 00:28:53.226 bw ( KiB/s): min=32512, max=38912, per=34.17%, avg=35468.80, stdev=1260.93, samples=20 00:28:53.226 iops : min= 254, max= 304, avg=277.10, stdev= 9.85, samples=20 00:28:53.226 lat (msec) : 10=17.31%, 20=82.40%, 50=0.04%, 100=0.25% 00:28:53.226 cpu : usr=94.73%, sys=4.94%, ctx=21, majf=0, minf=133 00:28:53.226 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:53.226 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:53.226 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:53.227 issued rwts: total=2773,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:53.227 latency : target=0, window=0, percentile=100.00%, depth=3 00:28:53.227 filename0: (groupid=0, jobs=1): err= 0: pid=183898: Mon Jul 15 22:45:15 2024 00:28:53.227 read: IOPS=261, BW=32.6MiB/s (34.2MB/s)(328MiB/10044msec) 00:28:53.227 slat (nsec): min=6514, max=29840, avg=12023.85, stdev=2120.40 00:28:53.227 clat (usec): min=6723, max=53945, avg=11461.19, stdev=3066.29 00:28:53.227 lat (usec): min=6730, max=53957, avg=11473.22, stdev=3066.24 00:28:53.227 clat percentiles (usec): 00:28:53.227 | 1.00th=[ 8848], 5.00th=[ 9896], 10.00th=[10159], 20.00th=[10552], 00:28:53.227 | 30.00th=[10814], 40.00th=[11076], 50.00th=[11207], 60.00th=[11469], 00:28:53.227 | 70.00th=[11731], 80.00th=[11994], 90.00th=[12518], 95.00th=[12780], 00:28:53.227 | 99.00th=[13829], 99.50th=[45876], 99.90th=[53216], 99.95th=[53740], 00:28:53.227 | 99.99th=[53740] 00:28:53.227 bw ( KiB/s): min=24832, max=34816, per=32.31%, avg=33536.00, stdev=2140.24, samples=20 00:28:53.227 iops : min= 194, max= 272, avg=262.00, stdev=16.72, samples=20 00:28:53.227 lat (msec) : 10=7.02%, 20=92.45%, 50=0.08%, 100=0.46% 00:28:53.227 cpu : usr=94.76%, sys=4.89%, ctx=23, majf=0, minf=139 00:28:53.227 IO depths : 1=0.3%, 2=99.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:53.227 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:53.227 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:53.227 issued rwts: total=2622,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:53.227 latency : target=0, window=0, percentile=100.00%, depth=3 00:28:53.227 filename0: (groupid=0, jobs=1): err= 0: pid=183899: Mon Jul 15 22:45:15 2024 00:28:53.227 read: IOPS=273, BW=34.2MiB/s (35.9MB/s)(344MiB/10043msec) 00:28:53.227 slat (nsec): min=6512, max=29301, avg=11753.88, stdev=1904.42 00:28:53.227 clat (usec): min=6390, max=49607, avg=10931.01, stdev=1361.60 00:28:53.227 lat (usec): min=6402, max=49618, avg=10942.76, stdev=1361.58 00:28:53.227 clat percentiles (usec): 
00:28:53.227 | 1.00th=[ 7701], 5.00th=[ 9372], 10.00th=[ 9896], 20.00th=[10290], 00:28:53.227 | 30.00th=[10552], 40.00th=[10814], 50.00th=[10945], 60.00th=[11076], 00:28:53.227 | 70.00th=[11338], 80.00th=[11600], 90.00th=[11994], 95.00th=[12256], 00:28:53.227 | 99.00th=[12911], 99.50th=[13304], 99.90th=[14877], 99.95th=[46924], 00:28:53.227 | 99.99th=[49546] 00:28:53.227 bw ( KiB/s): min=34048, max=38144, per=33.88%, avg=35161.60, stdev=887.88, samples=20 00:28:53.227 iops : min= 266, max= 298, avg=274.70, stdev= 6.94, samples=20 00:28:53.227 lat (msec) : 10=12.51%, 20=87.41%, 50=0.07% 00:28:53.227 cpu : usr=94.22%, sys=5.45%, ctx=32, majf=0, minf=141 00:28:53.227 IO depths : 1=0.3%, 2=99.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:53.227 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:53.227 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:53.227 issued rwts: total=2749,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:53.227 latency : target=0, window=0, percentile=100.00%, depth=3 00:28:53.227 00:28:53.227 Run status group 0 (all jobs): 00:28:53.227 READ: bw=101MiB/s (106MB/s), 32.6MiB/s-34.5MiB/s (34.2MB/s-36.2MB/s), io=1018MiB (1067MB), run=10043-10044msec 00:28:53.227 22:45:15 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:28:53.227 22:45:15 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:28:53.227 22:45:15 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:28:53.227 22:45:15 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:28:53.227 22:45:15 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:28:53.227 22:45:15 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:28:53.227 22:45:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:53.227 22:45:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:28:53.227 22:45:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:53.227 22:45:15 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:28:53.227 22:45:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:53.227 22:45:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:28:53.227 22:45:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:53.227 00:28:53.227 real 0m11.104s 00:28:53.227 user 0m35.190s 00:28:53.227 sys 0m1.877s 00:28:53.227 22:45:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:53.227 22:45:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:28:53.227 ************************************ 00:28:53.227 END TEST fio_dif_digest 00:28:53.227 ************************************ 00:28:53.227 22:45:15 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:28:53.227 22:45:15 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:28:53.227 22:45:15 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:28:53.227 22:45:15 nvmf_dif -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:53.227 22:45:15 nvmf_dif -- nvmf/common.sh@117 -- # sync 00:28:53.227 22:45:15 nvmf_dif -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:53.227 22:45:15 nvmf_dif -- nvmf/common.sh@120 -- # set +e 00:28:53.227 22:45:15 nvmf_dif -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:53.227 22:45:15 nvmf_dif -- nvmf/common.sh@122 -- # 
modprobe -v -r nvme-tcp 00:28:53.227 rmmod nvme_tcp 00:28:53.227 rmmod nvme_fabrics 00:28:53.227 rmmod nvme_keyring 00:28:53.227 22:45:15 nvmf_dif -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:53.227 22:45:15 nvmf_dif -- nvmf/common.sh@124 -- # set -e 00:28:53.227 22:45:15 nvmf_dif -- nvmf/common.sh@125 -- # return 0 00:28:53.227 22:45:15 nvmf_dif -- nvmf/common.sh@489 -- # '[' -n 175204 ']' 00:28:53.227 22:45:15 nvmf_dif -- nvmf/common.sh@490 -- # killprocess 175204 00:28:53.227 22:45:15 nvmf_dif -- common/autotest_common.sh@948 -- # '[' -z 175204 ']' 00:28:53.227 22:45:15 nvmf_dif -- common/autotest_common.sh@952 -- # kill -0 175204 00:28:53.227 22:45:15 nvmf_dif -- common/autotest_common.sh@953 -- # uname 00:28:53.227 22:45:15 nvmf_dif -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:53.227 22:45:15 nvmf_dif -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 175204 00:28:53.227 22:45:15 nvmf_dif -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:28:53.227 22:45:15 nvmf_dif -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:28:53.227 22:45:15 nvmf_dif -- common/autotest_common.sh@966 -- # echo 'killing process with pid 175204' 00:28:53.227 killing process with pid 175204 00:28:53.227 22:45:15 nvmf_dif -- common/autotest_common.sh@967 -- # kill 175204 00:28:53.227 22:45:15 nvmf_dif -- common/autotest_common.sh@972 -- # wait 175204 00:28:53.227 22:45:15 nvmf_dif -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:28:53.227 22:45:15 nvmf_dif -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:28:54.600 Waiting for block devices as requested 00:28:54.600 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:28:54.600 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:28:54.600 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:28:54.600 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:28:54.600 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:28:54.858 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:28:54.858 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:28:54.858 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:28:55.116 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:28:55.116 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:28:55.116 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:28:55.116 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:28:55.374 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:28:55.374 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:28:55.374 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:28:55.631 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:28:55.631 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:28:55.631 22:45:19 nvmf_dif -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:55.631 22:45:19 nvmf_dif -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:55.631 22:45:19 nvmf_dif -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:55.631 22:45:19 nvmf_dif -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:55.631 22:45:19 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:55.631 22:45:19 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:28:55.631 22:45:19 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:58.159 22:45:21 nvmf_dif -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:58.159 00:28:58.159 real 1m13.301s 00:28:58.159 user 7m8.554s 00:28:58.159 sys 0m18.377s 00:28:58.159 22:45:21 nvmf_dif -- common/autotest_common.sh@1124 -- # 
xtrace_disable 00:28:58.159 22:45:21 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:28:58.159 ************************************ 00:28:58.159 END TEST nvmf_dif 00:28:58.159 ************************************ 00:28:58.159 22:45:21 -- common/autotest_common.sh@1142 -- # return 0 00:28:58.159 22:45:21 -- spdk/autotest.sh@293 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:28:58.159 22:45:21 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:28:58.159 22:45:21 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:58.159 22:45:21 -- common/autotest_common.sh@10 -- # set +x 00:28:58.159 ************************************ 00:28:58.159 START TEST nvmf_abort_qd_sizes 00:28:58.159 ************************************ 00:28:58.159 22:45:21 nvmf_abort_qd_sizes -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:28:58.159 * Looking for test storage... 00:28:58.160 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:28:58.160 22:45:21 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:58.160 22:45:21 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:28:58.160 22:45:21 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:58.160 22:45:21 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:58.160 22:45:21 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:58.160 22:45:21 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:58.160 22:45:21 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:58.160 22:45:21 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:58.160 22:45:21 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:58.160 22:45:21 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:58.160 22:45:21 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:58.160 22:45:21 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:58.160 22:45:21 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:28:58.160 22:45:21 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:28:58.160 22:45:21 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:58.160 22:45:21 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:58.160 22:45:21 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:58.160 22:45:21 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:58.160 22:45:21 nvmf_abort_qd_sizes -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:58.160 22:45:21 nvmf_abort_qd_sizes -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:58.160 22:45:21 nvmf_abort_qd_sizes -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:58.160 22:45:21 nvmf_abort_qd_sizes -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:58.160 22:45:21 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:58.160 22:45:21 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:58.160 22:45:21 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:58.160 22:45:21 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:28:58.160 22:45:21 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:58.160 22:45:21 nvmf_abort_qd_sizes -- nvmf/common.sh@47 -- # : 0 00:28:58.160 22:45:21 nvmf_abort_qd_sizes -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:58.160 22:45:21 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:58.160 22:45:21 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:58.160 22:45:21 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:58.160 22:45:21 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:58.160 22:45:21 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:58.160 22:45:21 nvmf_abort_qd_sizes -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:58.160 22:45:21 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:58.160 22:45:21 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:28:58.160 22:45:21 nvmf_abort_qd_sizes -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:58.160 22:45:21 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:58.160 22:45:21 nvmf_abort_qd_sizes -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:58.160 22:45:21 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:58.160 22:45:21 nvmf_abort_qd_sizes -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:58.160 22:45:21 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:58.160 22:45:21 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:28:58.160 22:45:21 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:58.160 22:45:21 
nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:58.160 22:45:21 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:58.160 22:45:21 nvmf_abort_qd_sizes -- nvmf/common.sh@285 -- # xtrace_disable 00:28:58.160 22:45:21 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:29:03.418 22:45:26 nvmf_abort_qd_sizes -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:03.418 22:45:26 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # pci_devs=() 00:29:03.418 22:45:26 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # local -a pci_devs 00:29:03.418 22:45:26 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # pci_net_devs=() 00:29:03.418 22:45:26 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:29:03.418 22:45:26 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # pci_drivers=() 00:29:03.418 22:45:26 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # local -A pci_drivers 00:29:03.418 22:45:26 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # net_devs=() 00:29:03.418 22:45:26 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # local -ga net_devs 00:29:03.418 22:45:26 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # e810=() 00:29:03.418 22:45:26 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # local -ga e810 00:29:03.418 22:45:26 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # x722=() 00:29:03.418 22:45:26 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # local -ga x722 00:29:03.418 22:45:26 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # mlx=() 00:29:03.418 22:45:26 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # local -ga mlx 00:29:03.418 22:45:26 nvmf_abort_qd_sizes -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:03.418 22:45:26 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:03.418 22:45:26 nvmf_abort_qd_sizes -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:03.418 22:45:26 nvmf_abort_qd_sizes -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:03.418 22:45:26 nvmf_abort_qd_sizes -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:03.418 22:45:26 nvmf_abort_qd_sizes -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:03.418 22:45:26 nvmf_abort_qd_sizes -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:03.418 22:45:26 nvmf_abort_qd_sizes -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:03.418 22:45:26 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:03.418 22:45:26 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:03.418 22:45:26 nvmf_abort_qd_sizes -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:03.418 22:45:26 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:29:03.418 22:45:26 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:29:03.418 22:45:26 nvmf_abort_qd_sizes -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:29:03.418 22:45:26 nvmf_abort_qd_sizes -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:29:03.418 22:45:26 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:29:03.418 22:45:26 nvmf_abort_qd_sizes -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:29:03.418 22:45:26 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:03.418 22:45:26 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # 
echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:29:03.418 Found 0000:86:00.0 (0x8086 - 0x159b) 00:29:03.418 22:45:26 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:03.418 22:45:26 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:03.418 22:45:26 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:03.418 22:45:26 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:03.418 22:45:26 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:03.418 22:45:26 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:03.418 22:45:26 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:29:03.418 Found 0000:86:00.1 (0x8086 - 0x159b) 00:29:03.418 22:45:26 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:03.418 22:45:26 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:03.418 22:45:26 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:03.418 22:45:26 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:03.418 22:45:26 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:03.418 22:45:26 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:29:03.418 22:45:26 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:29:03.418 22:45:26 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:29:03.418 22:45:26 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:03.418 22:45:26 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:03.418 22:45:26 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:03.418 22:45:26 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:03.418 22:45:26 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:03.418 22:45:26 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:03.418 22:45:26 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:03.418 22:45:26 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:29:03.418 Found net devices under 0000:86:00.0: cvl_0_0 00:29:03.418 22:45:26 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:03.418 22:45:26 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:03.418 22:45:26 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:03.418 22:45:26 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:03.418 22:45:26 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:03.418 22:45:26 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:03.418 22:45:26 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:03.418 22:45:26 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:03.418 22:45:26 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:29:03.418 Found net devices under 0000:86:00.1: cvl_0_1 00:29:03.418 22:45:26 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:03.418 22:45:26 nvmf_abort_qd_sizes -- nvmf/common.sh@404 -- # (( 2 == 0 )) 
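The discovery pass above matches the two Intel 0x159b (E810) functions bound to ice, then resolves each PCI function to its kernel interface with nothing more than a sysfs glob — the same expansion the trace shows as pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*):

    # PCI function -> net device, as done above:
    pci=0000:86:00.0
    ls "/sys/bus/pci/devices/$pci/net/"    # -> cvl_0_0 on this rig

The trace that follows then splits the pair across a network namespace: cvl_0_0 becomes the target side (10.0.0.2 inside cvl_0_0_ns_spdk) while cvl_0_1 stays in the root namespace as the initiator (10.0.0.1), with an iptables accept rule for port 4420.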
00:29:03.418 22:45:26 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # is_hw=yes 00:29:03.418 22:45:26 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:29:03.418 22:45:26 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:29:03.418 22:45:26 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:29:03.418 22:45:26 nvmf_abort_qd_sizes -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:03.418 22:45:26 nvmf_abort_qd_sizes -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:03.418 22:45:26 nvmf_abort_qd_sizes -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:03.418 22:45:26 nvmf_abort_qd_sizes -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:29:03.418 22:45:26 nvmf_abort_qd_sizes -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:03.418 22:45:26 nvmf_abort_qd_sizes -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:03.418 22:45:26 nvmf_abort_qd_sizes -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:29:03.418 22:45:26 nvmf_abort_qd_sizes -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:03.418 22:45:26 nvmf_abort_qd_sizes -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:03.418 22:45:26 nvmf_abort_qd_sizes -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:29:03.419 22:45:26 nvmf_abort_qd_sizes -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:29:03.419 22:45:26 nvmf_abort_qd_sizes -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:29:03.419 22:45:26 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:03.419 22:45:26 nvmf_abort_qd_sizes -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:03.419 22:45:26 nvmf_abort_qd_sizes -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:03.419 22:45:26 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:29:03.419 22:45:26 nvmf_abort_qd_sizes -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:03.419 22:45:26 nvmf_abort_qd_sizes -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:03.419 22:45:26 nvmf_abort_qd_sizes -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:03.419 22:45:26 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:29:03.419 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:03.419 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.153 ms 00:29:03.419 00:29:03.419 --- 10.0.0.2 ping statistics --- 00:29:03.419 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:03.419 rtt min/avg/max/mdev = 0.153/0.153/0.153/0.000 ms 00:29:03.419 22:45:26 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:03.419 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:03.419 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.236 ms 00:29:03.419 00:29:03.419 --- 10.0.0.1 ping statistics --- 00:29:03.419 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:03.419 rtt min/avg/max/mdev = 0.236/0.236/0.236/0.000 ms 00:29:03.419 22:45:26 nvmf_abort_qd_sizes -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:03.419 22:45:26 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # return 0 00:29:03.419 22:45:26 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:29:03.419 22:45:26 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:29:05.951 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:29:05.951 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:29:05.951 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:29:05.951 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:29:05.951 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:29:05.951 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:29:05.951 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:29:05.951 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:29:05.951 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:29:05.951 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:29:05.951 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:29:05.951 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:29:05.951 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:29:05.951 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:29:05.951 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:29:05.951 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:29:06.518 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:29:06.776 22:45:30 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:06.776 22:45:30 nvmf_abort_qd_sizes -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:29:06.776 22:45:30 nvmf_abort_qd_sizes -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:29:06.776 22:45:30 nvmf_abort_qd_sizes -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:06.776 22:45:30 nvmf_abort_qd_sizes -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:29:06.776 22:45:30 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:29:06.776 22:45:30 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:29:06.776 22:45:30 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:29:06.776 22:45:30 nvmf_abort_qd_sizes -- common/autotest_common.sh@722 -- # xtrace_disable 00:29:06.776 22:45:30 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:29:06.776 22:45:30 nvmf_abort_qd_sizes -- nvmf/common.sh@481 -- # nvmfpid=192019 00:29:06.776 22:45:30 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # waitforlisten 192019 00:29:06.776 22:45:30 nvmf_abort_qd_sizes -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:29:06.776 22:45:30 nvmf_abort_qd_sizes -- common/autotest_common.sh@829 -- # '[' -z 192019 ']' 00:29:06.776 22:45:30 nvmf_abort_qd_sizes -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:06.776 22:45:30 nvmf_abort_qd_sizes -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:06.776 22:45:30 nvmf_abort_qd_sizes -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:29:06.776 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:06.776 22:45:30 nvmf_abort_qd_sizes -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:06.776 22:45:30 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:29:06.776 [2024-07-15 22:45:30.594822] Starting SPDK v24.09-pre git sha1 f8598a71f / DPDK 24.03.0 initialization... 00:29:06.776 [2024-07-15 22:45:30.594863] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:06.776 EAL: No free 2048 kB hugepages reported on node 1 00:29:06.776 [2024-07-15 22:45:30.651982] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:06.776 [2024-07-15 22:45:30.733735] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:06.776 [2024-07-15 22:45:30.733772] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:06.776 [2024-07-15 22:45:30.733779] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:06.776 [2024-07-15 22:45:30.733786] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:06.776 [2024-07-15 22:45:30.733791] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:06.776 [2024-07-15 22:45:30.733843] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:06.776 [2024-07-15 22:45:30.733859] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:29:06.776 [2024-07-15 22:45:30.733971] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:29:06.776 [2024-07-15 22:45:30.733973] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:07.708 22:45:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:07.708 22:45:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@862 -- # return 0 00:29:07.708 22:45:31 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:29:07.708 22:45:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:07.708 22:45:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:29:07.708 22:45:31 nvmf_abort_qd_sizes -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:07.708 22:45:31 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:29:07.708 22:45:31 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:29:07.708 22:45:31 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:29:07.708 22:45:31 nvmf_abort_qd_sizes -- scripts/common.sh@309 -- # local bdf bdfs 00:29:07.708 22:45:31 nvmf_abort_qd_sizes -- scripts/common.sh@310 -- # local nvmes 00:29:07.708 22:45:31 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # [[ -n 0000:5e:00.0 ]] 00:29:07.709 22:45:31 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:29:07.709 22:45:31 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:29:07.709 22:45:31 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:5e:00.0 ]] 00:29:07.709 22:45:31 
nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # uname -s 00:29:07.709 22:45:31 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:29:07.709 22:45:31 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:29:07.709 22:45:31 nvmf_abort_qd_sizes -- scripts/common.sh@325 -- # (( 1 )) 00:29:07.709 22:45:31 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # printf '%s\n' 0000:5e:00.0 00:29:07.709 22:45:31 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:29:07.709 22:45:31 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:5e:00.0 00:29:07.709 22:45:31 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:29:07.709 22:45:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:29:07.709 22:45:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:07.709 22:45:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:29:07.709 ************************************ 00:29:07.709 START TEST spdk_target_abort 00:29:07.709 ************************************ 00:29:07.709 22:45:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1123 -- # spdk_target 00:29:07.709 22:45:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:29:07.709 22:45:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:5e:00.0 -b spdk_target 00:29:07.709 22:45:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:07.709 22:45:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:29:11.002 spdk_targetn1 00:29:11.002 22:45:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:11.002 22:45:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:11.002 22:45:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:11.002 22:45:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:29:11.002 [2024-07-15 22:45:34.308258] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:11.002 22:45:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:11.002 22:45:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:29:11.002 22:45:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:11.002 22:45:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:29:11.002 22:45:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:11.002 22:45:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:29:11.002 22:45:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:11.002 22:45:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:29:11.002 22:45:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:11.002 22:45:34 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:29:11.002 22:45:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:11.002 22:45:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:29:11.002 [2024-07-15 22:45:34.337277] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:11.002 22:45:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:11.002 22:45:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:29:11.002 22:45:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:29:11.002 22:45:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:29:11.002 22:45:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:29:11.002 22:45:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:29:11.002 22:45:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:29:11.002 22:45:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:29:11.002 22:45:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:29:11.002 22:45:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:29:11.002 22:45:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:29:11.002 22:45:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:29:11.002 22:45:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:29:11.002 22:45:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:29:11.002 22:45:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:29:11.002 22:45:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:29:11.002 22:45:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:29:11.002 22:45:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:11.002 22:45:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:29:11.002 22:45:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:29:11.002 22:45:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:29:11.002 22:45:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:29:11.002 EAL: No free 2048 kB hugepages 
reported on node 1 00:29:13.524 Initializing NVMe Controllers 00:29:13.524 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:29:13.524 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:29:13.524 Initialization complete. Launching workers. 00:29:13.524 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 14314, failed: 0 00:29:13.524 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1543, failed to submit 12771 00:29:13.524 success 798, unsuccess 745, failed 0 00:29:13.524 22:45:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:29:13.524 22:45:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:29:13.781 EAL: No free 2048 kB hugepages reported on node 1 00:29:17.117 Initializing NVMe Controllers 00:29:17.117 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:29:17.117 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:29:17.117 Initialization complete. Launching workers. 00:29:17.117 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8446, failed: 0 00:29:17.117 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1236, failed to submit 7210 00:29:17.117 success 332, unsuccess 904, failed 0 00:29:17.117 22:45:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:29:17.117 22:45:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:29:17.117 EAL: No free 2048 kB hugepages reported on node 1 00:29:20.392 Initializing NVMe Controllers 00:29:20.392 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:29:20.392 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:29:20.392 Initialization complete. Launching workers. 
00:29:20.392 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 37884, failed: 0 00:29:20.392 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2881, failed to submit 35003 00:29:20.392 success 588, unsuccess 2293, failed 0 00:29:20.392 22:45:44 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:29:20.392 22:45:44 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:20.392 22:45:44 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:29:20.392 22:45:44 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:20.392 22:45:44 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:29:20.392 22:45:44 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:20.392 22:45:44 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:29:21.762 22:45:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:21.762 22:45:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 192019 00:29:21.762 22:45:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@948 -- # '[' -z 192019 ']' 00:29:21.762 22:45:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@952 -- # kill -0 192019 00:29:21.762 22:45:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@953 -- # uname 00:29:21.762 22:45:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:21.762 22:45:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 192019 00:29:21.762 22:45:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:29:21.762 22:45:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:29:21.762 22:45:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@966 -- # echo 'killing process with pid 192019' 00:29:21.762 killing process with pid 192019 00:29:21.762 22:45:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@967 -- # kill 192019 00:29:21.762 22:45:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # wait 192019 00:29:21.762 00:29:21.762 real 0m14.096s 00:29:21.762 user 0m56.098s 00:29:21.762 sys 0m2.334s 00:29:21.762 22:45:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:21.762 22:45:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:29:21.762 ************************************ 00:29:21.762 END TEST spdk_target_abort 00:29:21.762 ************************************ 00:29:21.762 22:45:45 nvmf_abort_qd_sizes -- common/autotest_common.sh@1142 -- # return 0 00:29:21.762 22:45:45 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:29:21.762 22:45:45 nvmf_abort_qd_sizes -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:29:21.762 22:45:45 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:21.762 22:45:45 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:29:21.762 
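That closes spdk_target_abort: three passes of SPDK's abort example at queue depths 4, 24 and 64 against the exported namespace, each reporting how many abort commands were accepted (success) versus completed without taking effect (unsuccess) — typically because the target had already finished the I/O. The loop the harness runs reduces to:

    # Shape of the traced loop; -M 50 presumably sets the 50/50 read/write
    # mix as in SPDK's perf tool, -o 4096 the I/O size in bytes.
    for qd in 4 24 64; do
      ./build/examples/abort -q "$qd" -w rw -M 50 -o 4096 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
    done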
************************************ 00:29:21.762 START TEST kernel_target_abort 00:29:21.762 ************************************ 00:29:21.762 22:45:45 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1123 -- # kernel_target 00:29:21.762 22:45:45 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:29:21.762 22:45:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@741 -- # local ip 00:29:21.762 22:45:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:21.762 22:45:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:21.762 22:45:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:21.762 22:45:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:21.762 22:45:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:21.762 22:45:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:21.762 22:45:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:21.762 22:45:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:21.762 22:45:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:21.762 22:45:45 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:29:21.762 22:45:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:29:21.762 22:45:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:29:21.762 22:45:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:29:21.762 22:45:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:29:21.762 22:45:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:29:21.762 22:45:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@639 -- # local block nvme 00:29:21.762 22:45:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:29:21.762 22:45:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@642 -- # modprobe nvmet 00:29:21.762 22:45:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:29:21.762 22:45:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:29:24.288 Waiting for block devices as requested 00:29:24.288 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:29:24.288 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:29:24.288 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:29:24.288 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:29:24.288 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:29:24.288 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:29:24.288 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:29:24.288 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:29:24.546 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:29:24.546 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:29:24.546 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:29:24.804 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:29:24.804 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:29:24.804 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:29:24.804 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:29:25.062 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:29:25.062 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:29:25.062 22:45:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:29:25.062 22:45:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:29:25.062 22:45:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:29:25.062 22:45:48 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:29:25.062 22:45:48 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:29:25.062 22:45:48 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:29:25.062 22:45:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:29:25.062 22:45:48 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:29:25.062 22:45:48 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:29:25.319 No valid GPT data, bailing 00:29:25.319 22:45:49 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:29:25.319 22:45:49 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:29:25.319 22:45:49 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:29:25.319 22:45:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:29:25.319 22:45:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:29:25.319 22:45:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:29:25.319 22:45:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:29:25.319 22:45:49 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:29:25.319 22:45:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:29:25.319 22:45:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # echo 1 00:29:25.319 22:45:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:29:25.319 22:45:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # echo 1 00:29:25.319 22:45:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:29:25.319 22:45:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@672 -- # echo tcp 00:29:25.319 22:45:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # echo 4420 00:29:25.319 22:45:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@674 -- # echo ipv4 00:29:25.320 22:45:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:29:25.320 22:45:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:29:25.320 00:29:25.320 Discovery Log Number of Records 2, Generation counter 2 00:29:25.320 =====Discovery Log Entry 0====== 00:29:25.320 trtype: tcp 00:29:25.320 adrfam: ipv4 00:29:25.320 subtype: current discovery subsystem 00:29:25.320 treq: not specified, sq flow control disable supported 00:29:25.320 portid: 1 00:29:25.320 trsvcid: 4420 00:29:25.320 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:29:25.320 traddr: 10.0.0.1 00:29:25.320 eflags: none 00:29:25.320 sectype: none 00:29:25.320 =====Discovery Log Entry 1====== 00:29:25.320 trtype: tcp 00:29:25.320 adrfam: ipv4 00:29:25.320 subtype: nvme subsystem 00:29:25.320 treq: not specified, sq flow control disable supported 00:29:25.320 portid: 1 00:29:25.320 trsvcid: 4420 00:29:25.320 subnqn: nqn.2016-06.io.spdk:testnqn 00:29:25.320 traddr: 10.0.0.1 00:29:25.320 eflags: none 00:29:25.320 sectype: none 00:29:25.320 22:45:49 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:29:25.320 22:45:49 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:29:25.320 22:45:49 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:29:25.320 22:45:49 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:29:25.320 22:45:49 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:29:25.320 22:45:49 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:29:25.320 22:45:49 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:29:25.320 22:45:49 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:29:25.320 22:45:49 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:29:25.320 22:45:49 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:29:25.320 22:45:49 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:29:25.320 22:45:49 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:29:25.320 22:45:49 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:29:25.320 22:45:49 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:29:25.320 22:45:49 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:29:25.320 22:45:49 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:29:25.320 22:45:49 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:29:25.320 22:45:49 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:29:25.320 22:45:49 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:29:25.320 22:45:49 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:29:25.320 22:45:49 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:29:25.320 EAL: No free 2048 kB hugepages reported on node 1 00:29:28.587 Initializing NVMe Controllers 00:29:28.587 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:29:28.587 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:29:28.587 Initialization complete. Launching workers. 00:29:28.587 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 75512, failed: 0 00:29:28.587 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 75512, failed to submit 0 00:29:28.587 success 0, unsuccess 75512, failed 0 00:29:28.587 22:45:52 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:29:28.587 22:45:52 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:29:28.587 EAL: No free 2048 kB hugepages reported on node 1 00:29:31.859 Initializing NVMe Controllers 00:29:31.859 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:29:31.859 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:29:31.859 Initialization complete. Launching workers. 
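[Annotation] The configure_kernel_target sequence traced above maps onto the stock nvmet configfs layout. A minimal standalone sketch follows; xtrace does not record redirection targets, so the attribute paths below (attr_model, attr_allow_any_host, device_path, enable, addr_*) are the standard nvmet names filled in as assumptions, reusing /dev/nvme0n1 and 10.0.0.1:4420 from the trace:

# Sketch of the kernel NVMe-oF/TCP target setup traced above (assumptions noted).
subsys=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
ns=$subsys/namespaces/1
port=/sys/kernel/config/nvmet/ports/1

modprobe nvmet                                                 # exposes /sys/kernel/config/nvmet
mkdir "$subsys" "$ns" "$port"                                  # subsystem first, then namespace, then port
echo SPDK-nqn.2016-06.io.spdk:testnqn > "$subsys/attr_model"   # assumed target of the 'echo SPDK-...' above
echo 1 > "$subsys/attr_allow_any_host"                         # assumed target of the first 'echo 1'
echo /dev/nvme0n1 > "$ns/device_path"                          # block device backing NSID 1
echo 1 > "$ns/enable"
echo 10.0.0.1 > "$port/addr_traddr"
echo tcp > "$port/addr_trtype"                                 # nvmet_tcp is loaded on demand when the port goes live
echo 4420 > "$port/addr_trsvcid"
echo ipv4 > "$port/addr_adrfam"
ln -s "$subsys" "$port/subsystems/"                            # publish the subsystem on the port

The two-entry discovery log above (the discovery subsystem plus testnqn) confirms the port/subsystem link, and the rabort runs then sweep qds=(4 24 64) against this target with otherwise identical flags (-w rw -M 50 -o 4096).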
00:29:31.859 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 126292, failed: 0 00:29:31.859 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 31838, failed to submit 94454 00:29:31.859 success 0, unsuccess 31838, failed 0 00:29:31.859 22:45:55 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:29:31.859 22:45:55 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:29:31.859 EAL: No free 2048 kB hugepages reported on node 1 00:29:35.137 Initializing NVMe Controllers 00:29:35.137 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:29:35.137 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:29:35.137 Initialization complete. Launching workers. 00:29:35.137 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 121354, failed: 0 00:29:35.137 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 30366, failed to submit 90988 00:29:35.137 success 0, unsuccess 30366, failed 0 00:29:35.137 22:45:58 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:29:35.137 22:45:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:29:35.137 22:45:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # echo 0 00:29:35.137 22:45:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:29:35.137 22:45:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:29:35.137 22:45:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:29:35.137 22:45:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:29:35.137 22:45:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:29:35.137 22:45:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:29:35.137 22:45:58 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:29:37.036 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:29:37.036 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:29:37.036 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:29:37.036 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:29:37.036 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:29:37.036 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:29:37.036 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:29:37.036 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:29:37.036 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:29:37.036 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:29:37.036 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:29:37.036 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:29:37.036 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:29:37.036 0000:80:04.2 (8086 2021): ioatdma -> 
vfio-pci 00:29:37.036 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:29:37.036 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:29:37.970 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:29:37.970 00:29:37.970 real 0m16.124s 00:29:37.970 user 0m7.314s 00:29:37.970 sys 0m4.548s 00:29:37.970 22:46:01 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:37.970 22:46:01 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:29:37.970 ************************************ 00:29:37.970 END TEST kernel_target_abort 00:29:37.970 ************************************ 00:29:37.970 22:46:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@1142 -- # return 0 00:29:37.970 22:46:01 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:29:37.970 22:46:01 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:29:37.970 22:46:01 nvmf_abort_qd_sizes -- nvmf/common.sh@488 -- # nvmfcleanup 00:29:37.970 22:46:01 nvmf_abort_qd_sizes -- nvmf/common.sh@117 -- # sync 00:29:37.970 22:46:01 nvmf_abort_qd_sizes -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:29:37.970 22:46:01 nvmf_abort_qd_sizes -- nvmf/common.sh@120 -- # set +e 00:29:37.970 22:46:01 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # for i in {1..20} 00:29:37.970 22:46:01 nvmf_abort_qd_sizes -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:29:37.970 rmmod nvme_tcp 00:29:37.970 rmmod nvme_fabrics 00:29:37.970 rmmod nvme_keyring 00:29:37.970 22:46:01 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:29:37.970 22:46:01 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set -e 00:29:37.970 22:46:01 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # return 0 00:29:37.970 22:46:01 nvmf_abort_qd_sizes -- nvmf/common.sh@489 -- # '[' -n 192019 ']' 00:29:37.970 22:46:01 nvmf_abort_qd_sizes -- nvmf/common.sh@490 -- # killprocess 192019 00:29:37.970 22:46:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@948 -- # '[' -z 192019 ']' 00:29:37.970 22:46:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@952 -- # kill -0 192019 00:29:37.970 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (192019) - No such process 00:29:37.970 22:46:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@975 -- # echo 'Process with pid 192019 is not found' 00:29:37.970 Process with pid 192019 is not found 00:29:37.970 22:46:01 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:29:37.970 22:46:01 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:29:40.496 Waiting for block devices as requested 00:29:40.496 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:29:40.496 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:29:40.753 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:29:40.753 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:29:40.753 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:29:40.753 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:29:41.012 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:29:41.012 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:29:41.012 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:29:41.012 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:29:41.270 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:29:41.270 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:29:41.270 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:29:41.531 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 
00:29:41.531 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:29:41.531 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:29:41.531 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:29:41.810 22:46:05 nvmf_abort_qd_sizes -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:29:41.810 22:46:05 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:29:41.810 22:46:05 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:41.810 22:46:05 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # remove_spdk_ns 00:29:41.810 22:46:05 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:41.810 22:46:05 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:29:41.810 22:46:05 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:43.711 22:46:07 nvmf_abort_qd_sizes -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:29:43.711 00:29:43.711 real 0m45.986s 00:29:43.711 user 1m7.289s 00:29:43.711 sys 0m14.741s 00:29:43.711 22:46:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:43.711 22:46:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:29:43.711 ************************************ 00:29:43.711 END TEST nvmf_abort_qd_sizes 00:29:43.711 ************************************ 00:29:43.711 22:46:07 -- common/autotest_common.sh@1142 -- # return 0 00:29:43.711 22:46:07 -- spdk/autotest.sh@295 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:29:43.711 22:46:07 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:29:43.711 22:46:07 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:43.711 22:46:07 -- common/autotest_common.sh@10 -- # set +x 00:29:43.970 ************************************ 00:29:43.970 START TEST keyring_file 00:29:43.970 ************************************ 00:29:43.970 22:46:07 keyring_file -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:29:43.970 * Looking for test storage... 
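[Annotation] For symmetry, the clean_kernel_target steps traced just before the driver rebind undo that setup in reverse order; a condensed sketch, with the same caveat that the target of the traced 'echo 0' (the namespace enable attribute) is an assumption:

# Sketch of the kernel target teardown traced above.
subsys=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
port=/sys/kernel/config/nvmet/ports/1

echo 0 > "$subsys/namespaces/1/enable"                 # assumed target of 'echo 0'
rm -f "$port/subsystems/nqn.2016-06.io.spdk:testnqn"   # unlink the subsystem from the port
rmdir "$subsys/namespaces/1" "$port" "$subsys"         # remove the now-empty configfs dirs
modprobe -r nvmet_tcp nvmet                            # transport first, then the core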
00:29:43.970 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:29:43.970 22:46:07 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:29:43.970 22:46:07 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:43.970 22:46:07 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:29:43.970 22:46:07 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:43.970 22:46:07 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:43.970 22:46:07 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:43.970 22:46:07 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:43.970 22:46:07 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:43.970 22:46:07 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:43.970 22:46:07 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:43.970 22:46:07 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:43.970 22:46:07 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:43.970 22:46:07 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:43.970 22:46:07 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:29:43.970 22:46:07 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:29:43.970 22:46:07 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:43.970 22:46:07 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:43.970 22:46:07 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:43.970 22:46:07 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:43.970 22:46:07 keyring_file -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:43.970 22:46:07 keyring_file -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:43.970 22:46:07 keyring_file -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:43.970 22:46:07 keyring_file -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:43.970 22:46:07 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:43.970 22:46:07 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:43.971 22:46:07 keyring_file -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:43.971 22:46:07 keyring_file -- paths/export.sh@5 -- # export PATH 00:29:43.971 22:46:07 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:43.971 22:46:07 keyring_file -- nvmf/common.sh@47 -- # : 0 00:29:43.971 22:46:07 keyring_file -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:43.971 22:46:07 keyring_file -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:43.971 22:46:07 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:43.971 22:46:07 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:43.971 22:46:07 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:43.971 22:46:07 keyring_file -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:43.971 22:46:07 keyring_file -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:43.971 22:46:07 keyring_file -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:43.971 22:46:07 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:29:43.971 22:46:07 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:29:43.971 22:46:07 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:29:43.971 22:46:07 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:29:43.971 22:46:07 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:29:43.971 22:46:07 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:29:43.971 22:46:07 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:29:43.971 22:46:07 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:29:43.971 22:46:07 keyring_file -- keyring/common.sh@17 -- # name=key0 00:29:43.971 22:46:07 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:29:43.971 22:46:07 keyring_file -- keyring/common.sh@17 -- # digest=0 00:29:43.971 22:46:07 keyring_file -- keyring/common.sh@18 -- # mktemp 00:29:43.971 22:46:07 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.UB0zDUjCOu 00:29:43.971 22:46:07 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:29:43.971 22:46:07 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:29:43.971 22:46:07 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:29:43.971 22:46:07 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:29:43.971 22:46:07 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:29:43.971 22:46:07 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:29:43.971 22:46:07 keyring_file -- nvmf/common.sh@705 -- # python - 00:29:43.971 22:46:07 keyring_file -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.UB0zDUjCOu 00:29:43.971 22:46:07 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.UB0zDUjCOu 00:29:43.971 22:46:07 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.UB0zDUjCOu 00:29:43.971 22:46:07 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:29:43.971 22:46:07 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:29:43.971 22:46:07 keyring_file -- keyring/common.sh@17 -- # name=key1 00:29:43.971 22:46:07 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:29:43.971 22:46:07 keyring_file -- keyring/common.sh@17 -- # digest=0 00:29:43.971 22:46:07 keyring_file -- keyring/common.sh@18 -- # mktemp 00:29:43.971 22:46:07 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.tWSwKa1klT 00:29:43.971 22:46:07 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:29:43.971 22:46:07 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:29:43.971 22:46:07 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:29:43.971 22:46:07 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:29:43.971 22:46:07 keyring_file -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:29:43.971 22:46:07 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:29:43.971 22:46:07 keyring_file -- nvmf/common.sh@705 -- # python - 00:29:43.971 22:46:07 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.tWSwKa1klT 00:29:43.971 22:46:07 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.tWSwKa1klT 00:29:43.971 22:46:07 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.tWSwKa1klT 00:29:43.971 22:46:07 keyring_file -- keyring/file.sh@30 -- # tgtpid=200654 00:29:43.971 22:46:07 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:29:43.971 22:46:07 keyring_file -- keyring/file.sh@32 -- # waitforlisten 200654 00:29:43.971 22:46:07 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 200654 ']' 00:29:43.971 22:46:07 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:43.971 22:46:07 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:43.971 22:46:07 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:43.971 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:43.971 22:46:07 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:43.971 22:46:07 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:29:44.229 [2024-07-15 22:46:07.973036] Starting SPDK v24.09-pre git sha1 f8598a71f / DPDK 24.03.0 initialization... 
00:29:44.229 [2024-07-15 22:46:07.973084] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid200654 ] 00:29:44.229 EAL: No free 2048 kB hugepages reported on node 1 00:29:44.229 [2024-07-15 22:46:08.027636] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:44.229 [2024-07-15 22:46:08.103578] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:44.794 22:46:08 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:44.794 22:46:08 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:29:44.794 22:46:08 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:29:44.794 22:46:08 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:44.794 22:46:08 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:29:44.794 [2024-07-15 22:46:08.763837] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:45.053 null0 00:29:45.053 [2024-07-15 22:46:08.795891] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:29:45.053 [2024-07-15 22:46:08.796073] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:29:45.053 [2024-07-15 22:46:08.803899] tcp.c:3693:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:29:45.053 22:46:08 keyring_file -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:45.053 22:46:08 keyring_file -- keyring/file.sh@43 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:29:45.053 22:46:08 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:29:45.053 22:46:08 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:29:45.053 22:46:08 keyring_file -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:29:45.053 22:46:08 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:29:45.053 22:46:08 keyring_file -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:29:45.053 22:46:08 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:29:45.053 22:46:08 keyring_file -- common/autotest_common.sh@651 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:29:45.053 22:46:08 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:45.053 22:46:08 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:29:45.053 [2024-07-15 22:46:08.819944] nvmf_rpc.c: 788:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:29:45.053 request: 00:29:45.053 { 00:29:45.053 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:29:45.053 "secure_channel": false, 00:29:45.053 "listen_address": { 00:29:45.053 "trtype": "tcp", 00:29:45.053 "traddr": "127.0.0.1", 00:29:45.053 "trsvcid": "4420" 00:29:45.053 }, 00:29:45.053 "method": "nvmf_subsystem_add_listener", 00:29:45.053 "req_id": 1 00:29:45.053 } 00:29:45.053 Got JSON-RPC error response 00:29:45.053 response: 00:29:45.053 { 00:29:45.053 "code": -32602, 00:29:45.053 "message": "Invalid parameters" 00:29:45.053 } 00:29:45.053 22:46:08 keyring_file -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:29:45.053 22:46:08 keyring_file -- common/autotest_common.sh@651 -- # es=1 
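[Annotation] The prep_key helper traced above builds each key file by piping a small program to 'python -'; the heredoc body is not captured by xtrace, so the following is an illustrative, hypothetical stand-in showing the NVMe TLS PSK interchange format it produces (NVMeTLSkey-1:<hash indicator>:<base64 of PSK plus CRC-32>:), not the verbatim helper:

# Hypothetical stand-in for format_interchange_psk (digest 0 -> hash indicator 00).
key=00112233445566778899aabbccddeeff        # configured PSK bytes, taken literally as in the trace
keyfile=$(mktemp)                           # e.g. /tmp/tmp.UB0zDUjCOu above
python3 - "$key" > "$keyfile" << 'PY'
import base64, sys, zlib
psk = sys.argv[1].encode()                  # raw PSK bytes
crc = zlib.crc32(psk).to_bytes(4, "little") # CRC-32 trailer per the interchange format
print("NVMeTLSkey-1:00:" + base64.b64encode(psk + crc).decode() + ":")
PY
chmod 0600 "$keyfile"                       # owner-only, as the keyring later requires

The -32602 exchange above is the expected negative-path result: the listener on 127.0.0.1:4420 was added once, so the second nvmf_subsystem_add_listener is rejected ('Listener already exists' / 'Invalid parameters').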
00:29:45.053 22:46:08 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:29:45.053 22:46:08 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:29:45.053 22:46:08 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:29:45.053 22:46:08 keyring_file -- keyring/file.sh@46 -- # bperfpid=200669 00:29:45.053 22:46:08 keyring_file -- keyring/file.sh@48 -- # waitforlisten 200669 /var/tmp/bperf.sock 00:29:45.053 22:46:08 keyring_file -- keyring/file.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:29:45.053 22:46:08 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 200669 ']' 00:29:45.053 22:46:08 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:45.053 22:46:08 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:45.053 22:46:08 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:45.053 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:45.054 22:46:08 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:45.054 22:46:08 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:29:45.054 [2024-07-15 22:46:08.871740] Starting SPDK v24.09-pre git sha1 f8598a71f / DPDK 24.03.0 initialization... 00:29:45.054 [2024-07-15 22:46:08.871783] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid200669 ] 00:29:45.054 EAL: No free 2048 kB hugepages reported on node 1 00:29:45.054 [2024-07-15 22:46:08.926471] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:45.054 [2024-07-15 22:46:09.005715] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:45.985 22:46:09 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:45.985 22:46:09 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:29:45.985 22:46:09 keyring_file -- keyring/file.sh@49 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.UB0zDUjCOu 00:29:45.985 22:46:09 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.UB0zDUjCOu 00:29:45.985 22:46:09 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.tWSwKa1klT 00:29:45.985 22:46:09 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.tWSwKa1klT 00:29:46.243 22:46:10 keyring_file -- keyring/file.sh@51 -- # get_key key0 00:29:46.243 22:46:10 keyring_file -- keyring/file.sh@51 -- # jq -r .path 00:29:46.243 22:46:10 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:46.243 22:46:10 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:29:46.243 22:46:10 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:46.501 22:46:10 keyring_file -- keyring/file.sh@51 -- # [[ /tmp/tmp.UB0zDUjCOu == \/\t\m\p\/\t\m\p\.\U\B\0\z\D\U\j\C\O\u ]] 00:29:46.501 22:46:10 keyring_file -- keyring/file.sh@52 
-- # get_key key1 00:29:46.501 22:46:10 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:29:46.501 22:46:10 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:46.501 22:46:10 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:46.501 22:46:10 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:29:46.501 22:46:10 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.tWSwKa1klT == \/\t\m\p\/\t\m\p\.\t\W\S\w\K\a\1\k\l\T ]] 00:29:46.501 22:46:10 keyring_file -- keyring/file.sh@53 -- # get_refcnt key0 00:29:46.501 22:46:10 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:29:46.501 22:46:10 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:46.501 22:46:10 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:46.501 22:46:10 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:29:46.501 22:46:10 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:46.758 22:46:10 keyring_file -- keyring/file.sh@53 -- # (( 1 == 1 )) 00:29:46.758 22:46:10 keyring_file -- keyring/file.sh@54 -- # get_refcnt key1 00:29:46.758 22:46:10 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:29:46.758 22:46:10 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:46.758 22:46:10 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:46.758 22:46:10 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:29:46.758 22:46:10 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:47.017 22:46:10 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:29:47.017 22:46:10 keyring_file -- keyring/file.sh@57 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:29:47.017 22:46:10 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:29:47.017 [2024-07-15 22:46:10.943766] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:29:47.273 nvme0n1 00:29:47.273 22:46:11 keyring_file -- keyring/file.sh@59 -- # get_refcnt key0 00:29:47.273 22:46:11 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:29:47.273 22:46:11 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:47.273 22:46:11 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:47.273 22:46:11 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:29:47.273 22:46:11 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:47.273 22:46:11 keyring_file -- keyring/file.sh@59 -- # (( 2 == 2 )) 00:29:47.273 22:46:11 keyring_file -- keyring/file.sh@60 -- # get_refcnt key1 00:29:47.273 22:46:11 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:29:47.273 22:46:11 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:47.273 22:46:11 keyring_file -- keyring/common.sh@10 -- # 
bperf_cmd keyring_get_keys 00:29:47.273 22:46:11 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:47.273 22:46:11 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:29:47.531 22:46:11 keyring_file -- keyring/file.sh@60 -- # (( 1 == 1 )) 00:29:47.531 22:46:11 keyring_file -- keyring/file.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:47.531 Running I/O for 1 seconds... 00:29:48.895 00:29:48.895 Latency(us) 00:29:48.895 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:48.895 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:29:48.896 nvme0n1 : 1.01 13062.40 51.02 0.00 0.00 9768.48 5812.76 17552.25 00:29:48.896 =================================================================================================================== 00:29:48.896 Total : 13062.40 51.02 0.00 0.00 9768.48 5812.76 17552.25 00:29:48.896 0 00:29:48.896 22:46:12 keyring_file -- keyring/file.sh@64 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:29:48.896 22:46:12 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:29:48.896 22:46:12 keyring_file -- keyring/file.sh@65 -- # get_refcnt key0 00:29:48.896 22:46:12 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:29:48.896 22:46:12 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:48.896 22:46:12 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:48.896 22:46:12 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:29:48.896 22:46:12 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:48.896 22:46:12 keyring_file -- keyring/file.sh@65 -- # (( 1 == 1 )) 00:29:48.896 22:46:12 keyring_file -- keyring/file.sh@66 -- # get_refcnt key1 00:29:48.896 22:46:12 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:29:48.896 22:46:12 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:48.896 22:46:12 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:48.896 22:46:12 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:48.896 22:46:12 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:29:49.152 22:46:13 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:29:49.152 22:46:13 keyring_file -- keyring/file.sh@69 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:29:49.152 22:46:13 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:29:49.152 22:46:13 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:29:49.152 22:46:13 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:29:49.152 22:46:13 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:29:49.152 22:46:13 keyring_file -- common/autotest_common.sh@640 -- # type -t 
bperf_cmd 00:29:49.152 22:46:13 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:29:49.152 22:46:13 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:29:49.152 22:46:13 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:29:49.408 [2024-07-15 22:46:13.213484] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:29:49.408 [2024-07-15 22:46:13.214182] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f9820 (107): Transport endpoint is not connected 00:29:49.408 [2024-07-15 22:46:13.215177] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f9820 (9): Bad file descriptor 00:29:49.408 [2024-07-15 22:46:13.216177] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:29:49.408 [2024-07-15 22:46:13.216187] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:29:49.408 [2024-07-15 22:46:13.216194] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:29:49.408 request: 00:29:49.408 { 00:29:49.408 "name": "nvme0", 00:29:49.408 "trtype": "tcp", 00:29:49.408 "traddr": "127.0.0.1", 00:29:49.408 "adrfam": "ipv4", 00:29:49.408 "trsvcid": "4420", 00:29:49.408 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:49.408 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:49.408 "prchk_reftag": false, 00:29:49.408 "prchk_guard": false, 00:29:49.408 "hdgst": false, 00:29:49.408 "ddgst": false, 00:29:49.408 "psk": "key1", 00:29:49.408 "method": "bdev_nvme_attach_controller", 00:29:49.408 "req_id": 1 00:29:49.408 } 00:29:49.408 Got JSON-RPC error response 00:29:49.408 response: 00:29:49.408 { 00:29:49.408 "code": -5, 00:29:49.408 "message": "Input/output error" 00:29:49.408 } 00:29:49.408 22:46:13 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:29:49.408 22:46:13 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:29:49.408 22:46:13 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:29:49.408 22:46:13 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:29:49.408 22:46:13 keyring_file -- keyring/file.sh@71 -- # get_refcnt key0 00:29:49.408 22:46:13 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:29:49.408 22:46:13 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:49.408 22:46:13 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:49.408 22:46:13 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:29:49.408 22:46:13 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:49.664 22:46:13 keyring_file -- keyring/file.sh@71 -- # (( 1 == 1 )) 00:29:49.664 22:46:13 keyring_file -- keyring/file.sh@72 -- # get_refcnt key1 00:29:49.664 22:46:13 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:29:49.664 22:46:13 keyring_file -- keyring/common.sh@12 -- # jq -r 
.refcnt 00:29:49.664 22:46:13 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:49.664 22:46:13 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:29:49.664 22:46:13 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:49.664 22:46:13 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:29:49.664 22:46:13 keyring_file -- keyring/file.sh@75 -- # bperf_cmd keyring_file_remove_key key0 00:29:49.664 22:46:13 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:29:49.921 22:46:13 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key1 00:29:49.921 22:46:13 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:29:50.177 22:46:13 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_get_keys 00:29:50.177 22:46:13 keyring_file -- keyring/file.sh@77 -- # jq length 00:29:50.177 22:46:13 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:50.177 22:46:14 keyring_file -- keyring/file.sh@77 -- # (( 0 == 0 )) 00:29:50.177 22:46:14 keyring_file -- keyring/file.sh@80 -- # chmod 0660 /tmp/tmp.UB0zDUjCOu 00:29:50.177 22:46:14 keyring_file -- keyring/file.sh@81 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.UB0zDUjCOu 00:29:50.177 22:46:14 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:29:50.177 22:46:14 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.UB0zDUjCOu 00:29:50.177 22:46:14 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:29:50.177 22:46:14 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:29:50.177 22:46:14 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:29:50.177 22:46:14 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:29:50.177 22:46:14 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.UB0zDUjCOu 00:29:50.177 22:46:14 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.UB0zDUjCOu 00:29:50.432 [2024-07-15 22:46:14.257208] keyring.c: 34:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.UB0zDUjCOu': 0100660 00:29:50.432 [2024-07-15 22:46:14.257245] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:29:50.432 request: 00:29:50.432 { 00:29:50.432 "name": "key0", 00:29:50.432 "path": "/tmp/tmp.UB0zDUjCOu", 00:29:50.432 "method": "keyring_file_add_key", 00:29:50.432 "req_id": 1 00:29:50.432 } 00:29:50.432 Got JSON-RPC error response 00:29:50.432 response: 00:29:50.432 { 00:29:50.432 "code": -1, 00:29:50.432 "message": "Operation not permitted" 00:29:50.432 } 00:29:50.432 22:46:14 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:29:50.432 22:46:14 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:29:50.432 22:46:14 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:29:50.432 22:46:14 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 
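[Annotation] The permission test that just completed, together with the re-add that follows, pins down the keyring's file-permission rule: mode 0660 is rejected with -1 ('Invalid permissions for key file ... 0100660'), 0600 is accepted, and deleting the backing file afterwards makes the next attach fail with -19. A condensed sketch of that round trip over the bperf RPC socket used here:

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
chmod 0660 /tmp/tmp.UB0zDUjCOu
"$rpc" -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.UB0zDUjCOu  # -1: Operation not permitted
chmod 0600 /tmp/tmp.UB0zDUjCOu
"$rpc" -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.UB0zDUjCOu  # accepted
rm -f /tmp/tmp.UB0zDUjCOu                                                    # key stays registered...
"$rpc" -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp \
    -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 \
    -q nqn.2016-06.io.spdk:host0 --psk key0                                  # ...but the attach fails: -19, No such device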
00:29:50.432 22:46:14 keyring_file -- keyring/file.sh@84 -- # chmod 0600 /tmp/tmp.UB0zDUjCOu 00:29:50.432 22:46:14 keyring_file -- keyring/file.sh@85 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.UB0zDUjCOu 00:29:50.432 22:46:14 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.UB0zDUjCOu 00:29:50.688 22:46:14 keyring_file -- keyring/file.sh@86 -- # rm -f /tmp/tmp.UB0zDUjCOu 00:29:50.688 22:46:14 keyring_file -- keyring/file.sh@88 -- # get_refcnt key0 00:29:50.688 22:46:14 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:29:50.688 22:46:14 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:50.688 22:46:14 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:50.688 22:46:14 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:29:50.688 22:46:14 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:50.688 22:46:14 keyring_file -- keyring/file.sh@88 -- # (( 1 == 1 )) 00:29:50.688 22:46:14 keyring_file -- keyring/file.sh@90 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:29:50.688 22:46:14 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:29:50.688 22:46:14 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:29:50.688 22:46:14 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:29:50.688 22:46:14 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:29:50.688 22:46:14 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:29:50.688 22:46:14 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:29:50.688 22:46:14 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:29:50.688 22:46:14 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:29:50.945 [2024-07-15 22:46:14.790625] keyring.c: 29:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.UB0zDUjCOu': No such file or directory 00:29:50.945 [2024-07-15 22:46:14.790647] nvme_tcp.c:2582:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:29:50.945 [2024-07-15 22:46:14.790669] nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:29:50.945 [2024-07-15 22:46:14.790675] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:29:50.945 [2024-07-15 22:46:14.790681] bdev_nvme.c:6268:bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:29:50.945 request: 00:29:50.945 { 00:29:50.945 "name": "nvme0", 00:29:50.945 "trtype": "tcp", 00:29:50.945 "traddr": "127.0.0.1", 00:29:50.945 "adrfam": "ipv4", 00:29:50.945 "trsvcid": "4420", 00:29:50.945 "subnqn": 
"nqn.2016-06.io.spdk:cnode0", 00:29:50.945 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:50.945 "prchk_reftag": false, 00:29:50.945 "prchk_guard": false, 00:29:50.945 "hdgst": false, 00:29:50.945 "ddgst": false, 00:29:50.945 "psk": "key0", 00:29:50.945 "method": "bdev_nvme_attach_controller", 00:29:50.945 "req_id": 1 00:29:50.945 } 00:29:50.945 Got JSON-RPC error response 00:29:50.945 response: 00:29:50.945 { 00:29:50.945 "code": -19, 00:29:50.945 "message": "No such device" 00:29:50.945 } 00:29:50.945 22:46:14 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:29:50.945 22:46:14 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:29:50.945 22:46:14 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:29:50.945 22:46:14 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:29:50.945 22:46:14 keyring_file -- keyring/file.sh@92 -- # bperf_cmd keyring_file_remove_key key0 00:29:50.945 22:46:14 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:29:51.201 22:46:14 keyring_file -- keyring/file.sh@95 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:29:51.201 22:46:14 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:29:51.201 22:46:14 keyring_file -- keyring/common.sh@17 -- # name=key0 00:29:51.201 22:46:14 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:29:51.201 22:46:14 keyring_file -- keyring/common.sh@17 -- # digest=0 00:29:51.201 22:46:14 keyring_file -- keyring/common.sh@18 -- # mktemp 00:29:51.201 22:46:14 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.h8OhgzcQaC 00:29:51.201 22:46:14 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:29:51.201 22:46:14 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:29:51.201 22:46:14 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:29:51.201 22:46:14 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:29:51.201 22:46:14 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:29:51.201 22:46:14 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:29:51.201 22:46:14 keyring_file -- nvmf/common.sh@705 -- # python - 00:29:51.201 22:46:15 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.h8OhgzcQaC 00:29:51.201 22:46:15 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.h8OhgzcQaC 00:29:51.201 22:46:15 keyring_file -- keyring/file.sh@95 -- # key0path=/tmp/tmp.h8OhgzcQaC 00:29:51.201 22:46:15 keyring_file -- keyring/file.sh@96 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.h8OhgzcQaC 00:29:51.201 22:46:15 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.h8OhgzcQaC 00:29:51.458 22:46:15 keyring_file -- keyring/file.sh@97 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:29:51.458 22:46:15 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:29:51.715 nvme0n1 00:29:51.715 22:46:15 keyring_file -- keyring/file.sh@99 
-- # get_refcnt key0 00:29:51.715 22:46:15 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:29:51.715 22:46:15 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:51.715 22:46:15 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:51.715 22:46:15 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:51.715 22:46:15 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:29:51.715 22:46:15 keyring_file -- keyring/file.sh@99 -- # (( 2 == 2 )) 00:29:51.715 22:46:15 keyring_file -- keyring/file.sh@100 -- # bperf_cmd keyring_file_remove_key key0 00:29:51.715 22:46:15 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:29:51.971 22:46:15 keyring_file -- keyring/file.sh@101 -- # get_key key0 00:29:51.971 22:46:15 keyring_file -- keyring/file.sh@101 -- # jq -r .removed 00:29:51.971 22:46:15 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:51.971 22:46:15 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:29:51.971 22:46:15 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:52.229 22:46:15 keyring_file -- keyring/file.sh@101 -- # [[ true == \t\r\u\e ]] 00:29:52.229 22:46:15 keyring_file -- keyring/file.sh@102 -- # get_refcnt key0 00:29:52.229 22:46:15 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:29:52.229 22:46:15 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:52.229 22:46:15 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:52.229 22:46:15 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:29:52.229 22:46:15 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:52.229 22:46:16 keyring_file -- keyring/file.sh@102 -- # (( 1 == 1 )) 00:29:52.229 22:46:16 keyring_file -- keyring/file.sh@103 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:29:52.229 22:46:16 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:29:52.486 22:46:16 keyring_file -- keyring/file.sh@104 -- # bperf_cmd keyring_get_keys 00:29:52.486 22:46:16 keyring_file -- keyring/file.sh@104 -- # jq length 00:29:52.486 22:46:16 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:52.742 22:46:16 keyring_file -- keyring/file.sh@104 -- # (( 0 == 0 )) 00:29:52.742 22:46:16 keyring_file -- keyring/file.sh@107 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.h8OhgzcQaC 00:29:52.742 22:46:16 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.h8OhgzcQaC 00:29:52.742 22:46:16 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.tWSwKa1klT 00:29:52.742 22:46:16 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.tWSwKa1klT 00:29:52.999 22:46:16 keyring_file -- keyring/file.sh@109 -- # 
bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:29:52.999 22:46:16 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:29:53.256 nvme0n1 00:29:53.256 22:46:17 keyring_file -- keyring/file.sh@112 -- # bperf_cmd save_config 00:29:53.256 22:46:17 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:29:53.514 22:46:17 keyring_file -- keyring/file.sh@112 -- # config='{ 00:29:53.514 "subsystems": [ 00:29:53.514 { 00:29:53.514 "subsystem": "keyring", 00:29:53.514 "config": [ 00:29:53.514 { 00:29:53.514 "method": "keyring_file_add_key", 00:29:53.514 "params": { 00:29:53.514 "name": "key0", 00:29:53.514 "path": "/tmp/tmp.h8OhgzcQaC" 00:29:53.514 } 00:29:53.514 }, 00:29:53.514 { 00:29:53.514 "method": "keyring_file_add_key", 00:29:53.514 "params": { 00:29:53.514 "name": "key1", 00:29:53.514 "path": "/tmp/tmp.tWSwKa1klT" 00:29:53.514 } 00:29:53.514 } 00:29:53.514 ] 00:29:53.514 }, 00:29:53.514 { 00:29:53.514 "subsystem": "iobuf", 00:29:53.514 "config": [ 00:29:53.514 { 00:29:53.514 "method": "iobuf_set_options", 00:29:53.514 "params": { 00:29:53.514 "small_pool_count": 8192, 00:29:53.514 "large_pool_count": 1024, 00:29:53.514 "small_bufsize": 8192, 00:29:53.514 "large_bufsize": 135168 00:29:53.514 } 00:29:53.514 } 00:29:53.514 ] 00:29:53.514 }, 00:29:53.514 { 00:29:53.514 "subsystem": "sock", 00:29:53.514 "config": [ 00:29:53.514 { 00:29:53.514 "method": "sock_set_default_impl", 00:29:53.514 "params": { 00:29:53.514 "impl_name": "posix" 00:29:53.514 } 00:29:53.514 }, 00:29:53.514 { 00:29:53.514 "method": "sock_impl_set_options", 00:29:53.514 "params": { 00:29:53.514 "impl_name": "ssl", 00:29:53.514 "recv_buf_size": 4096, 00:29:53.514 "send_buf_size": 4096, 00:29:53.514 "enable_recv_pipe": true, 00:29:53.514 "enable_quickack": false, 00:29:53.514 "enable_placement_id": 0, 00:29:53.514 "enable_zerocopy_send_server": true, 00:29:53.514 "enable_zerocopy_send_client": false, 00:29:53.514 "zerocopy_threshold": 0, 00:29:53.514 "tls_version": 0, 00:29:53.514 "enable_ktls": false 00:29:53.514 } 00:29:53.514 }, 00:29:53.514 { 00:29:53.514 "method": "sock_impl_set_options", 00:29:53.514 "params": { 00:29:53.514 "impl_name": "posix", 00:29:53.514 "recv_buf_size": 2097152, 00:29:53.514 "send_buf_size": 2097152, 00:29:53.514 "enable_recv_pipe": true, 00:29:53.514 "enable_quickack": false, 00:29:53.514 "enable_placement_id": 0, 00:29:53.514 "enable_zerocopy_send_server": true, 00:29:53.514 "enable_zerocopy_send_client": false, 00:29:53.514 "zerocopy_threshold": 0, 00:29:53.514 "tls_version": 0, 00:29:53.514 "enable_ktls": false 00:29:53.514 } 00:29:53.514 } 00:29:53.514 ] 00:29:53.514 }, 00:29:53.514 { 00:29:53.514 "subsystem": "vmd", 00:29:53.514 "config": [] 00:29:53.514 }, 00:29:53.514 { 00:29:53.514 "subsystem": "accel", 00:29:53.514 "config": [ 00:29:53.514 { 00:29:53.514 "method": "accel_set_options", 00:29:53.514 "params": { 00:29:53.514 "small_cache_size": 128, 00:29:53.514 "large_cache_size": 16, 00:29:53.514 "task_count": 2048, 00:29:53.514 "sequence_count": 2048, 00:29:53.514 "buf_count": 2048 00:29:53.514 } 00:29:53.514 } 00:29:53.514 ] 00:29:53.514 }, 00:29:53.514 { 00:29:53.514 
"subsystem": "bdev", 00:29:53.514 "config": [ 00:29:53.514 { 00:29:53.514 "method": "bdev_set_options", 00:29:53.514 "params": { 00:29:53.514 "bdev_io_pool_size": 65535, 00:29:53.514 "bdev_io_cache_size": 256, 00:29:53.514 "bdev_auto_examine": true, 00:29:53.514 "iobuf_small_cache_size": 128, 00:29:53.514 "iobuf_large_cache_size": 16 00:29:53.514 } 00:29:53.514 }, 00:29:53.514 { 00:29:53.514 "method": "bdev_raid_set_options", 00:29:53.514 "params": { 00:29:53.514 "process_window_size_kb": 1024 00:29:53.514 } 00:29:53.514 }, 00:29:53.514 { 00:29:53.514 "method": "bdev_iscsi_set_options", 00:29:53.514 "params": { 00:29:53.514 "timeout_sec": 30 00:29:53.514 } 00:29:53.514 }, 00:29:53.514 { 00:29:53.514 "method": "bdev_nvme_set_options", 00:29:53.514 "params": { 00:29:53.514 "action_on_timeout": "none", 00:29:53.514 "timeout_us": 0, 00:29:53.514 "timeout_admin_us": 0, 00:29:53.514 "keep_alive_timeout_ms": 10000, 00:29:53.514 "arbitration_burst": 0, 00:29:53.514 "low_priority_weight": 0, 00:29:53.514 "medium_priority_weight": 0, 00:29:53.514 "high_priority_weight": 0, 00:29:53.514 "nvme_adminq_poll_period_us": 10000, 00:29:53.514 "nvme_ioq_poll_period_us": 0, 00:29:53.514 "io_queue_requests": 512, 00:29:53.514 "delay_cmd_submit": true, 00:29:53.514 "transport_retry_count": 4, 00:29:53.514 "bdev_retry_count": 3, 00:29:53.514 "transport_ack_timeout": 0, 00:29:53.514 "ctrlr_loss_timeout_sec": 0, 00:29:53.514 "reconnect_delay_sec": 0, 00:29:53.514 "fast_io_fail_timeout_sec": 0, 00:29:53.514 "disable_auto_failback": false, 00:29:53.514 "generate_uuids": false, 00:29:53.514 "transport_tos": 0, 00:29:53.514 "nvme_error_stat": false, 00:29:53.514 "rdma_srq_size": 0, 00:29:53.514 "io_path_stat": false, 00:29:53.514 "allow_accel_sequence": false, 00:29:53.514 "rdma_max_cq_size": 0, 00:29:53.514 "rdma_cm_event_timeout_ms": 0, 00:29:53.514 "dhchap_digests": [ 00:29:53.514 "sha256", 00:29:53.514 "sha384", 00:29:53.514 "sha512" 00:29:53.514 ], 00:29:53.514 "dhchap_dhgroups": [ 00:29:53.514 "null", 00:29:53.514 "ffdhe2048", 00:29:53.514 "ffdhe3072", 00:29:53.514 "ffdhe4096", 00:29:53.514 "ffdhe6144", 00:29:53.514 "ffdhe8192" 00:29:53.514 ] 00:29:53.514 } 00:29:53.514 }, 00:29:53.514 { 00:29:53.514 "method": "bdev_nvme_attach_controller", 00:29:53.514 "params": { 00:29:53.514 "name": "nvme0", 00:29:53.514 "trtype": "TCP", 00:29:53.514 "adrfam": "IPv4", 00:29:53.514 "traddr": "127.0.0.1", 00:29:53.514 "trsvcid": "4420", 00:29:53.514 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:53.514 "prchk_reftag": false, 00:29:53.514 "prchk_guard": false, 00:29:53.514 "ctrlr_loss_timeout_sec": 0, 00:29:53.514 "reconnect_delay_sec": 0, 00:29:53.514 "fast_io_fail_timeout_sec": 0, 00:29:53.514 "psk": "key0", 00:29:53.514 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:53.514 "hdgst": false, 00:29:53.514 "ddgst": false 00:29:53.514 } 00:29:53.514 }, 00:29:53.514 { 00:29:53.514 "method": "bdev_nvme_set_hotplug", 00:29:53.514 "params": { 00:29:53.514 "period_us": 100000, 00:29:53.514 "enable": false 00:29:53.514 } 00:29:53.514 }, 00:29:53.514 { 00:29:53.514 "method": "bdev_wait_for_examine" 00:29:53.514 } 00:29:53.514 ] 00:29:53.514 }, 00:29:53.514 { 00:29:53.514 "subsystem": "nbd", 00:29:53.514 "config": [] 00:29:53.514 } 00:29:53.514 ] 00:29:53.514 }' 00:29:53.515 22:46:17 keyring_file -- keyring/file.sh@114 -- # killprocess 200669 00:29:53.515 22:46:17 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 200669 ']' 00:29:53.515 22:46:17 keyring_file -- common/autotest_common.sh@952 -- # kill -0 200669 00:29:53.515 22:46:17 
keyring_file -- common/autotest_common.sh@953 -- # uname 00:29:53.515 22:46:17 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:53.515 22:46:17 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 200669 00:29:53.515 22:46:17 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:29:53.515 22:46:17 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:29:53.515 22:46:17 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 200669' 00:29:53.515 killing process with pid 200669 00:29:53.515 22:46:17 keyring_file -- common/autotest_common.sh@967 -- # kill 200669 00:29:53.515 Received shutdown signal, test time was about 1.000000 seconds 00:29:53.515 00:29:53.515 Latency(us) 00:29:53.515 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:53.515 =================================================================================================================== 00:29:53.515 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:53.515 22:46:17 keyring_file -- common/autotest_common.sh@972 -- # wait 200669 00:29:53.773 22:46:17 keyring_file -- keyring/file.sh@117 -- # bperfpid=202204 00:29:53.773 22:46:17 keyring_file -- keyring/file.sh@119 -- # waitforlisten 202204 /var/tmp/bperf.sock 00:29:53.773 22:46:17 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 202204 ']' 00:29:53.773 22:46:17 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:53.773 22:46:17 keyring_file -- keyring/file.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:29:53.773 22:46:17 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:53.773 22:46:17 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:53.773 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
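Both the config saved above and the one echoed into the next bdevperf instance point at the key file /tmp/tmp.h8OhgzcQaC, which was produced earlier by the format_interchange_psk helper (keyring/common.sh@20, shelling out to python -). A minimal sketch of what that python step computes, assuming the interchange layout is base64 over the raw key bytes plus a little-endian CRC32 trailer, wrapped as NVMeTLSkey-1:<digest>:<b64>: (variable names here are illustrative):

key=00112233445566778899aabbccddeeff
python - <<EOF
import base64, zlib
key = b"$key"
crc = zlib.crc32(key).to_bytes(4, "little")  # 4-byte CRC32 trailer, little-endian
print("NVMeTLSkey-1:00:%s:" % base64.b64encode(key + crc).decode())
EOF

If that layout is right, the printout reproduces the NVMeTLSkey-1:00:MDAx...JEiQ: payload that the keyring_linux checks later in this log compare against.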
00:29:53.773 22:46:17 keyring_file -- keyring/file.sh@115 -- # echo '{ 00:29:53.773 "subsystems": [ 00:29:53.773 { 00:29:53.773 "subsystem": "keyring", 00:29:53.773 "config": [ 00:29:53.773 { 00:29:53.773 "method": "keyring_file_add_key", 00:29:53.773 "params": { 00:29:53.773 "name": "key0", 00:29:53.773 "path": "/tmp/tmp.h8OhgzcQaC" 00:29:53.773 } 00:29:53.773 }, 00:29:53.773 { 00:29:53.773 "method": "keyring_file_add_key", 00:29:53.773 "params": { 00:29:53.773 "name": "key1", 00:29:53.773 "path": "/tmp/tmp.tWSwKa1klT" 00:29:53.773 } 00:29:53.773 } 00:29:53.773 ] 00:29:53.773 }, 00:29:53.773 { 00:29:53.773 "subsystem": "iobuf", 00:29:53.773 "config": [ 00:29:53.773 { 00:29:53.773 "method": "iobuf_set_options", 00:29:53.773 "params": { 00:29:53.773 "small_pool_count": 8192, 00:29:53.773 "large_pool_count": 1024, 00:29:53.773 "small_bufsize": 8192, 00:29:53.773 "large_bufsize": 135168 00:29:53.773 } 00:29:53.773 } 00:29:53.773 ] 00:29:53.773 }, 00:29:53.773 { 00:29:53.773 "subsystem": "sock", 00:29:53.773 "config": [ 00:29:53.773 { 00:29:53.773 "method": "sock_set_default_impl", 00:29:53.773 "params": { 00:29:53.773 "impl_name": "posix" 00:29:53.773 } 00:29:53.773 }, 00:29:53.773 { 00:29:53.773 "method": "sock_impl_set_options", 00:29:53.773 "params": { 00:29:53.773 "impl_name": "ssl", 00:29:53.773 "recv_buf_size": 4096, 00:29:53.773 "send_buf_size": 4096, 00:29:53.773 "enable_recv_pipe": true, 00:29:53.773 "enable_quickack": false, 00:29:53.773 "enable_placement_id": 0, 00:29:53.773 "enable_zerocopy_send_server": true, 00:29:53.773 "enable_zerocopy_send_client": false, 00:29:53.773 "zerocopy_threshold": 0, 00:29:53.773 "tls_version": 0, 00:29:53.773 "enable_ktls": false 00:29:53.773 } 00:29:53.773 }, 00:29:53.773 { 00:29:53.773 "method": "sock_impl_set_options", 00:29:53.773 "params": { 00:29:53.773 "impl_name": "posix", 00:29:53.773 "recv_buf_size": 2097152, 00:29:53.773 "send_buf_size": 2097152, 00:29:53.773 "enable_recv_pipe": true, 00:29:53.773 "enable_quickack": false, 00:29:53.773 "enable_placement_id": 0, 00:29:53.773 "enable_zerocopy_send_server": true, 00:29:53.773 "enable_zerocopy_send_client": false, 00:29:53.773 "zerocopy_threshold": 0, 00:29:53.773 "tls_version": 0, 00:29:53.773 "enable_ktls": false 00:29:53.773 } 00:29:53.773 } 00:29:53.773 ] 00:29:53.773 }, 00:29:53.773 { 00:29:53.773 "subsystem": "vmd", 00:29:53.773 "config": [] 00:29:53.773 }, 00:29:53.773 { 00:29:53.773 "subsystem": "accel", 00:29:53.773 "config": [ 00:29:53.773 { 00:29:53.773 "method": "accel_set_options", 00:29:53.773 "params": { 00:29:53.773 "small_cache_size": 128, 00:29:53.773 "large_cache_size": 16, 00:29:53.773 "task_count": 2048, 00:29:53.773 "sequence_count": 2048, 00:29:53.773 "buf_count": 2048 00:29:53.773 } 00:29:53.773 } 00:29:53.773 ] 00:29:53.773 }, 00:29:53.773 { 00:29:53.773 "subsystem": "bdev", 00:29:53.773 "config": [ 00:29:53.773 { 00:29:53.773 "method": "bdev_set_options", 00:29:53.773 "params": { 00:29:53.773 "bdev_io_pool_size": 65535, 00:29:53.773 "bdev_io_cache_size": 256, 00:29:53.773 "bdev_auto_examine": true, 00:29:53.773 "iobuf_small_cache_size": 128, 00:29:53.773 "iobuf_large_cache_size": 16 00:29:53.773 } 00:29:53.773 }, 00:29:53.773 { 00:29:53.773 "method": "bdev_raid_set_options", 00:29:53.773 "params": { 00:29:53.773 "process_window_size_kb": 1024 00:29:53.773 } 00:29:53.773 }, 00:29:53.773 { 00:29:53.773 "method": "bdev_iscsi_set_options", 00:29:53.773 "params": { 00:29:53.773 "timeout_sec": 30 00:29:53.773 } 00:29:53.773 }, 00:29:53.773 { 00:29:53.773 "method": 
"bdev_nvme_set_options", 00:29:53.773 "params": { 00:29:53.773 "action_on_timeout": "none", 00:29:53.773 "timeout_us": 0, 00:29:53.773 "timeout_admin_us": 0, 00:29:53.773 "keep_alive_timeout_ms": 10000, 00:29:53.773 "arbitration_burst": 0, 00:29:53.773 "low_priority_weight": 0, 00:29:53.773 "medium_priority_weight": 0, 00:29:53.773 "high_priority_weight": 0, 00:29:53.773 "nvme_adminq_poll_period_us": 10000, 00:29:53.773 "nvme_ioq_poll_period_us": 0, 00:29:53.773 "io_queue_requests": 512, 00:29:53.773 "delay_cmd_submit": true, 00:29:53.773 "transport_retry_count": 4, 00:29:53.773 "bdev_retry_count": 3, 00:29:53.773 "transport_ack_timeout": 0, 00:29:53.773 "ctrlr_loss_timeout_sec": 0, 00:29:53.773 "reconnect_delay_sec": 0, 00:29:53.773 "fast_io_fail_timeout_sec": 0, 00:29:53.773 "disable_auto_failback": false, 00:29:53.773 "generate_uuids": false, 00:29:53.773 "transport_tos": 0, 00:29:53.773 "nvme_error_stat": false, 00:29:53.773 "rdma_srq_size": 0, 00:29:53.773 "io_path_stat": false, 00:29:53.773 "allow_accel_sequence": false, 00:29:53.773 "rdma_max_cq_size": 0, 00:29:53.773 "rdma_cm_event_timeout_ms": 0, 00:29:53.773 "dhchap_digests": [ 00:29:53.773 "sha256", 00:29:53.773 "sha384", 00:29:53.773 "sha512" 00:29:53.773 ], 00:29:53.773 "dhchap_dhgroups": [ 00:29:53.773 "null", 00:29:53.773 "ffdhe2048", 00:29:53.773 "ffdhe3072", 00:29:53.773 "ffdhe4096", 00:29:53.773 "ffdhe6144", 00:29:53.773 "ffdhe8192" 00:29:53.773 ] 00:29:53.773 } 00:29:53.773 }, 00:29:53.773 { 00:29:53.773 "method": "bdev_nvme_attach_controller", 00:29:53.773 "params": { 00:29:53.773 "name": "nvme0", 00:29:53.773 "trtype": "TCP", 00:29:53.773 "adrfam": "IPv4", 00:29:53.773 "traddr": "127.0.0.1", 00:29:53.773 "trsvcid": "4420", 00:29:53.773 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:53.773 "prchk_reftag": false, 00:29:53.773 "prchk_guard": false, 00:29:53.773 "ctrlr_loss_timeout_sec": 0, 00:29:53.773 "reconnect_delay_sec": 0, 00:29:53.774 "fast_io_fail_timeout_sec": 0, 00:29:53.774 "psk": "key0", 00:29:53.774 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:53.774 "hdgst": false, 00:29:53.774 "ddgst": false 00:29:53.774 } 00:29:53.774 }, 00:29:53.774 { 00:29:53.774 "method": "bdev_nvme_set_hotplug", 00:29:53.774 "params": { 00:29:53.774 "period_us": 100000, 00:29:53.774 "enable": false 00:29:53.774 } 00:29:53.774 }, 00:29:53.774 { 00:29:53.774 "method": "bdev_wait_for_examine" 00:29:53.774 } 00:29:53.774 ] 00:29:53.774 }, 00:29:53.774 { 00:29:53.774 "subsystem": "nbd", 00:29:53.774 "config": [] 00:29:53.774 } 00:29:53.774 ] 00:29:53.774 }' 00:29:53.774 22:46:17 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:53.774 22:46:17 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:29:53.774 [2024-07-15 22:46:17.624978] Starting SPDK v24.09-pre git sha1 f8598a71f / DPDK 24.03.0 initialization... 
00:29:53.774 [2024-07-15 22:46:17.625027] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid202204 ] 00:29:53.774 EAL: No free 2048 kB hugepages reported on node 1 00:29:53.774 [2024-07-15 22:46:17.678695] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:54.032 [2024-07-15 22:46:17.758843] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:54.032 [2024-07-15 22:46:17.917177] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:29:54.596 22:46:18 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:54.596 22:46:18 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:29:54.596 22:46:18 keyring_file -- keyring/file.sh@120 -- # bperf_cmd keyring_get_keys 00:29:54.596 22:46:18 keyring_file -- keyring/file.sh@120 -- # jq length 00:29:54.596 22:46:18 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:54.853 22:46:18 keyring_file -- keyring/file.sh@120 -- # (( 2 == 2 )) 00:29:54.853 22:46:18 keyring_file -- keyring/file.sh@121 -- # get_refcnt key0 00:29:54.853 22:46:18 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:29:54.853 22:46:18 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:54.853 22:46:18 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:54.853 22:46:18 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:54.853 22:46:18 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:29:54.853 22:46:18 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:29:54.853 22:46:18 keyring_file -- keyring/file.sh@122 -- # get_refcnt key1 00:29:54.853 22:46:18 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:29:54.853 22:46:18 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:54.853 22:46:18 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:54.853 22:46:18 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:29:54.853 22:46:18 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:55.111 22:46:18 keyring_file -- keyring/file.sh@122 -- # (( 1 == 1 )) 00:29:55.111 22:46:18 keyring_file -- keyring/file.sh@123 -- # bperf_cmd bdev_nvme_get_controllers 00:29:55.111 22:46:18 keyring_file -- keyring/file.sh@123 -- # jq -r '.[].name' 00:29:55.111 22:46:18 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:29:55.369 22:46:19 keyring_file -- keyring/file.sh@123 -- # [[ nvme0 == nvme0 ]] 00:29:55.369 22:46:19 keyring_file -- keyring/file.sh@1 -- # cleanup 00:29:55.369 22:46:19 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.h8OhgzcQaC /tmp/tmp.tWSwKa1klT 00:29:55.369 22:46:19 keyring_file -- keyring/file.sh@20 -- # killprocess 202204 00:29:55.369 22:46:19 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 202204 ']' 00:29:55.369 22:46:19 keyring_file -- common/autotest_common.sh@952 -- # kill -0 202204 00:29:55.369 22:46:19 keyring_file -- 
common/autotest_common.sh@953 -- # uname 00:29:55.369 22:46:19 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:55.369 22:46:19 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 202204 00:29:55.369 22:46:19 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:29:55.369 22:46:19 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:29:55.369 22:46:19 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 202204' 00:29:55.369 killing process with pid 202204 00:29:55.369 22:46:19 keyring_file -- common/autotest_common.sh@967 -- # kill 202204 00:29:55.369 Received shutdown signal, test time was about 1.000000 seconds 00:29:55.369 00:29:55.369 Latency(us) 00:29:55.369 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:55.369 =================================================================================================================== 00:29:55.369 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:29:55.369 22:46:19 keyring_file -- common/autotest_common.sh@972 -- # wait 202204 00:29:55.627 22:46:19 keyring_file -- keyring/file.sh@21 -- # killprocess 200654 00:29:55.627 22:46:19 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 200654 ']' 00:29:55.627 22:46:19 keyring_file -- common/autotest_common.sh@952 -- # kill -0 200654 00:29:55.627 22:46:19 keyring_file -- common/autotest_common.sh@953 -- # uname 00:29:55.627 22:46:19 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:55.627 22:46:19 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 200654 00:29:55.627 22:46:19 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:29:55.627 22:46:19 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:29:55.627 22:46:19 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 200654' 00:29:55.627 killing process with pid 200654 00:29:55.627 22:46:19 keyring_file -- common/autotest_common.sh@967 -- # kill 200654 00:29:55.627 [2024-07-15 22:46:19.420739] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:29:55.627 22:46:19 keyring_file -- common/autotest_common.sh@972 -- # wait 200654 00:29:55.884 00:29:55.884 real 0m12.032s 00:29:55.884 user 0m28.387s 00:29:55.884 sys 0m2.790s 00:29:55.884 22:46:19 keyring_file -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:55.884 22:46:19 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:29:55.884 ************************************ 00:29:55.884 END TEST keyring_file 00:29:55.884 ************************************ 00:29:55.884 22:46:19 -- common/autotest_common.sh@1142 -- # return 0 00:29:55.884 22:46:19 -- spdk/autotest.sh@296 -- # [[ y == y ]] 00:29:55.884 22:46:19 -- spdk/autotest.sh@297 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:29:55.884 22:46:19 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:29:55.884 22:46:19 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:55.884 22:46:19 -- common/autotest_common.sh@10 -- # set +x 00:29:55.884 ************************************ 00:29:55.884 START TEST keyring_linux 00:29:55.884 ************************************ 00:29:55.885 22:46:19 keyring_linux -- common/autotest_common.sh@1123 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:29:56.143 * Looking for test storage... 00:29:56.143 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:29:56.143 22:46:19 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:29:56.143 22:46:19 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:56.143 22:46:19 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:29:56.143 22:46:19 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:56.143 22:46:19 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:56.143 22:46:19 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:56.143 22:46:19 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:56.143 22:46:19 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:56.143 22:46:19 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:56.143 22:46:19 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:56.143 22:46:19 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:56.143 22:46:19 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:56.143 22:46:19 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:56.143 22:46:19 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:29:56.143 22:46:19 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:29:56.143 22:46:19 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:56.143 22:46:19 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:56.143 22:46:19 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:56.143 22:46:19 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:56.143 22:46:19 keyring_linux -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:56.143 22:46:19 keyring_linux -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:56.143 22:46:19 keyring_linux -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:56.143 22:46:19 keyring_linux -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:56.143 22:46:19 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:56.143 22:46:19 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:56.143 22:46:19 keyring_linux -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:56.143 22:46:19 keyring_linux -- paths/export.sh@5 -- # export PATH 00:29:56.143 22:46:19 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:56.143 22:46:19 keyring_linux -- nvmf/common.sh@47 -- # : 0 00:29:56.143 22:46:19 keyring_linux -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:56.143 22:46:19 keyring_linux -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:56.144 22:46:19 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:56.144 22:46:19 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:56.144 22:46:19 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:56.144 22:46:19 keyring_linux -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:56.144 22:46:19 keyring_linux -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:56.144 22:46:19 keyring_linux -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:56.144 22:46:19 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:29:56.144 22:46:19 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:29:56.144 22:46:19 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:29:56.144 22:46:19 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:29:56.144 22:46:19 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:29:56.144 22:46:19 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:29:56.144 22:46:19 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:29:56.144 22:46:19 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:29:56.144 22:46:19 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:29:56.144 22:46:19 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:29:56.144 22:46:19 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:29:56.144 22:46:19 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:29:56.144 22:46:19 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:29:56.144 22:46:19 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:29:56.144 22:46:19 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:29:56.144 22:46:19 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:29:56.144 22:46:19 keyring_linux -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:29:56.144 22:46:19 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:29:56.144 22:46:19 keyring_linux -- nvmf/common.sh@705 -- # python - 00:29:56.144 22:46:19 keyring_linux -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:29:56.144 22:46:19 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:29:56.144 /tmp/:spdk-test:key0 00:29:56.144 22:46:19 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:29:56.144 22:46:19 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:29:56.144 22:46:19 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:29:56.144 22:46:19 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:29:56.144 22:46:19 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:29:56.144 22:46:19 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:29:56.144 22:46:19 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:29:56.144 22:46:19 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:29:56.144 22:46:19 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:29:56.144 22:46:19 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:29:56.144 22:46:19 keyring_linux -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:29:56.144 22:46:19 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:29:56.144 22:46:19 keyring_linux -- nvmf/common.sh@705 -- # python - 00:29:56.144 22:46:19 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:29:56.144 22:46:19 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:29:56.144 /tmp/:spdk-test:key1 00:29:56.144 22:46:19 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=202731 00:29:56.144 22:46:19 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:29:56.144 22:46:19 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 202731 00:29:56.144 22:46:19 keyring_linux -- common/autotest_common.sh@829 -- # '[' -z 202731 ']' 00:29:56.144 22:46:19 keyring_linux -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:56.144 22:46:19 keyring_linux -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:56.144 22:46:19 keyring_linux -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:56.144 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:56.144 22:46:19 keyring_linux -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:56.144 22:46:19 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:29:56.144 [2024-07-15 22:46:20.013349] Starting SPDK v24.09-pre git sha1 f8598a71f / DPDK 24.03.0 initialization... 
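Unlike keyring_file, which serves keys from files, the keyring_linux suite below parks the PSKs in the kernel session keyring. Stripped of the xtrace noise, the keyctl traffic that follows amounts to the sketch below; serial numbers such as 890866104 are kernel-assigned and vary per run, and the payload string is the one generated above:

psk="NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:"
sn=$(keyctl add user :spdk-test:key0 "$psk" @s)  # add to the session keyring; prints the new serial
keyctl search @s user :spdk-test:key0            # look the serial back up by description (linux.sh@16)
keyctl print "$sn"                               # dump the payload, compared at linux.sh@27
keyctl unlink "$sn"                              # removal, as cleanup does at linux.sh@34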
00:29:56.144 [2024-07-15 22:46:20.013397] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid202731 ] 00:29:56.144 EAL: No free 2048 kB hugepages reported on node 1 00:29:56.144 [2024-07-15 22:46:20.068946] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:56.401 [2024-07-15 22:46:20.147370] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:56.966 22:46:20 keyring_linux -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:56.966 22:46:20 keyring_linux -- common/autotest_common.sh@862 -- # return 0 00:29:56.966 22:46:20 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:29:56.966 22:46:20 keyring_linux -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:56.966 22:46:20 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:29:56.966 [2024-07-15 22:46:20.836115] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:56.966 null0 00:29:56.966 [2024-07-15 22:46:20.868173] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:29:56.966 [2024-07-15 22:46:20.868515] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:29:56.966 22:46:20 keyring_linux -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:56.966 22:46:20 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:29:56.966 890866104 00:29:56.966 22:46:20 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:29:56.966 350381416 00:29:56.966 22:46:20 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=202959 00:29:56.966 22:46:20 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 202959 /var/tmp/bperf.sock 00:29:56.966 22:46:20 keyring_linux -- keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:29:56.966 22:46:20 keyring_linux -- common/autotest_common.sh@829 -- # '[' -z 202959 ']' 00:29:56.966 22:46:20 keyring_linux -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:56.966 22:46:20 keyring_linux -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:56.966 22:46:20 keyring_linux -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:56.966 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:56.966 22:46:20 keyring_linux -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:56.966 22:46:20 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:29:56.966 [2024-07-15 22:46:20.934221] Starting SPDK v24.09-pre git sha1 f8598a71f / DPDK 24.03.0 initialization... 
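Because this bdevperf instance was launched with --wait-for-rpc, the test must switch the Linux keyring backend on and complete framework init over RPC before it can attach. In plain form, the sequence driven below (the $rpc shorthand is illustrative; socket, NQNs and flags are taken from this run):

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$rpc -s /var/tmp/bperf.sock keyring_linux_set_options --enable
$rpc -s /var/tmp/bperf.sock framework_start_init
$rpc -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp \
    -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 \
    -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0

Note that --psk here names the keyring entry :spdk-test:key0 rather than a file path, which is the point of this variant.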
00:29:56.966 [2024-07-15 22:46:20.934294] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid202959 ] 00:29:57.224 EAL: No free 2048 kB hugepages reported on node 1 00:29:57.224 [2024-07-15 22:46:20.987851] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:57.224 [2024-07-15 22:46:21.066797] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:57.789 22:46:21 keyring_linux -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:57.789 22:46:21 keyring_linux -- common/autotest_common.sh@862 -- # return 0 00:29:57.789 22:46:21 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:29:57.789 22:46:21 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:29:58.046 22:46:21 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:29:58.046 22:46:21 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:29:58.304 22:46:22 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:29:58.304 22:46:22 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:29:58.561 [2024-07-15 22:46:22.287599] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:29:58.561 nvme0n1 00:29:58.561 22:46:22 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:29:58.561 22:46:22 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:29:58.561 22:46:22 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:29:58.562 22:46:22 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:29:58.562 22:46:22 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:29:58.562 22:46:22 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:58.835 22:46:22 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:29:58.835 22:46:22 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:29:58.835 22:46:22 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:29:58.835 22:46:22 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:29:58.835 22:46:22 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:58.835 22:46:22 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:58.835 22:46:22 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:29:58.835 22:46:22 keyring_linux -- keyring/linux.sh@25 -- # sn=890866104 00:29:58.835 22:46:22 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:29:58.835 22:46:22 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user 
:spdk-test:key0 00:29:58.835 22:46:22 keyring_linux -- keyring/linux.sh@26 -- # [[ 890866104 == \8\9\0\8\6\6\1\0\4 ]] 00:29:58.835 22:46:22 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 890866104 00:29:58.835 22:46:22 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:29:58.835 22:46:22 keyring_linux -- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:59.105 Running I/O for 1 seconds... 00:30:00.055 00:30:00.055 Latency(us) 00:30:00.055 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:00.055 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:30:00.055 nvme0n1 : 1.01 12747.71 49.80 0.00 0.00 10000.35 4701.50 15272.74 00:30:00.055 =================================================================================================================== 00:30:00.055 Total : 12747.71 49.80 0.00 0.00 10000.35 4701.50 15272.74 00:30:00.055 0 00:30:00.055 22:46:23 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:30:00.055 22:46:23 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:30:00.312 22:46:24 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:30:00.312 22:46:24 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:30:00.312 22:46:24 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:30:00.312 22:46:24 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:30:00.312 22:46:24 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:00.312 22:46:24 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:30:00.312 22:46:24 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:30:00.312 22:46:24 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:30:00.312 22:46:24 keyring_linux -- keyring/linux.sh@23 -- # return 00:30:00.312 22:46:24 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:30:00.312 22:46:24 keyring_linux -- common/autotest_common.sh@648 -- # local es=0 00:30:00.312 22:46:24 keyring_linux -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:30:00.312 22:46:24 keyring_linux -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:30:00.312 22:46:24 keyring_linux -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:00.312 22:46:24 keyring_linux -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:30:00.312 22:46:24 keyring_linux -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:00.312 22:46:24 keyring_linux -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:30:00.312 22:46:24 keyring_linux -- 
keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:30:00.570 [2024-07-15 22:46:24.370257] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:30:00.570 [2024-07-15 22:46:24.370977] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd9bfd0 (107): Transport endpoint is not connected 00:30:00.570 [2024-07-15 22:46:24.371971] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd9bfd0 (9): Bad file descriptor 00:30:00.570 [2024-07-15 22:46:24.372972] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:30:00.570 [2024-07-15 22:46:24.372981] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:30:00.570 [2024-07-15 22:46:24.372989] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:30:00.570 request: 00:30:00.570 { 00:30:00.570 "name": "nvme0", 00:30:00.570 "trtype": "tcp", 00:30:00.570 "traddr": "127.0.0.1", 00:30:00.570 "adrfam": "ipv4", 00:30:00.570 "trsvcid": "4420", 00:30:00.570 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:00.570 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:00.570 "prchk_reftag": false, 00:30:00.570 "prchk_guard": false, 00:30:00.570 "hdgst": false, 00:30:00.570 "ddgst": false, 00:30:00.570 "psk": ":spdk-test:key1", 00:30:00.570 "method": "bdev_nvme_attach_controller", 00:30:00.570 "req_id": 1 00:30:00.570 } 00:30:00.570 Got JSON-RPC error response 00:30:00.570 response: 00:30:00.570 { 00:30:00.570 "code": -5, 00:30:00.570 "message": "Input/output error" 00:30:00.570 } 00:30:00.570 22:46:24 keyring_linux -- common/autotest_common.sh@651 -- # es=1 00:30:00.570 22:46:24 keyring_linux -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:30:00.570 22:46:24 keyring_linux -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:30:00.570 22:46:24 keyring_linux -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:30:00.570 22:46:24 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:30:00.570 22:46:24 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:30:00.570 22:46:24 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:30:00.570 22:46:24 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:30:00.570 22:46:24 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:30:00.570 22:46:24 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:30:00.570 22:46:24 keyring_linux -- keyring/linux.sh@33 -- # sn=890866104 00:30:00.570 22:46:24 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 890866104 00:30:00.570 1 links removed 00:30:00.570 22:46:24 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:30:00.570 22:46:24 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:30:00.570 22:46:24 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:30:00.570 22:46:24 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:30:00.570 22:46:24 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:30:00.570 22:46:24 keyring_linux -- keyring/linux.sh@33 -- # sn=350381416 00:30:00.570 22:46:24 
keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 350381416 00:30:00.570 1 links removed 00:30:00.570 22:46:24 keyring_linux -- keyring/linux.sh@41 -- # killprocess 202959 00:30:00.570 22:46:24 keyring_linux -- common/autotest_common.sh@948 -- # '[' -z 202959 ']' 00:30:00.570 22:46:24 keyring_linux -- common/autotest_common.sh@952 -- # kill -0 202959 00:30:00.570 22:46:24 keyring_linux -- common/autotest_common.sh@953 -- # uname 00:30:00.570 22:46:24 keyring_linux -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:30:00.570 22:46:24 keyring_linux -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 202959 00:30:00.570 22:46:24 keyring_linux -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:30:00.570 22:46:24 keyring_linux -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:30:00.570 22:46:24 keyring_linux -- common/autotest_common.sh@966 -- # echo 'killing process with pid 202959' 00:30:00.570 killing process with pid 202959 00:30:00.570 22:46:24 keyring_linux -- common/autotest_common.sh@967 -- # kill 202959 00:30:00.570 Received shutdown signal, test time was about 1.000000 seconds 00:30:00.570 00:30:00.570 Latency(us) 00:30:00.570 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:00.570 =================================================================================================================== 00:30:00.570 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:00.570 22:46:24 keyring_linux -- common/autotest_common.sh@972 -- # wait 202959 00:30:00.828 22:46:24 keyring_linux -- keyring/linux.sh@42 -- # killprocess 202731 00:30:00.828 22:46:24 keyring_linux -- common/autotest_common.sh@948 -- # '[' -z 202731 ']' 00:30:00.828 22:46:24 keyring_linux -- common/autotest_common.sh@952 -- # kill -0 202731 00:30:00.828 22:46:24 keyring_linux -- common/autotest_common.sh@953 -- # uname 00:30:00.828 22:46:24 keyring_linux -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:30:00.828 22:46:24 keyring_linux -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 202731 00:30:00.828 22:46:24 keyring_linux -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:30:00.828 22:46:24 keyring_linux -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:30:00.828 22:46:24 keyring_linux -- common/autotest_common.sh@966 -- # echo 'killing process with pid 202731' 00:30:00.828 killing process with pid 202731 00:30:00.828 22:46:24 keyring_linux -- common/autotest_common.sh@967 -- # kill 202731 00:30:00.828 22:46:24 keyring_linux -- common/autotest_common.sh@972 -- # wait 202731 00:30:01.085 00:30:01.085 real 0m5.187s 00:30:01.085 user 0m9.116s 00:30:01.085 sys 0m1.419s 00:30:01.085 22:46:24 keyring_linux -- common/autotest_common.sh@1124 -- # xtrace_disable 00:30:01.085 22:46:24 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:30:01.085 ************************************ 00:30:01.085 END TEST keyring_linux 00:30:01.085 ************************************ 00:30:01.085 22:46:25 -- common/autotest_common.sh@1142 -- # return 0 00:30:01.085 22:46:25 -- spdk/autotest.sh@308 -- # '[' 0 -eq 1 ']' 00:30:01.085 22:46:25 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:30:01.085 22:46:25 -- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']' 00:30:01.085 22:46:25 -- spdk/autotest.sh@321 -- # '[' 0 -eq 1 ']' 00:30:01.085 22:46:25 -- spdk/autotest.sh@330 -- # '[' 0 -eq 1 ']' 00:30:01.085 22:46:25 -- spdk/autotest.sh@335 -- # '[' 0 -eq 1 ']' 00:30:01.085 22:46:25 -- spdk/autotest.sh@339 -- # '[' 0 -eq 
1 ']'
00:30:01.085 22:46:25 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']'
00:30:01.085 22:46:25 -- spdk/autotest.sh@347 -- # '[' 0 -eq 1 ']'
00:30:01.085 22:46:25 -- spdk/autotest.sh@352 -- # '[' 0 -eq 1 ']'
00:30:01.085 22:46:25 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']'
00:30:01.085 22:46:25 -- spdk/autotest.sh@363 -- # [[ 0 -eq 1 ]]
00:30:01.085 22:46:25 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]]
00:30:01.085 22:46:25 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]]
00:30:01.085 22:46:25 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]]
00:30:01.085 22:46:25 -- spdk/autotest.sh@380 -- # trap - SIGINT SIGTERM EXIT
00:30:01.085 22:46:25 -- spdk/autotest.sh@382 -- # timing_enter post_cleanup
00:30:01.085 22:46:25 -- common/autotest_common.sh@722 -- # xtrace_disable
00:30:01.085 22:46:25 -- common/autotest_common.sh@10 -- # set +x
00:30:01.085 22:46:25 -- spdk/autotest.sh@383 -- # autotest_cleanup
00:30:01.085 22:46:25 -- common/autotest_common.sh@1392 -- # local autotest_es=0
00:30:01.085 22:46:25 -- common/autotest_common.sh@1393 -- # xtrace_disable
00:30:01.085 22:46:25 -- common/autotest_common.sh@10 -- # set +x
00:30:06.345 INFO: APP EXITING
00:30:06.345 INFO: killing all VMs
00:30:06.345 INFO: killing vhost app
00:30:06.345 INFO: EXIT DONE
00:30:07.714 0000:5e:00.0 (8086 0a54): Already using the nvme driver
00:30:07.714 0000:00:04.7 (8086 2021): Already using the ioatdma driver
00:30:07.714 0000:00:04.6 (8086 2021): Already using the ioatdma driver
00:30:07.714 0000:00:04.5 (8086 2021): Already using the ioatdma driver
00:30:07.714 0000:00:04.4 (8086 2021): Already using the ioatdma driver
00:30:07.714 0000:00:04.3 (8086 2021): Already using the ioatdma driver
00:30:07.714 0000:00:04.2 (8086 2021): Already using the ioatdma driver
00:30:07.714 0000:00:04.1 (8086 2021): Already using the ioatdma driver
00:30:07.714 0000:00:04.0 (8086 2021): Already using the ioatdma driver
00:30:07.714 0000:80:04.7 (8086 2021): Already using the ioatdma driver
00:30:07.971 0000:80:04.6 (8086 2021): Already using the ioatdma driver
00:30:07.971 0000:80:04.5 (8086 2021): Already using the ioatdma driver
00:30:07.971 0000:80:04.4 (8086 2021): Already using the ioatdma driver
00:30:07.971 0000:80:04.3 (8086 2021): Already using the ioatdma driver
00:30:07.971 0000:80:04.2 (8086 2021): Already using the ioatdma driver
00:30:07.971 0000:80:04.1 (8086 2021): Already using the ioatdma driver
00:30:07.971 0000:80:04.0 (8086 2021): Already using the ioatdma driver
00:30:10.492 Cleaning
00:30:10.492 Removing: /var/run/dpdk/spdk0/config
00:30:10.492 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0
00:30:10.492 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1
00:30:10.492 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2
00:30:10.492 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3
00:30:10.492 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0
00:30:10.492 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1
00:30:10.492 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2
00:30:10.492 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3
00:30:10.492 Removing: /var/run/dpdk/spdk0/fbarray_memzone
00:30:10.492 Removing: /var/run/dpdk/spdk0/hugepage_info
00:30:10.492 Removing: /var/run/dpdk/spdk1/config
00:30:10.492 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0
00:30:10.492 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1
00:30:10.492 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2
00:30:10.492 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3
00:30:10.492 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0
00:30:10.492 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1
00:30:10.492 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2
00:30:10.492 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3
00:30:10.492 Removing: /var/run/dpdk/spdk1/fbarray_memzone
00:30:10.492 Removing: /var/run/dpdk/spdk1/hugepage_info
00:30:10.492 Removing: /var/run/dpdk/spdk1/mp_socket
00:30:10.492 Removing: /var/run/dpdk/spdk2/config
00:30:10.492 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0
00:30:10.492 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1
00:30:10.492 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2
00:30:10.492 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3
00:30:10.492 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0
00:30:10.492 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1
00:30:10.492 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2
00:30:10.750 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3
00:30:10.750 Removing: /var/run/dpdk/spdk2/fbarray_memzone
00:30:10.750 Removing: /var/run/dpdk/spdk2/hugepage_info
00:30:10.750 Removing: /var/run/dpdk/spdk3/config
00:30:10.750 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0
00:30:10.750 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1
00:30:10.750 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2
00:30:10.750 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3
00:30:10.750 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0
00:30:10.750 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1
00:30:10.750 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2
00:30:10.750 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3
00:30:10.750 Removing: /var/run/dpdk/spdk3/fbarray_memzone
00:30:10.750 Removing: /var/run/dpdk/spdk3/hugepage_info
00:30:10.750 Removing: /var/run/dpdk/spdk4/config
00:30:10.750 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0
00:30:10.750 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1
00:30:10.750 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2
00:30:10.750 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3
00:30:10.750 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0
00:30:10.750 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1
00:30:10.750 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2
00:30:10.750 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3
00:30:10.750 Removing: /var/run/dpdk/spdk4/fbarray_memzone
00:30:10.750 Removing: /var/run/dpdk/spdk4/hugepage_info
00:30:10.750 Removing: /dev/shm/bdev_svc_trace.1
00:30:10.750 Removing: /dev/shm/nvmf_trace.0
00:30:10.750 Removing: /dev/shm/spdk_tgt_trace.pid4013134
00:30:10.750 Removing: /var/run/dpdk/spdk0
00:30:10.750 Removing: /var/run/dpdk/spdk1
00:30:10.750 Removing: /var/run/dpdk/spdk2
00:30:10.750 Removing: /var/run/dpdk/spdk3
00:30:10.750 Removing: /var/run/dpdk/spdk4
00:30:10.750 Removing: /var/run/dpdk/spdk_pid100354
00:30:10.750 Removing: /var/run/dpdk/spdk_pid105742
00:30:10.750 Removing: /var/run/dpdk/spdk_pid111078
00:30:10.750 Removing: /var/run/dpdk/spdk_pid119638
00:30:10.750 Removing: /var/run/dpdk/spdk_pid126622
00:30:10.750 Removing: /var/run/dpdk/spdk_pid126626
00:30:10.750 Removing: /var/run/dpdk/spdk_pid145186
00:30:10.750 Removing: /var/run/dpdk/spdk_pid145793
00:30:10.750 Removing: /var/run/dpdk/spdk_pid146364
00:30:10.750 Removing: /var/run/dpdk/spdk_pid147063
00:30:10.750 Removing: /var/run/dpdk/spdk_pid148027
00:30:10.750 Removing: /var/run/dpdk/spdk_pid148533
00:30:10.750 Removing: /var/run/dpdk/spdk_pid149212
00:30:10.750 Removing: /var/run/dpdk/spdk_pid149902
00:30:10.750 Removing: /var/run/dpdk/spdk_pid154155
00:30:10.750 Removing: /var/run/dpdk/spdk_pid154387
00:30:10.750 Removing: /var/run/dpdk/spdk_pid160244
00:30:10.750 Removing: /var/run/dpdk/spdk_pid160509
00:30:10.750 Removing: /var/run/dpdk/spdk_pid162730
00:30:10.750 Removing: /var/run/dpdk/spdk_pid170232
00:30:10.750 Removing: /var/run/dpdk/spdk_pid170237
00:30:10.750 Removing: /var/run/dpdk/spdk_pid175418
00:30:10.750 Removing: /var/run/dpdk/spdk_pid177232
00:30:10.750 Removing: /var/run/dpdk/spdk_pid179200
00:30:10.750 Removing: /var/run/dpdk/spdk_pid180447
00:30:10.750 Removing: /var/run/dpdk/spdk_pid182429
00:30:10.750 Removing: /var/run/dpdk/spdk_pid183613
00:30:10.750 Removing: /var/run/dpdk/spdk_pid192724
00:30:10.750 Removing: /var/run/dpdk/spdk_pid193186
00:30:10.750 Removing: /var/run/dpdk/spdk_pid193674
00:30:10.750 Removing: /var/run/dpdk/spdk_pid195917
00:30:11.006 Removing: /var/run/dpdk/spdk_pid196383
00:30:11.006 Removing: /var/run/dpdk/spdk_pid196851
00:30:11.006 Removing: /var/run/dpdk/spdk_pid200654
00:30:11.006 Removing: /var/run/dpdk/spdk_pid200669
00:30:11.006 Removing: /var/run/dpdk/spdk_pid202204
00:30:11.006 Removing: /var/run/dpdk/spdk_pid202731
00:30:11.006 Removing: /var/run/dpdk/spdk_pid202959
00:30:11.006 Removing: /var/run/dpdk/spdk_pid23424
00:30:11.006 Removing: /var/run/dpdk/spdk_pid27920
00:30:11.006 Removing: /var/run/dpdk/spdk_pid29520
00:30:11.006 Removing: /var/run/dpdk/spdk_pid31362
00:30:11.006 Removing: /var/run/dpdk/spdk_pid31600
00:30:11.006 Removing: /var/run/dpdk/spdk_pid31835
00:30:11.006 Removing: /var/run/dpdk/spdk_pid32048
00:30:11.006 Removing: /var/run/dpdk/spdk_pid32587
00:30:11.006 Removing: /var/run/dpdk/spdk_pid34414
00:30:11.006 Removing: /var/run/dpdk/spdk_pid35384
00:30:11.006 Removing: /var/run/dpdk/spdk_pid35685
00:30:11.006 Removing: /var/run/dpdk/spdk_pid38014
00:30:11.006 Removing: /var/run/dpdk/spdk_pid38543
00:30:11.006 Removing: /var/run/dpdk/spdk_pid39248
00:30:11.006 Removing: /var/run/dpdk/spdk_pid4010846
00:30:11.006 Removing: /var/run/dpdk/spdk_pid4012051
00:30:11.006 Removing: /var/run/dpdk/spdk_pid4013134
00:30:11.006 Removing: /var/run/dpdk/spdk_pid4013768
00:30:11.007 Removing: /var/run/dpdk/spdk_pid4014720
00:30:11.007 Removing: /var/run/dpdk/spdk_pid4014957
00:30:11.007 Removing: /var/run/dpdk/spdk_pid4015927
00:30:11.007 Removing: /var/run/dpdk/spdk_pid4015961
00:30:11.007 Removing: /var/run/dpdk/spdk_pid4016304
00:30:11.007 Removing: /var/run/dpdk/spdk_pid4018076
00:30:11.007 Removing: /var/run/dpdk/spdk_pid4019576
00:30:11.007 Removing: /var/run/dpdk/spdk_pid4019860
00:30:11.007 Removing: /var/run/dpdk/spdk_pid4020158
00:30:11.007 Removing: /var/run/dpdk/spdk_pid4020583
00:30:11.007 Removing: /var/run/dpdk/spdk_pid4020950
00:30:11.007 Removing: /var/run/dpdk/spdk_pid4021170
00:30:11.007 Removing: /var/run/dpdk/spdk_pid4021366
00:30:11.007 Removing: /var/run/dpdk/spdk_pid4021634
00:30:11.007 Removing: /var/run/dpdk/spdk_pid4022475
00:30:11.007 Removing: /var/run/dpdk/spdk_pid4025361
00:30:11.007 Removing: /var/run/dpdk/spdk_pid4025630
00:30:11.007 Removing: /var/run/dpdk/spdk_pid4025900
00:30:11.007 Removing: /var/run/dpdk/spdk_pid4026001
00:30:11.007 Removing: /var/run/dpdk/spdk_pid4026489
00:30:11.007 Removing: /var/run/dpdk/spdk_pid4026658
00:30:11.007 Removing: /var/run/dpdk/spdk_pid4027001
00:30:11.007 Removing: /var/run/dpdk/spdk_pid4027229
00:30:11.007 Removing: /var/run/dpdk/spdk_pid4027492
00:30:11.007 Removing: /var/run/dpdk/spdk_pid4027704
00:30:11.007 Removing: /var/run/dpdk/spdk_pid4027782
00:30:11.007 Removing: /var/run/dpdk/spdk_pid4027992
00:30:11.007 Removing: /var/run/dpdk/spdk_pid4028534
00:30:11.007 Removing: /var/run/dpdk/spdk_pid4028718
00:30:11.007 Removing: /var/run/dpdk/spdk_pid4029004
00:30:11.007 Removing: /var/run/dpdk/spdk_pid4029298
00:30:11.007 Removing: /var/run/dpdk/spdk_pid4029377
00:30:11.007 Removing: /var/run/dpdk/spdk_pid4029450
00:30:11.007 Removing: /var/run/dpdk/spdk_pid4029697
00:30:11.007 Removing: /var/run/dpdk/spdk_pid4029942
00:30:11.007 Removing: /var/run/dpdk/spdk_pid4030205
00:30:11.007 Removing: /var/run/dpdk/spdk_pid4030465
00:30:11.007 Removing: /var/run/dpdk/spdk_pid4030729
00:30:11.007 Removing: /var/run/dpdk/spdk_pid4030989
00:30:11.007 Removing: /var/run/dpdk/spdk_pid4031248
00:30:11.007 Removing: /var/run/dpdk/spdk_pid4031510
00:30:11.007 Removing: /var/run/dpdk/spdk_pid4031764
00:30:11.007 Removing: /var/run/dpdk/spdk_pid4032023
00:30:11.007 Removing: /var/run/dpdk/spdk_pid4032287
00:30:11.007 Removing: /var/run/dpdk/spdk_pid4032560
00:30:11.007 Removing: /var/run/dpdk/spdk_pid4032845
00:30:11.262 Removing: /var/run/dpdk/spdk_pid4033110
00:30:11.262 Removing: /var/run/dpdk/spdk_pid4033375
00:30:11.262 Removing: /var/run/dpdk/spdk_pid4033646
00:30:11.262 Removing: /var/run/dpdk/spdk_pid4033910
00:30:11.262 Removing: /var/run/dpdk/spdk_pid4034165
00:30:11.262 Removing: /var/run/dpdk/spdk_pid4034414
00:30:11.262 Removing: /var/run/dpdk/spdk_pid4034666
00:30:11.262 Removing: /var/run/dpdk/spdk_pid4034735
00:30:11.262 Removing: /var/run/dpdk/spdk_pid4035041
00:30:11.262 Removing: /var/run/dpdk/spdk_pid4038804
00:30:11.262 Removing: /var/run/dpdk/spdk_pid4082108
00:30:11.262 Removing: /var/run/dpdk/spdk_pid4086344
00:30:11.262 Removing: /var/run/dpdk/spdk_pid4096348
00:30:11.262 Removing: /var/run/dpdk/spdk_pid4101523
00:30:11.263 Removing: /var/run/dpdk/spdk_pid4105310
00:30:11.263 Removing: /var/run/dpdk/spdk_pid4105986
00:30:11.263 Removing: /var/run/dpdk/spdk_pid4112350
00:30:11.263 Removing: /var/run/dpdk/spdk_pid4118730
00:30:11.263 Removing: /var/run/dpdk/spdk_pid4118732
00:30:11.263 Removing: /var/run/dpdk/spdk_pid4119587
00:30:11.263 Removing: /var/run/dpdk/spdk_pid4120346
00:30:11.263 Removing: /var/run/dpdk/spdk_pid4121264
00:30:11.263 Removing: /var/run/dpdk/spdk_pid4121850
00:30:11.263 Removing: /var/run/dpdk/spdk_pid4121951
00:30:11.263 Removing: /var/run/dpdk/spdk_pid4122181
00:30:11.263 Removing: /var/run/dpdk/spdk_pid4122196
00:30:11.263 Removing: /var/run/dpdk/spdk_pid4122213
00:30:11.263 Removing: /var/run/dpdk/spdk_pid4123114
00:30:11.263 Removing: /var/run/dpdk/spdk_pid4124026
00:30:11.263 Removing: /var/run/dpdk/spdk_pid4124947
00:30:11.263 Removing: /var/run/dpdk/spdk_pid4125413
00:30:11.263 Removing: /var/run/dpdk/spdk_pid4125421
00:30:11.263 Removing: /var/run/dpdk/spdk_pid4125736
00:30:11.263 Removing: /var/run/dpdk/spdk_pid4126894
00:30:11.263 Removing: /var/run/dpdk/spdk_pid4128093
00:30:11.263 Removing: /var/run/dpdk/spdk_pid4136414
00:30:11.263 Removing: /var/run/dpdk/spdk_pid4136668
00:30:11.263 Removing: /var/run/dpdk/spdk_pid4140912
00:30:11.263 Removing: /var/run/dpdk/spdk_pid4146557
00:30:11.263 Removing: /var/run/dpdk/spdk_pid4149185
00:30:11.263 Removing: /var/run/dpdk/spdk_pid4160060
00:30:11.263 Removing: /var/run/dpdk/spdk_pid4168964
00:30:11.263 Removing: /var/run/dpdk/spdk_pid4170576
00:30:11.263 Removing: /var/run/dpdk/spdk_pid4171500
00:30:11.263 Removing: /var/run/dpdk/spdk_pid4188091
00:30:11.263 Removing: /var/run/dpdk/spdk_pid4191857
00:30:11.263 Removing: /var/run/dpdk/spdk_pid43535
00:30:11.263 Removing: /var/run/dpdk/spdk_pid53948
00:30:11.263 Removing: /var/run/dpdk/spdk_pid57784
00:30:11.263 Removing: /var/run/dpdk/spdk_pid63749
00:30:11.263 Removing: /var/run/dpdk/spdk_pid65065
00:30:11.263 Removing: /var/run/dpdk/spdk_pid66611
00:30:11.263 Removing: /var/run/dpdk/spdk_pid70896
00:30:11.263 Removing: /var/run/dpdk/spdk_pid74938
00:30:11.263 Removing: /var/run/dpdk/spdk_pid82344
00:30:11.263 Removing: /var/run/dpdk/spdk_pid82470
00:30:11.263 Removing: /var/run/dpdk/spdk_pid87003
00:30:11.263 Removing: /var/run/dpdk/spdk_pid87237
00:30:11.263 Removing: /var/run/dpdk/spdk_pid87465
00:30:11.263 Removing: /var/run/dpdk/spdk_pid87752
00:30:11.263 Removing: /var/run/dpdk/spdk_pid87882
00:30:11.263 Removing: /var/run/dpdk/spdk_pid92184
00:30:11.263 Removing: /var/run/dpdk/spdk_pid92754
00:30:11.263 Removing: /var/run/dpdk/spdk_pid97597
00:30:11.263 Clean
00:30:11.520 22:46:35 -- common/autotest_common.sh@1451 -- # return 0
00:30:11.520 22:46:35 -- spdk/autotest.sh@384 -- # timing_exit post_cleanup
00:30:11.520 22:46:35 -- common/autotest_common.sh@728 -- # xtrace_disable
00:30:11.520 22:46:35 -- common/autotest_common.sh@10 -- # set +x
00:30:11.520 22:46:35 -- spdk/autotest.sh@386 -- # timing_exit autotest
00:30:11.520 22:46:35 -- common/autotest_common.sh@728 -- # xtrace_disable
00:30:11.520 22:46:35 -- common/autotest_common.sh@10 -- # set +x
00:30:11.520 22:46:35 -- spdk/autotest.sh@387 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt
00:30:11.520 22:46:35 -- spdk/autotest.sh@389 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]]
00:30:11.520 22:46:35 -- spdk/autotest.sh@389 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log
00:30:11.520 22:46:35 -- spdk/autotest.sh@391 -- # hash lcov
00:30:11.520 22:46:35 -- spdk/autotest.sh@391 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]]
00:30:11.520 22:46:35 -- spdk/autotest.sh@393 -- # hostname
00:30:11.520 22:46:35 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-wfp-08 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info
00:30:11.777 geninfo: WARNING: invalid characters removed from testname!
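[Editor's note] The autotest.sh trace above and below shows the coverage post-processing flow: lcov captures post-test counters from the spdk tree (autotest.sh@393 above), merges them with the pre-test baseline, then filters the merged tracefile in several passes (autotest.sh@394-399 below). A minimal sketch of the same flow, assuming lcov is on PATH; the OUT and SPDK variables are stand-ins introduced here for readability, and the --rc options from the trace are omitted for brevity:

  # Sketch of the capture/merge/filter sequence traced in the surrounding log.
  OUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  lcov --no-external -q -c -d "$SPDK" -t "$(hostname)" -o "$OUT/cov_test.info"      # capture counters after the run
  lcov -q -a "$OUT/cov_base.info" -a "$OUT/cov_test.info" -o "$OUT/cov_total.info"  # merge baseline + test captures
  for glob in '*/dpdk/*' '/usr/*' '*/examples/vmd/*' '*/app/spdk_lspci/*' '*/app/spdk_top/*'; do
      lcov -q -r "$OUT/cov_total.info" "$glob" -o "$OUT/cov_total.info"             # drop out-of-scope sources
  done

Each -r pass rewrites cov_total.info in place, removing one family of paths (bundled DPDK, system headers, sample apps) so the final report covers only SPDK's own code.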
00:30:33.689 22:46:55 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:30:33.948 22:46:57 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:30:35.912 22:46:59 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:30:37.814 22:47:01 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:30:39.715 22:47:03 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:30:41.609 22:47:05 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:30:42.980 22:47:06 -- spdk/autotest.sh@400 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
00:30:42.980 22:47:06 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:30:42.980 22:47:06 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]]
00:30:42.980 22:47:06 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:30:42.980 22:47:06 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:30:42.980 22:47:06 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:30:42.980 22:47:06 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:30:42.980 22:47:06 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:30:42.980 22:47:06 -- paths/export.sh@5 -- $ export PATH
00:30:42.980 22:47:06 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:30:42.980 22:47:06 -- common/autobuild_common.sh@443 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
00:30:43.238 22:47:06 -- common/autobuild_common.sh@444 -- $ date +%s
00:30:43.238 22:47:06 -- common/autobuild_common.sh@444 -- $ mktemp -dt spdk_1721076426.XXXXXX
00:30:43.238 22:47:06 -- common/autobuild_common.sh@444 -- $ SPDK_WORKSPACE=/tmp/spdk_1721076426.SWf7B1
00:30:43.238 22:47:06 -- common/autobuild_common.sh@446 -- $ [[ -n '' ]]
00:30:43.238 22:47:06 -- common/autobuild_common.sh@450 -- $ '[' -n '' ']'
00:30:43.238 22:47:06 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/'
00:30:43.238 22:47:06 -- common/autobuild_common.sh@457 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp'
00:30:43.238 22:47:06 -- common/autobuild_common.sh@459 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
00:30:43.238 22:47:06 -- common/autobuild_common.sh@460 -- $ get_config_params
00:30:43.238 22:47:06 -- common/autotest_common.sh@396 -- $ xtrace_disable
00:30:43.238 22:47:06 -- common/autotest_common.sh@10 -- $ set +x
00:30:43.239 22:47:06 -- common/autobuild_common.sh@460 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user'
00:30:43.239 22:47:06 -- common/autobuild_common.sh@462 -- $ start_monitor_resources
00:30:43.239 22:47:06 -- pm/common@17 -- $ local monitor
00:30:43.239 22:47:06 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:30:43.239 22:47:06 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:30:43.239 22:47:06 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:30:43.239 22:47:06 -- pm/common@21 -- $ date +%s
00:30:43.239 22:47:06 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:30:43.239 22:47:06 -- pm/common@21 -- $ date +%s
00:30:43.239 22:47:06 -- pm/common@25 -- $ sleep 1
00:30:43.239 22:47:06 -- pm/common@21 -- $ date +%s
00:30:43.239 22:47:06 -- pm/common@21 -- $ date +%s
00:30:43.239 22:47:06 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721076426
00:30:43.239 22:47:06 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721076426
00:30:43.239 22:47:06 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721076426
00:30:43.239 22:47:06 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721076426
00:30:43.239 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721076426_collect-vmstat.pm.log
00:30:43.239 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721076426_collect-cpu-load.pm.log
00:30:43.239 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721076426_collect-cpu-temp.pm.log
00:30:43.239 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721076426_collect-bmc-pm.bmc.pm.log
00:30:44.172 22:47:07 -- common/autobuild_common.sh@463 -- $ trap stop_monitor_resources EXIT
00:30:44.172 22:47:07 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j96
00:30:44.172 22:47:07 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:30:44.172 22:47:07 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]]
00:30:44.172 22:47:07 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]]
00:30:44.172 22:47:07 -- spdk/autopackage.sh@19 -- $ timing_finish
00:30:44.172 22:47:07 -- common/autotest_common.sh@734 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:30:44.172 22:47:07 -- common/autotest_common.sh@735 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']'
00:30:44.172 22:47:07 -- common/autotest_common.sh@737 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt
00:30:44.172 22:47:08 -- spdk/autopackage.sh@20 -- $ exit 0
00:30:44.172 22:47:08 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources
00:30:44.172 22:47:08 -- pm/common@29 -- $ signal_monitor_resources TERM
00:30:44.172 22:47:08 -- pm/common@40 -- $ local monitor pid pids signal=TERM
00:30:44.172 22:47:08 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:30:44.172 22:47:08 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]]
00:30:44.172 22:47:08 -- pm/common@44 -- $ pid=212895
00:30:44.172 22:47:08 -- pm/common@50 -- $ kill -TERM 212895
00:30:44.172 22:47:08 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:30:44.172 22:47:08 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]]
00:30:44.172 22:47:08 -- pm/common@44 -- $ pid=212897
00:30:44.172 22:47:08 -- pm/common@50 -- $ kill -TERM 212897
00:30:44.172 22:47:08 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:30:44.172 22:47:08 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]]
00:30:44.172 22:47:08 -- pm/common@44 -- $ pid=212898
00:30:44.172 22:47:08 -- pm/common@50 -- $ kill -TERM 212898
00:30:44.172 22:47:08 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:30:44.172 22:47:08 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]]
00:30:44.172 22:47:08 -- pm/common@44 -- $ pid=212921
00:30:44.172 22:47:08 -- pm/common@50 -- $ sudo -E kill -TERM 212921
00:30:44.172 + [[ -n 3907481 ]]
00:30:44.172 + sudo kill 3907481
00:30:44.182 [Pipeline] }
00:30:44.202 [Pipeline] // stage
00:30:44.208 [Pipeline] }
00:30:44.259 [Pipeline] // timeout
00:30:44.267 [Pipeline] }
00:30:44.285 [Pipeline] // catchError
00:30:44.290 [Pipeline] }
00:30:44.308 [Pipeline] // wrap
00:30:44.315 [Pipeline] }
00:30:44.335 [Pipeline] // catchError
00:30:44.345 [Pipeline] stage
00:30:44.347 [Pipeline] { (Epilogue)
00:30:44.367 [Pipeline] catchError
00:30:44.369 [Pipeline] {
00:30:44.387 [Pipeline] echo
00:30:44.389 Cleanup processes
00:30:44.396 [Pipeline] sh
00:30:44.683 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:30:44.683 213019 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/sdr.cache
00:30:44.683 213295 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:30:44.698 [Pipeline] sh
00:30:44.980 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:30:44.980 ++ grep -v 'sudo pgrep'
00:30:44.980 ++ awk '{print $1}'
00:30:44.980 + sudo kill -9 213019
00:30:44.993 [Pipeline] sh
00:30:45.277 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:30:55.324 [Pipeline] sh
00:30:55.609 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:30:55.609 Artifacts sizes are good
00:30:55.625 [Pipeline] archiveArtifacts
00:30:55.633 Archiving artifacts
00:30:55.785 [Pipeline] sh
00:30:56.070 + sudo chown -R sys_sgci /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:30:56.088 [Pipeline] cleanWs
00:30:56.099 [WS-CLEANUP] Deleting project workspace...
00:30:56.099 [WS-CLEANUP] Deferred wipeout is used...
00:30:56.107 [WS-CLEANUP] done
00:30:56.110 [Pipeline] }
00:30:56.135 [Pipeline] // catchError
00:30:56.151 [Pipeline] sh
00:30:56.438 + logger -p user.info -t JENKINS-CI
00:30:56.448 [Pipeline] }
00:30:56.465 [Pipeline] // stage
00:30:56.471 [Pipeline] }
00:30:56.489 [Pipeline] // node
00:30:56.495 [Pipeline] End of Pipeline
00:30:56.519 Finished: SUCCESS
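
[Editor's note] The pm/common trace near the end of the log (signal_monitor_resources) stops each resource monitor through its pidfile rather than by process name. The loop below is a reconstruction from that trace, not the pm/common script itself; the monitor names, the power output directory, and the TERM signal are taken from the log, while the POWER variable and loop body are assumptions made here for illustration:

  # Sketch of the pidfile-based teardown traced at pm/common@42-50.
  POWER=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power
  for monitor in collect-cpu-load collect-vmstat collect-cpu-temp collect-bmc-pm; do
      pidfile=$POWER/$monitor.pid
      [[ -e $pidfile ]] || continue    # monitor was never started; nothing to stop
      pid=$(<"$pidfile")               # each monitor recorded its own PID at startup
      kill -TERM "$pid"                # graceful stop; the log shows 'sudo -E kill' for collect-bmc-pm, which runs as root
  done

Because each monitor writes a .pid file when launched (the pm/common@21 entries earlier in the log), the epilogue can address them individually instead of sweeping by pattern.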
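[Editor's note] The "Cleanup processes" step in the Epilogue finds anything still running out of the workspace and force-kills it. Condensed into a standalone sketch, the three xtrace'd commands form one pipeline; the WS and pids variables are introduced here for readability, and the trailing '|| true' is an assumption to keep the step from failing when the list is empty:

  # Sketch of the Epilogue's workspace process sweep.
  WS=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  pids=$(sudo pgrep -af "$WS" | grep -v 'sudo pgrep' | awk '{print $1}')   # PIDs of survivors, minus the pgrep itself
  sudo kill -9 $pids || true                                               # SIGKILL whatever remains

In this run the sweep caught the leftover ipmitool sdr dump (PID 213019) spawned by collect-bmc-pm and killed it before the artifacts were compressed and archived.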